A PRELIMINARY ANALYSIS OF CHINOOK SALMON CODED-WIRE TAG RECOVERY DATA FROM IRON GATE, TRINITY RIVER AND COLE RIVERS HATCHERIES, BROOD YEARS 1978-2004
David G. Hankin$^1$ and Eric Logan$^2$
$^1$Humboldt State University, Department of Fisheries Biology, Arcata, CA 95521
$^2$P.O. Box 154, Salyer, CA 95563
Prepared under contract for The Hoopa Valley Tribal Council and the Arcata Office, U.S. Fish and Wildlife Service, via the Humboldt State University Sponsored Programs Foundation. Financial support provided by the U.S. Bureau of Reclamation, Klamath Basin Area Office.
Cite as Hankin, D.G. and E. Logan. 2010.
January 9, 2010
## Contents
1 Introduction
2 Data Sources and Manipulations
2.1 Releases
2.2 Ocean Recoveries
2.3 Non-Landed Ocean Mortalities
2.3.1 KOHM Methods, with Minor Modifications
2.3.2 Alternative Methods
2.4 Freshwater Recoveries
2.5 Lengths at Hatchery Return
2.5.1 TRH
2.5.2 IGH
2.5.3 CRH
2.6 Mean Weights at Release
3 Analysis and Simulation Methods
3.1 Basic Cohort Analysis/Estimation Methods
3.1.1 Ocean Fishery Cut-Off (Birth-) Date
3.1.2 Pooling Across Release Groups Within Brood Years
3.2 Simulation of Estimator Variances vs CWT Release Group Size
3.3 Effects of River Flows
3.4 Alternative Mixes of Fingerling and Yearling Releases
4 Results
4.1 Non-Landed Mortalities
4.1.1 Importance for Parameter Estimation
4.1.2 Alternative Methods for Calculations of Non-Landed Mortalities
4.2 Importance of Cut-off Date
4.3 Effect of Release Type
4.3.1 Survival Rates versus Release Type
4.3.2 Survival Rates versus Size at Release
4.3.3 Maturation Probabilities versus Release Type
4.3.4 Size at Age versus Release Type
Chapter 1
Introduction
This report constitutes the second of two deliverable products developed through the “Iron Gate Hatchery” contract awarded by the Hoopa Valley Tribal Council to Dr. David Hankin via the Humboldt State University Sponsored Programs Foundation. The first deliverable product, consisting of an overview of the annual salmon and steelhead propagation process at Iron Gate Hatchery, including recommendations for modest changes that would assist introduction of a constant fractional marking program, was delivered in January 2008 and was finalized in March 2008 (Logan and Hankin 2008).
This second deliverable product consists of a preliminary analysis of coded-wire tag (CWT) recovery data compiled from about 25 years of releases from each of three hatcheries and four races of Chinook salmon: fall Chinook salmon released from Iron Gate Hatchery (upper Klamath River, California), spring and fall Chinook salmon released from Trinity River Hatchery (upper Trinity River, California), and spring Chinook salmon released from Cole Rivers Hatchery (upper Rogue River, Oregon). The primary objective of these analyses was to determine the degree to which estimated brood year survival rates for comparable releases were correlated with one another across hatcheries and races, so as to (a) evaluate the potential use of hatchery fish survival rates as a proxy for ocean survival conditions shortly following release, and (b) determine whether the ratios of survival rates for fish released from Iron Gate Hatchery as compared to Trinity River Hatchery have recently become smaller than in earlier years, possibly due to changes in water management in the upper Klamath River. A recent stock-recruitment analysis of Klamath River fall Chinook salmon (STT 2005) used mean brood year survival rates of fingerling releases of fall Chinook salmon from Iron Gate and Trinity River hatcheries as a proxy for post-rearing survival conditions and thereby considerably improved the fit of a Ricker stock-recruitment model for Klamath fall Chinook salmon. Presumably, variation in post-rearing survival is primarily a reflection of variation in ocean survival conditions. If so, then one might expect to see strong covariation of survival rates, for similar release types and
races, for CWT groups released from Iron Gate, Trinity River, and perhaps also Cole Rivers hatcheries.
In addition to the principal focus on survival rates, analyses of CWT recovery data were also designed to shed light on the degree to which age-specific maturation probabilities and size at age were affected by stock type and by release strategy. In an earlier analysis of just three or four brood years of CWT recovery data for Chinook salmon released from California and Oregon hatcheries, Hankin (1990) believed he had identified consistent effects of release type on maturation schedules, fishery exploitation and size at age. It was of interest to determine whether or not these apparent patterns would be reinforced or modified based on analysis of this much longer, approximately 25 year, period of brood releases.
The analyses described in this report were carried out with future improvement of marking at Iron Gate Hatchery as a principal interest, with a specific objective of introducing a constant fractional marking (CFM) program, ideally at a 25% marking rate that would match that used at Trinity River Hatchery since the 2000 brood year. At Iron Gate Hatchery, as at many other Chinook salmon hatcheries, juvenile Chinook are released as subyearlings during early summer (e.g., June releases of *fingerlings*) or during early fall (e.g., October releases of *yearlings*). Fingerlings typically average about 90 fish per pound (about 5 g) at release whereas yearlings typically average 9-10 fish per pound (about 45-50 g) at release. At Iron Gate Hatchery (IGH), annual releases consist of about 5 million fingerling and 900 thousand yearling fall Chinook salmon. At Trinity River Hatchery (TRH), annual releases of fall Chinook salmon are about 2 million fingerlings and 900 thousand yearlings, and annual releases of spring Chinook salmon are about 1 million fingerlings and 400 thousand yearlings. Cole Rivers Hatchery (CRH) annually releases about 1.6 million spring Chinook salmon “smolts”, subyearlings released from August through October; these fish are similar in size to the yearlings released from IGH and TRH.
The large number of fingerlings released from IGH creates substantial logistical issues with respect to implementation of a CFM program. Therefore, we developed alternative mixes of fingerling and yearling releases that, based on our CWT analyses, would generate similar total catches and escapement. If fewer fingerlings but more yearlings were released, then implementation of a 25% CFM program would be more feasible.
Over the past 25 years there have been substantial changes in ocean and freshwater fisheries regulations. In particular, restrictions to ocean fishing due to legal recognition of the special fishing rights of the Hoopa and Yurok tribal members in the Klamath and Trinity rivers (Parravano v. Babbitt 1995) have reduced the number of CWT recoveries in ocean fisheries and have thereby reduced the accuracy with
which ocean fishery exploitation rates can be estimated from a CWT release group of fixed size. Therefore, we engaged in some simplified but, we think, useful simulations of variances of estimated life history and fishery parameters based on standard cohort analysis methods as a function of CWT release group size. These simulation results can provide substantial guidance concerning appropriate release group sizes given reduced ocean fisheries.
An initial draft of this report was distributed on 12 June 2008 and considered IGH and TRH CWT release groups through the 2001 brood year. This final version of the report includes updated analyses (where judged warranted) based on IGH and TRH CWT releases through the 2004 brood year. Among other things, these updated analyses include a more explicit focus on the impact of “cut-off date” (see sections 3.1.1 and 4.2) on estimated ocean fishery exploitation rates. Some analyses were judged not to require updating and therefore remain based on CWT releases through the 2001 brood year. Analysis results for the 2004 brood year are based on returns through age 4 only; as few Klamath/Trinity Chinook salmon survive to mature at age 5, however, analysis results for 2004 brood year releases should be closely comparable to those for brood years with CWT recovery data complete through age 5.
Chapter 2
Data Sources and Manipulations
We compiled CWT release and recovery data for four hatchery stocks of Chinook salmon: Iron Gate Hatchery (IGH) and Trinity River Hatchery (TRH) fall Chinook salmon; TRH spring Chinook salmon; and Cole Rivers Hatchery (CRH) spring Chinook salmon. For each stock, the basic categories of data types included (a) release information; (b) estimated ocean recoveries; (c) estimated freshwater returns; and (d) lengths at hatchery return.
2.1 Releases
Information about hatchery releases of CWT Chinook was obtained from the Pacific States Marine Fisheries Commission’s (PSMFC) coded-wire tag database, known as the Regional Mark Information System (RMIS). Information was obtained for IGH and TRH fall Chinook releases from brood years 1978 through 2006, TRH spring Chinook releases from brood years 1976 through 2006, and CRH spring Chinook releases from brood years 1975 through 2005.
Releases from each hatchery and brood year often included several coded-wire tag (CWT) release groups. In general, the RMIS release table includes one data record per CWT release group. Release information for each CWT release group includes tag code, species, race, brood year, release date, average size, release group size (number released with adipose fin clip and coded-wire tag), agency, hatchery, and release location.
In most cases RMIS information was found to be accurate and complete. However, we did find and correct some important errors in RMIS release data. In particular, three 2001 brood year spring Chinook CWT release groups (093515, 093517, and 093518) from CRH were erroneously labeled as fall Chinook release groups. Also, we found that RMIS release records for tag codes 060830 and 060831 released from
IGH were erroneous duplicates of the release records for tag codes 063830 and 063831.
It was often challenging to determine, from RMIS release location data, whether individual CWT groups were released directly from or near the hatchery rearing location (on-site releases). Fish released at locations distant from the hatchery of rearing (off-site releases) tend to stray (fail to return to hatchery of origin) at much higher rates than fish released on-site. Because stray escapement of hatchery fish to natural spawning areas may not be adequately estimated, estimates of total freshwater escapement for off-site releases may be negatively biased as compared to those for hatchery CWT groups released on-site, thereby also affecting estimated survival rates and other parameters. In most, but not all, cases, off-site releases were identifiable by the entries in the release location code field of the CWT release table. We restricted our CWT analyses to those groups which we believed to have been released on-site. In the Klamath-Trinity and Rogue systems relatively few CWT groups have been released off-site, and these have often been small (e.g., IGH pond program in the 1980s). Essentially no CWT groups are currently released off-site from any of these facilities.
2.2 Ocean Recoveries
RMIS was the original source of all records of ocean recoveries of CWT Chinook. For TRH spring Chinook, we initially acquired CWT recovery data from RMIS, but then processed these data through modified versions of KOHM (Klamath Ocean Harvest Model) programs. Because CWT recovery data have been subjected to intensive scrutiny for management of Klamath River Chinook, we took advantage of summarized records of ocean recoveries of IGH and TRH CWT fall Chinook that had been prepared for KOHM purposes (A. Grover, CDFG, pers. comm.). The KOHM records were re-processed in our project so as to produce the statistical summaries required for our calculations. KOHM summaries were generally by hatchery, release type and month, whereas summaries required for our project analyses were generally by individual CWT code and year.
For most analyses in our project, it was adequate that KOHM ocean CWT recovery records were summarized by month of capture, but other analyses required date of ocean captures. To produce such data, we modified and ran one of the KOHM initial programs to generate a table that included capture date for individual observed recoveries. The output records were otherwise identical to those produced in the original KOHM table.
Individual ocean CWT recoveries were labeled as occurring before or after an assumed cut-off date for river entry for fish maturing in that year. For TRH and CRH spring Chinook, we assumed that age $i$ maturing fish would enter freshwater prior to June 15, and for IGH and TRH fall Chinook we (initially) assumed that all
age $i$ maturing fish entered freshwater prior to September 1 (consistent with KOHM assumptions). For these *cut-off dates* we determined the estimated number of ocean recoveries at age $i$ prior to and following the cut-off dates. Because application of our cohort analysis methods (see below) to CWT recovery data using the September 1 cut-off date produced some implausible results, particularly for post-maturation ocean fishery exploitation rates of TRH fall Chinook (see section 4.2), we explored the consequences of using alternative cut-off dates of 15 September and 01 October. As KOHM data summaries are made on a monthly basis, re-expression of landings and non-landed mortalities was straightforward for the 01 October cut-off date. Generating data summaries for a 15 September cut-off date was more complicated and is explained below:
- **Step 1.** From the existing KOHM processed ocean data, we pooled pre- and post-September 1 impacts by CWT code and age of capture. Pre- and post-September 1 NLM were pooled in the same way.
- **Step 2.** We extracted RMIS ocean recovery records matching the CWT codes and capture ages in tables generated in Step 1. We summed the CWT recovery sampling expansions for pre- and post-September 15 capture dates. From these sums we calculated proportions of CWT recoveries occurring pre- and post-September 15 for the CWT codes and ages of interest.
- **Step 3.** We multiplied the pooled impacts and NLM from Step 1 by the pre- and post-September 15 catch proportions calculated in Step 2. In this way we repartitioned the impacts and NLM from pre- and post-September 1 intervals to pre- and post-September 15 intervals for the CWT codes and ages of interest (a brief R sketch follows these steps).
- **Step 4.** Results from Step 3 were integrated into the table containing ocean recovery data for recovery years prior to 2007.
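To make the repartitioning concrete, the following minimal R sketch illustrates Steps 1-3 for two hypothetical CWT codes; the object names and all numerical values are invented for illustration and are not the project data or code.

```r
# Hypothetical illustration of Steps 1-3: repartition pooled impacts and NLM
# into pre-/post-September 15 intervals using RMIS sampling expansions.

# Step 1 output: impacts and NLM pooled across the old September 1 intervals
pooled <- data.frame(cwt_code = c("codeA", "codeB"), age = c(3, 3),
                     impacts = c(120.4, 88.7), nlm = c(14.2, 9.6))

# Step 2: summed RMIS sampling expansions before/after September 15
expansions <- data.frame(cwt_code = c("codeA", "codeB"), age = c(3, 3),
                         exp_pre  = c(310.2, 150.8),  # capture date before Sep 15
                         exp_post = c(45.6, 60.1))    # capture date Sep 15 or later
expansions$prop_pre <- with(expansions, exp_pre / (exp_pre + exp_post))

# Step 3: apply the catch proportions to the pooled totals
d <- merge(pooled, expansions, by = c("cwt_code", "age"))
d$impacts_pre  <- d$impacts * d$prop_pre
d$impacts_post <- d$impacts * (1 - d$prop_pre)
d$nlm_pre      <- d$nlm * d$prop_pre
d$nlm_post     <- d$nlm * (1 - d$prop_pre)
```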
2.3 Non-Landed Ocean Mortalities
2.3.1 KOHM Methods, with Minor Modifications
For the brood years of CWT releases explored in this project, ocean fisheries have been regulated such that hooked fish below legal minimum length limits (which may differ between sport and commercial fisheries) must be released. Some unknown fraction of these hooked and released fish will die. No records are kept of the numbers of fish that are hooked and released, so non-landed mortalities are therefore generally *imputed* rather than estimated from CWT recovery data. Members of the KOHM
Modeling Team have devised a method for imputing ocean non-landed mortalities of IGH and TRH fall Chinook that relies on observations of landed CWT’d fish; incorporates age, release type, month of capture, expected mean and variance in length for a given month, and legal minimum retention length (which may vary by year and/or fishery type); and assumes fishery-specific hooking mortality and drop-off rates. Details about the derivation and use of the KOHM method for estimating ocean non-landed mortalities are given in Goldwasser et al. (2001).
In our project we modified the KOHM methods to estimate ocean non-landed mortalities for IGH and TRH fall Chinook, and applied these modified methods also to TRH spring Chinook. The modifications mainly involved minor changes in the temporal strata within which coded-wire tag recoveries and non-landed mortalities were summed. In our cohort analyses of CWT recovery data, we required non-landed mortalities by age and fishery type for two time periods (pre- and post- cut-off date) for each year for each CWT release group.
An important table in the KOHM method of estimating ocean non-landed mortalities is called Plegal. Table Plegal is used, essentially, to calculate an answer to this question: “In a cohort of Klamath River Chinook salmon originating from a release of fingerlings or yearlings, respectively, what is the expected fraction of the fish that have lengths above some given legal minimum length $l_{\text{min}}$ at age $i$ in month $j$?” Values in the Plegal table were derived by KOHM personnel through analysis of many sets of lengths of ocean-caught IGH and TRH Fall Chinook for specific combinations of release group, age, and month of capture. We recognize that there may be unknown errors in using the existing Plegal table, based on IGH and TRH fall Chinook, for imputing non-landed mortalities for TRH spring Chinook. However, a Plegal table for TRH spring Chinook has not yet been derived, and we believed that it would be better to apply the existing Plegal table rather than to ignore non-landed mortalities for releases of TRH spring Chinook salmon. Let $p_{\text{legal}}$ be the expected fraction of fish belonging to a CWT release group that are above the legal minimum size limit at age $i$ in month $j$, and let $\hat{H}_{\text{legal}}$ be the estimated number of (legal-sized) fish that have been harvested in a fishery for some month. Then, assuming that legal and sublegal fish of age $i$ are contacted at the same rate in ocean fisheries, the total number of fish from that CWT group that were contacted in that fishery/month stratum could be estimated as:
$$\hat{C}_{\text{total}} = \frac{\hat{H}_{\text{legal}}}{p_{\text{legal}}}$$
The imputed number of contacts with sublegal fish would therefore be:
$$\hat{C}_{\text{sublegal}} = \hat{C}_{\text{total}} - \hat{H}_{\text{legal}} = \frac{\hat{H}_{\text{legal}}(1 - p_{\text{legal}})}{p_{\text{legal}}}$$
and the imputed number of non-landed mortalities would be:
\[
\tilde{D}_{sublegal} = \hat{C}_{sublegal} \cdot p_{hook},
\]
where \(p_{hook}\) is hooking mortality rate, set at 0.14 for recreational fisheries and 0.26 for commercial fisheries.
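As a concrete illustration of the three equations above, here is a minimal R sketch for a single fishery/month stratum; the function name and the input values are ours for illustration and are not part of the KOHM code.

```r
# Minimal sketch of the imputation for one fishery/month stratum.
# Inputs: estimated legal-sized harvest and the Plegal value for that stratum.
impute_nlm <- function(H_legal, p_legal, fishery = c("recreational", "commercial")) {
  fishery <- match.arg(fishery)
  p_hook  <- if (fishery == "recreational") 0.14 else 0.26  # hooking mortality rates
  C_total    <- H_legal / p_legal   # estimated total contacts, legal and sublegal
  C_sublegal <- C_total - H_legal   # imputed contacts with sublegal fish
  C_sublegal * p_hook               # imputed non-landed mortalities
}

impute_nlm(H_legal = 40, p_legal = 0.65, fishery = "commercial")
# 40 / 0.65 - 40 = 21.5 sublegal contacts; 21.5 * 0.26 = 5.6 imputed mortalities
```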
Non-landed ocean mortalities of CRH spring Chinook were not calculated. A spreadsheet with cohort reconstruction of CRH spring Chinook CWT release groups, provided by Tom Satterthwaite, ODFW, did include estimates of ocean non-landed mortalities. However, those estimates were calculated by methods very different from, and not comparable to, those used by KOHM personnel to calculate ocean non-landed mortalities for IGH and TRH fall Chinook. Therefore, when comparing estimated life history or fishery parameters across IGH, TRH and CRH stocks of Chinook salmon, we omitted imputed non-landed ocean mortalities for CRH spring Chinook. As noted in section 4.1.1, omission of non-landed mortalities (as calculated using KOHM methods) would lead to small negative bias in estimates of survival rates from release to ocean age 2 and small positive bias in estimates of age-specific conditional maturation probabilities.
2.3.2 Alternative Methods
We believe that existing KOHM methods for imputing non-landed mortalities could be improved in two respects. First, the method described above results in zero imputed non-landed mortalities whenever there are no estimated landed recoveries of legal-sized fish. At age 2, this means that imputed non-landed mortalities are almost always zero, a result that seems highly implausible, especially for release groups at large in the early 1980s when ocean fishery exploitation rates for legal-sized fish often exceeded 50%. Second, the use of a fixed set of \(p_{legal}\) values for a particular month and release type invokes an implicit assumption that interannual variation in size at age has no effect on \(p_{legal}\) values. Our analyses of mean lengths at hatchery return suggest, however, that there is dramatic interannual variation in mean length at age 3. This means that in some years an unusually large fraction of age 3 fish might exceed \(l_{min}\) and would have high \(p_{legal}\) (with correspondingly lower non-landed mortalities), whereas in other years an unusually small fraction of age 3 fish might exceed \(l_{min}\) and would have small \(p_{legal}\) (with correspondingly higher non-landed mortalities). In the Results section of this report, we briefly consider an alternative approach to imputation of age 2 non-landed mortalities, and we describe procedures and results of using mean lengths at hatchery return to generate year- and month-specific values of \(p_{legal}\) that we believe could produce more plausible imputations of non-landed mortalities at age 3.
2.4 Freshwater Recoveries
Complete records of freshwater CWT recoveries for the Klamath-Trinity basin were not available from RMIS. Summarized data for freshwater recoveries of IGH and TRH CWT fall Chinook CWT groups were instead provided by KOHM personnel. We assembled and summarized our own records of freshwater CWT recoveries of TRH spring Chinook from brood years 1976 through 2006 from data provided by CDFG and the Yurok and Hoopa Tribal Fisheries departments. Total CWT recoveries in the upper Trinity sport fishery and natural spawning areas were estimated by CDFG personnel based on Willow Creek weir mark-recapture data, hatchery returns, sport angler surveys and reward tag returns, and actual CWT recoveries. CWT recoveries in the Klamath sport fishery and natural spawning areas were also based partly on observed CWT recoveries and partly on estimation from carcass and spawner surveys, angler surveys, reward tag returns, and other sources of information.
Records of CWT spring Chinook released from CRH and recovered at CRH were available from the RMIS. However, RMIS records of recoveries of CRH CWT spring Chinook in the Rogue River sport catch and natural area spawning were incomplete or lacking. Our primary source for CRH freshwater recovery data was a spreadsheet provided by Tom Satterthwaite (ODFW) for cohort reconstruction of CRH CWT release groups. The spreadsheet was translated into a database table, and the freshwater recovery data were extracted for use in the current analysis. Some values reported in the CRH cohort spreadsheet (straying, prespawning mortality and river harvest) were often not based on actual CWT recoveries, but were derived by Satterthwaite. Pre-spawning mortalities, often substantial in the Rogue, were derived by regression analyses for years in which there were not field programs designed to estimate these mortalities.
2.5 Lengths at Hatchery Return
For each tag code and age of hatchery return, we attempted to calculate the mean and standard deviation of lengths at hatchery return; this information is not consistently available in RMIS records. In some cases we calculated means and variances of hatchery lengths from raw length data (hatchery records). In other cases, we relied upon previously calculated length statistics available in published or unpublished reports. Below we summarize the steps that we took to generate a fairly complete series of length statistics for individual CWT groups at hatchery return.
2.5.1 TRH
Wade Sinnen, CDFG, provided a data table containing 66,180 individual records of Chinook returning to TRH in years 1990 through 2008. From that table we summarized 62,162 records of TRH spring and fall Chinook with length measurements and deciphered CWT codes (21,925 spring Chinook, 40,236 fall Chinook). Wade Sinnen also provided three tables with individual records of Chinook returns to TRH in 1987, 1988, and 1989. The original tables had 13,153 records. We summarized 10,549 lengths from measured TRH CWT spring and fall Chinook (3,634 spring Chinook, 6,915 fall Chinook). Mark Zuzpan, CDFG, provided a table of spring and fall Chinook returns to TRH in years 1983 through 1985 and 1987 through 1993. The original table had 29,529 records. We summarized 7,025 CWT Chinook length records from return years 1983 through 1985 (5,471 fall Chinook, 1,554 spring Chinook) from Zuzpan’s table. We also worked up CDFG TRH hatchery records (including date, tag code, and length) for CWT spring and fall Chinook returning to TRH in 1986: 2,340 records (1,327 fall Chinook, 1,013 spring Chinook). Finally, length statistics from hatchery returns of CWT groups of TRH spring and fall Chinook from brood years 1977 through 1980 were based on summaries presented in Hankin (1990, his Table 5, mean lengths only).
2.5.2 IGH
Mark Hampton, CDFG, provided a spreadsheet of IGH CWT fall Chinook hatchery return data from years 1986 through 2006. Morgan Knechtle, CDFG, provided return data for 2007 and 2008. We translated these data to database format and summarized 16,049 lengths. Length statistics for returns of CWT fall Chinook to IGH in 1984 were calculated from hardcopy IGH hatchery records, based on 444 records. Length statistics for IGH CWT fall Chinook released from brood years 1976 through 1980 were based on Hankin (1990, his Table 6, mean lengths only).
2.5.3 CRH
RMIS records provided lengths of individual CWT recoveries at CRH for brood years 1979 through 2000 (return years 1981–2005). These data were summarized (count, mean, standard deviation) for each tag code and age. Additional mean lengths for 1975 through 1980 brood year recoveries of CRH CWT spring Chinook recovered at CRH at ages 2 through 4 were based on Hankin (1990, his Table 3, mean lengths only). Length data for CWT recoveries at CRH in 2006 were provided by John D. Leppink, Oregon CWT Data Base Coordinator.
2.6 Mean Weights at Release
Theoretically, mean weights at release should be recorded in RMIS records for every individual CWT release group. Prior to the 1997 brood year at TRH, typically just a single CWT code was used to “represent” all fingerling releases of fall or spring Chinook salmon and mean weight at release was recorded for such CWT groups. Beginning with the 1997 brood year, however, constant fractional marking (Hankin 1982) of releases from all raceways was initiated and distinctive codes were applied to fish from individual raceways to allow assessment of the possible influence of size at release on subsequent survival rates. Fish are ponded sequentially in raceways at TRH (see Zajanc and Hankin 1998), according to spawning dates of parents, so that the fish that are ponded first (from earliest-spawning parents) typically have larger mean weight at release than fish that are ponded last from latest-spawning parents.
Mean weights of individual CWT release groups have not, however, been consistently measured or reported at TRH, and it appears that this has also been the case at IGH. Instead, at TRH (and apparently also at IGH) in many years only a single mean weight has been reported for 2-9 distinct CWT codes. Reasoning that fish ponded from later-spawning parents would, at the time of release, have a smaller mean size than fish ponded from earlier-spawning parents, we developed methods to make a reasonable guess of mean release size for individual CWT groups when reported mean weights at release corresponded to fish reared in more than one raceway. We emphasize that these are at best reasonable guesses. Unfortunately, in all but one instance, we had no way to determine whether or not these guessed weights are close to true mean weights at release.
Mean weight at release data were not available for individual TRH CWT releases in 1999, 2000 or 2001 BYs. Instead, mean weights were reported for CWT groups reared in 2-3 adjacent raceways. Based on information on raceway rearing locations, we generated plausible mean weights at release for individual CWT groups. For the 1999 BY, there were two pairs of CWTs from adjacent pooled raceways. We assumed that the reported mean weight for two pooled adjacent raceways was equal to a weighted average of the mean weights in the adjacent raceways. Let $N_j$ be the number released from raceway $j$ ($j = 1, 2$), let $w_j$ be the mean weight of fish from raceway $j$, and let $\bar{w}$ be the mean weight for the combined raceways. Then $\bar{w} = (N_1 w_1 + N_2 w_2)/(N_1 + N_2)$. We next assumed that the mean weights of fish from adjacent raceways for all four CWT groups released from the 1999 brood year differed from one another by a constant amount $x$. Let $y$ denote the mean weight of the CWT release group (CWT #065257) that was ponded last for the 1999 BY. Then, the mean weights of fish in previously ponded raceways were assumed to be $y + x$ (065256), $y + 2x$ (065255), and $y + 3x$ (065254). This setup leads to creation of the following simultaneous equations:
\[
\frac{N_1 y + N_2 (y + x)}{N_1 + N_2} = \bar{w}_{1,2} = 5.02 \\
\frac{N_3 (y + 2x) + N_4 (y + 3x)}{N_3 + N_4} = \bar{w}_{3,4} = 5.71,
\]
where 5.02 and 5.71 are the mean weights (g) reported for the combined CWT groups (065257 and 065256) and (065255 and 065254), respectively. These simultaneous equations were solved for \(x\) (giving a result of 0.3362 g) and \(y\) (giving a result of 4.8562 g), thereby generating plausible mean weights at release of 4.86 g, 5.19 g, 5.53 g, and 5.86 g, respectively, for CWT groups 065257, 065256, 065255, and 065254, respectively. These calculated weights are similar to corresponding mean weights that we calculated based on earlier hatchery weight records for fish (prior to release) from individual raceways: 4.89 g, 5.14 g, 5.38 g, and 6.10 g, respectively.
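For readers who wish to reproduce this kind of calculation, the following R sketch solves the two simultaneous equations with `solve()`. The release numbers $N_1, \ldots, N_4$ below are hypothetical placeholders (the actual values reside in the RMIS release records), so the resulting $x$ and $y$ differ slightly from those reported above.

```r
# Hypothetical release numbers for CWT groups 065257, 065256, 065255, 065254
N <- c(250000, 240000, 255000, 245000)

# In matrix form, the two pooled-weight equations are linear in (y, x):
#   y + [N2/(N1 + N2)] x            = 5.02
#   y + [(2*N3 + 3*N4)/(N3 + N4)] x = 5.71
A <- rbind(c(1, N[2] / (N[1] + N[2])),
           c(1, (2 * N[3] + 3 * N[4]) / (N[3] + N[4])))
sol <- solve(A, c(5.02, 5.71))   # sol[1] = y, sol[2] = x
y <- sol[1]; x <- sol[2]
round(y + 0:3 * x, 2)            # mean weights for 065257, 065256, 065255, 065254
```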
For the 2000 BY, there were an unusually large number of individual CWT groups. Three separate CWT codes were used to tag fish from raceway C3/4 (with a reported pooled weight of 5.27 g); a single CWT was used to tag fish from raceway C1/2 (reported mean weight = 6.87 g); and 3 CWT codes each were used to tag fish from raceways A1/2, A3/4, and B1/2, respectively, with a single pooled mean weight reported for all of these CWT groups (8.10 g). As for the 1999 BY releases, we assumed that the reported mean weight across all three raceways should theoretically equal a weighted mean weight from the individual raceways, so that:
\[
8.10 g = \frac{N_1 y + N_2 (y + x) + N_3 (y + 2x)}{N_1 + N_2 + N_3}
\]
where \(N_1\), \(N_2\), and \(N_3\) are the numbers of fish in raceways B1/2, A3/4 and A1/2, respectively. We constructed a program in R that generated guesses for \(x\), given \(y\), so as to achieve the reported mean weight of 8.10 g and also ensure that the mean weight of fish from raceway B1/2 exceeded the reported mean weight of fish released from raceway C1/2 (6.87 g).
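We do not reproduce that program here, but the underlying algebra is simple: given a guess for $y$, the pooled-weight equation determines $x$. A hypothetical sketch (the raceway release numbers are invented):

```r
# Raceway release numbers for B1/2, A3/4, A1/2 (hypothetical)
N1 <- 300000; N2 <- 310000; N3 <- 295000
# Rearranging the pooled-weight equation: x = (8.10 - y) * (N1+N2+N3) / (N2 + 2*N3)
solve_x <- function(y) (8.10 - y) * (N1 + N2 + N3) / (N2 + 2 * N3)
y <- 7.20                                 # any guess with 6.87 < y < 8.10
x <- solve_x(y)
c(B12 = y, A34 = y + x, A12 = y + 2 * x)  # candidate raceway mean weights (g)
```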
For the 2001 BY, two pairs of CWT release groups were released from two sets of adjacent raceways with a single reported mean weight for each of these adjacent raceway pairs. We were able to generate plausible guesses for raceway-specific weights for these 2001 BY release groups using the previously-described simultaneous equation approach used for 1999 BY CWT releases.
For the 2002-2004 brood years, raceways for release were unknown, so we made no adjustments to reported mean weights at release (with the exception of correcting a clearly erroneous listing from the 2003 BY).
Chapter 3
Analysis and Simulation Methods
3.1 Basic Cohort Analysis/Estimation Methods
We adopted a cohort analysis approach that is similar to, though not identical with, the analysis methods that are currently used by the KOHM modeling team in the context of regulation of ocean salmon fisheries. Our analysis methods differ in two chief respects.
First, we make no attempt to account for ocean natural mortalities that may occur during ocean fishing seasons. Instead, we assume that no natural mortalities occur during ocean fishing seasons but that all ocean natural mortalities occur between the end of one year’s fishing season and the beginning of the next year’s fishing season. Second, we define age classes to begin with the numbers alive immediately prior to the beginning of spring ocean fisheries whereas current analysis methods appear to instead employ a “birth-date” (termed cut-off date in this report) of September 1, after which a cohort increases in age by one year. Neither of these differences should have any substantial impact on relative values of most estimated parameters, but choice of “birth-date” can have a substantial impact on estimation of ocean exploitation rates following maturation at age (see sections 3.1.1 and 4.2).
We assume that Chinook salmon from the Klamath and Rogue rivers may mature at ages two through five only. Define the following cohort model variables:
$A_i(t) =$ spring ocean abundance at age $i$, immediately prior to fishing in year $t$;
$C_{i, pre}(t) =$ ocean fishery landings (recreational and commercial) at age $i$ in year $t$, prior to the cut-off date after which only immature fish are assumed to remain in the ocean;
$C_{i, pos}(t) =$ ocean fishery landings (recreational and commercial) at age $i$ in year $t$, following the cut-off date;
$H_i(t) =$ freshwater harvest (net + sport) at age $i$ in year $t$;
$S_i(t) =$ total freshwater escapement (freshwater harvests plus hatchery returns plus stray escapement) at age $i$ in year $t$.
The numbers of fish that remain alive at age and that are available for capture or escapement at age depend on two additional sets of model parameters and a final single parameter:
$p_i(t) =$ probability of surviving natural causes of death between ages $i$ and $i+1$ over the interval from the end of ocean fisheries in year $t$ to the beginning of ocean fisheries in year $t+1$;
$\sigma_i(t) =$ (conditional) age-specific maturation probability = probability of maturing at age $i$ given alive at age $i$ and not captured in ocean fisheries in year $t$ prior to the cut-off date;
$p_0(t) =$ probability of surviving from release to ocean age 2, just prior to ocean fisheries, for a cohort released from brood year $t$.
For cohort analysis based on recoveries from a single CWT release group, one must assume either that the ocean survival rates, the $p_i(t)$, are known, or that the $\sigma_i(t)$ are known and time-invariant. We assume that the $p_i(t) = p_i$ are known, as is common practice, and we fix $p_2 = 0.50$ and $p_3 = p_4 = 0.80$.
Having defined the above variables and parameters, our cohort analysis proceeds from the oldest age as outlined below (omitting year-specific notation, which is assumed implicit), substituting estimated values for the $C_{i, pre}$, $C_{i, pos}$, and $S_i$:
At Age 5, assuming that all fish mature by age 5:
$$\hat{A}_5 = \hat{C}_{5, pre} + \hat{S}_5$$
At Ages 3 and 4:
$$\hat{A}_i = \frac{\hat{A}_{i+1}}{p_i} + \hat{C}_{i, pre} + \hat{C}_{i, pos} + \hat{S}_i = \frac{\hat{A}_{i+1}}{0.80} + \hat{C}_{i, pre} + \hat{C}_{i, pos} + \hat{S}_i$$
At Age 2:
$$\hat{A}_2 = \frac{\hat{A}_3}{p_2} + \hat{C}_{2, pre} + \hat{C}_{2, pos} + \hat{S}_2 = \frac{\hat{A}_3}{0.50} + \hat{C}_{2, pre} + \hat{C}_{2, pos} + \hat{S}_2$$
Let the numbers of fish belonging to a CWT release group be denoted by $R$. Then, survival from release to age 2, just prior to ocean fisheries, can be estimated as:
$$\hat{p}_0 = \frac{\hat{A}_2}{R}$$
Conditional age-specific maturation probabilities at ages $i = 2, 3, 4$ can be estimated as:
$$\hat{\sigma}_i = \frac{\hat{S}_i}{\hat{A}_i - \hat{C}_{i,pre}}$$
Pre-maturation and post-maturation age-specific ocean fishery exploitation rates, $E_{i,pre}$ and $E_{i,pos}$, can be estimated as:
$$\hat{E}_{i,pre} = \frac{\hat{C}_{i,pre}}{\hat{A}_i}$$
$$\hat{E}_{i,pos} = \frac{\hat{C}_{i,pos}}{\hat{A}_i - \hat{C}_{i,pre} - \hat{S}_i}$$
Finally, age-specific freshwater exploitation rates, $u_i$, can be estimated as:
$$\hat{u}_i = \frac{\hat{H}_i}{\hat{S}_i}$$
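The back-calculation defined by these equations can be summarized in a short R function. The sketch below follows the estimators exactly as written above (year-specific notation suppressed); the function name and input layout are ours, not code from the project.

```r
# Cohort back-calculation for one CWT release group.
# C_pre, C_pos, S, H are vectors for ages 2, 3, 4, 5 (in that order);
# R is the CWT release group size. Survival rates fixed as in the text.
cohort_analysis <- function(C_pre, C_pos, S, H, R, p2 = 0.50, p3 = 0.80, p4 = 0.80) {
  A <- numeric(4)                                   # abundances at ages 2-5
  A[4] <- C_pre[4] + S[4]                           # age 5: all fish mature
  A[3] <- A[4] / p4 + C_pre[3] + C_pos[3] + S[3]    # age 4
  A[2] <- A[3] / p3 + C_pre[2] + C_pos[2] + S[2]    # age 3
  A[1] <- A[2] / p2 + C_pre[1] + C_pos[1] + S[1]    # age 2
  list(A     = A,
       p0    = A[1] / R,                            # survival, release to age 2
       sigma = S[1:3] / (A[1:3] - C_pre[1:3]),      # maturation, ages 2-4
       E_pre = C_pre / A,                           # pre-maturation exploitation
       E_pos = C_pos[1:3] / (A[1:3] - C_pre[1:3] - S[1:3]),  # post-maturation, ages 2-4
       u     = H / S)                               # freshwater exploitation
}
```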
3.1.1 Ocean Fishery Cut-Off (Birth-) Date
In cohort analyses of CWT recoveries for Klamath River fall Chinook salmon, a cut-off date of 01 September has been used to separate ocean fishery catches consisting of immature and maturing Chinook (prior to the cut-off date) from those consisting of immature fish only (following the cut-off date). At a January 2008 meeting considering some of the preliminary findings from our research, D. Hillemeier (Yurok Tribal Fisheries) noted that use of this September 1 cut-off date might be responsible for what appeared to be unrealistically high estimates of post-season ocean fishery exploitation rates for Trinity River fall Chinook salmon; he also noted that TRH Chinook enter the lower Klamath River net fishery later than IGH fall Chinook. We therefore explored the consequences of adopting alternative cut-off dates of 15 September and 01 October, and evaluated their merits by examining the plausibility of estimated post-season ocean fishery exploitation rates for TRH Chinook, in particular, and the relative agreement between exploitation rates estimated for IGH and TRH fall Chinook. Alteration of the cut-off date required that we generate corresponding estimates of ocean fishery catches and non-landed mortalities prior to and following the alternative cut-off dates, but otherwise no changes were made in cohort analysis estimation methods.
3.1.2 Pooling Across Release Groups Within Brood Years
To simplify presentation and interpretation of cohort analysis results, we calculated weighted averages of estimated parameters across multiple CWT release groups of the same type released from the same brood year. Let $\hat{\theta}_{ij}$ be an estimated parameter based on recoveries of CWT group $i$ ($i = 1, 2, \ldots, k$) released in year $j$, and let $R_i$ denote corresponding release group sizes. Then, pooled parameter estimates for year $j$ were calculated as:
$$\hat{\theta}_j = \frac{\sum_{i=1}^{k} \hat{\theta}_{ij} R_i}{\sum_{i=1}^{k} R_i}$$
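In R this is simply a release-size-weighted mean; a one-line sketch with hypothetical values:

```r
theta_hat <- c(0.012, 0.015, 0.010)              # per-group estimates (hypothetical)
R         <- c(48000, 52000, 45000)              # CWT release group sizes
theta_pooled <- weighted.mean(theta_hat, w = R)  # pooled brood-year estimate
```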
3.2 Simulation of Estimator Variances vs CWT Release Group Size
We explored the relationships between variances of estimated life history and fishery parameters and CWT release group size by constructing a simplified simulation model. Given release of a known number of fish belonging to a CWT release group, we assume that a multinomial model adequately captures the essential structure of the true (but unknown) numbers of fish that are accounted for by a simplified set of fates: caught in the pre-maturation or post-maturation ocean fisheries at age $i$, or maturing and returning to freshwater at age $i$ (all remaining fish are assumed to die of natural causes). The probabilities of each of these events can be expressed in terms of the cohort analysis model parameters previously described. For example, P(captured in pre-maturation fishery at age 3)=
$$p_0(1 - E_{2,pre})(1 - \sigma_2)(1 - E_{2,post})p_2E_{3,pre}$$
The multinomial model should provide a good theoretical representation of the variability in the actual outcomes (fates) suffered by individual fish, assuming that fates of individual fish are independent of one another. We used the multinomial generating function in R (`rmultinom()`) to generate the true numbers of fish that might be caught in ocean fisheries or escape to spawn at ages 2-5 given multinomial model parameters.
Superimposed on this natural “process” variability is uncertainty, due to sampling error, in the numbers of fish observed to suffer specific fates. For example, not all fish in the ocean catch are examined for presence of adipose-fin clips, which indicate presence of a CWT. Instead, only about 20% of the ocean catch is sampled at random. We very crudely modeled variability due to sampling by assuming a 20% sampling rate in ocean fisheries and in freshwater escapements. Given the assumed 20% sampling rate, we assumed that numbers of observed CWTs would be Poisson distributed with mean and variance equal to 20% of the simulated outcome under the multinomial model. We used the function `rpois()` in R to generate the Poisson model sampling outcomes. Finally, the simulated numbers of CWT’d fish observed via sampling were then scaled up by a factor of 5 to account for the 20% sampling fraction.
Given the above simulation structure, fishery and life history parameters were set equal to the mean estimated values for IGH fingerling releases of fall Chinook salmon that were at large over the period 1996 through 2006 to represent “current conditions”. We assumed that annual survival rates were known and equal to those assumed for cohort analysis methods. For each simulated complete outcome of the mutually exclusive fates of a CWT release group and associated Poisson sampling, we applied our cohort analysis methods to calculate estimates of life history and fishery parameters, and we calculated expected values, bias, variance, and mean square error over a large number (100,000) of such independent simulations. We varied the number of fish in release groups to determine the relationships between release group size and variability in estimates of life history and fishery parameters. We emphasize that the simulated bias and variability of estimates that we generated are conditioned on the strong (and no doubt incorrect) assumption that conditional ocean survival rates ($p_2$, $p_3$, $p_4$, $p_5$) are known.
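A single replicate of this simulation structure might look like the sketch below, which chains the multinomial fate model, Poisson sampling at the assumed 20% rate, and the 5x expansion. All parameter values shown are illustrative only, not the IGH estimates actually used, and the function name is ours.

```r
# One simulation replicate: multinomial fates, Poisson sampling, 5x expansion.
# Parameter vectors are indexed by age 2, 3, 4, 5; values are illustrative only.
simulate_once <- function(R_size, p0 = 0.02,
                          p_surv = c(0.50, 0.80, 0.80),       # p2, p3, p4
                          sigma  = c(0.05, 0.45, 0.90, 1.00), # all mature by age 5
                          E_pre  = c(0.00, 0.15, 0.25, 0.20),
                          E_pos  = c(0.00, 0.10, 0.10, 0.00)) {
  pr_pre <- pr_pos <- pr_mat <- numeric(4)
  alive <- p0                              # P(alive at age 2, pre-fishery)
  for (i in 1:4) {                         # i = 1 corresponds to age 2, etc.
    pr_pre[i] <- alive * E_pre[i]
    a <- alive * (1 - E_pre[i])            # survive pre-maturation fishery
    pr_mat[i] <- a * sigma[i]
    b <- a * (1 - sigma[i])                # remain immature in the ocean
    pr_pos[i] <- b * E_pos[i]
    alive <- if (i < 4) b * (1 - E_pos[i]) * p_surv[i] else 0
  }
  probs <- c(pr_pre, pr_pos, pr_mat)
  # Last multinomial cell collects all fish dying of natural causes
  fates <- rmultinom(1, R_size, c(probs, 1 - sum(probs)))[, 1]
  5 * rpois(12, 0.2 * fates[1:12])         # sample at 20%, expand by factor 5
}
set.seed(1)
expanded <- simulate_once(R_size = 200000) # feed into the cohort analysis estimators
```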
3.3 Effects of River Flows
At the suggestion of Mark Hampton, CDFG, we explored the possibility that survival rates of IGH fingerling releases of fall Chinook might be affected by flows in the upper Klamath River at or near the time of release. We also examined the possibility that survival rates of TRH releases of Chinook salmon might be affected by flows in the upper Trinity River.
Our river flow data were monthly mean flows during June of the year of release based on USGS gauges at Seiad (upper Klamath River) and Burnt Ranch (upper Trinity River). We used simple scatterplots of estimated pooled brood year survival rates for fingerlings against mean June flows during the year of release to explore the possibility that survival rates from release to age 2 were affected by freshwater flows at time of release. We used a Monte Carlo permutation test to evaluate the possibility that the very suggestive scatterplot that emerged for IGH might have originated due to chance under a null hypothesis of independence of flows and survival rates. Methods used for this permutation test are described in the Results section, in the context of the scatterplot, without which it would be difficult to describe the logic of this statistical approach.
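Although the particular test statistic is described in the Results section, the generic form of such a permutation test in R is straightforward. The sketch below uses correlation as the statistic and invented flow/survival data, and may differ from the statistic actually used in the report.

```r
# Monte Carlo permutation test of independence between June flow and survival.
set.seed(42)
flow     <- c(1200, 950, 2100, 1800, 760, 1430, 990, 2500)   # mean June flow (cfs)
survival <- c(0.006, 0.004, 0.021, 0.015, 0.002, 0.011, 0.005, 0.026)
obs  <- cor(flow, survival)                            # observed association
perm <- replicate(10000, cor(flow, sample(survival)))  # shuffle under independence
p_value <- mean(abs(perm) >= abs(obs))                 # two-sided Monte Carlo p-value
```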
3.4 Alternative Mixes of Fingerling and Yearling Releases
One of the serious logistical constraints to implementation of a constant fractional marking program at Iron Gate Hatchery and, in certain years, at Trinity River Hatchery, is the very large number of fish that are released as fingerlings. Fingerlings originating from late-spawning fall Chinook parents are smaller than those originating from early-spawning parents, and there is often only a brief temporal window within which they are sufficiently large for tagging prior to release. Thus, if there are large numbers of such very small fingerlings planned for release in early June, it may be difficult or impossible to achieve a desired tagging objective for these fish. In contrast, fish released as yearlings are large and may be tagged over a much longer period of time. Therefore, if it were possible for hatcheries to reduce production of fingerlings but increase production of yearlings, and to produce similar numbers of adult returns, then the logistics of achieving a constant fractional marking program would be simplified. In addition, reduced releases of fingerlings might have reduced impacts (e.g., reduced competition, stress) on wild juveniles, which appear to move downstream at about the same time as fingerling releases.
Based on our cohort analysis results, we averaged the estimated life history parameters for fingerling and for yearling fall Chinook salmon released from Iron Gate and Trinity River hatcheries over the brood years 1978-2001, and we averaged fishery parameters over the brood years 1990-2001, to produce performance parameters that characterized long-term average life history traits and “current” fishery management regimes which have recognized the Native American fishing rights in the Klamath River. Given these parameter sets, we calculated the expected ocean impacts (catches + non-landed mortalities), freshwater catches (net plus sport), and freshwater escapements that would result from specified releases of fingerlings and yearlings. We began with the existing production goals at the two hatcheries (2.00 million fingerlings and 0.90 million yearlings at TRH; 4.92 million fingerlings and 0.90 million yearlings at IGH), and calculated expected catches and escapements for alternative production goals that reduced the numbers of fingerlings, increased the numbers of yearlings, but achieved very similar catches and escapements. For these calculations, for a reduction in releases of $y$ fingerlings, the releases of yearlings were increased by $y/S_{y/f}$, where $S_{y/f}$ is the ratio of average survival rates to age 2 for fish released as yearlings as compared to fingerlings. At IGH and TRH these calculated ratios were 4.088 and 5.526, respectively. This very simple rule produced essentially stable total production with relatively small changes in expected allocation of catches across ocean and freshwater fisheries.
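The substitution rule can be expressed in a few lines of R, shown here for IGH using the production goals and survival ratio given in the text; the chosen reduction $y$ is arbitrary.

```r
S_yf <- 4.088                          # yearling:fingerling survival ratio at IGH
fingerling_goal <- 4.92e6; yearling_goal <- 0.90e6
y <- 1.0e6                             # proposed reduction in fingerling releases
new_fingerlings <- fingerling_goal - y
new_yearlings   <- yearling_goal + y / S_yf  # ~244,600 added yearlings keeps production stable
```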
Chapter 4
Results
Unless otherwise noted, analysis results presented in this section are based on an assumed IGH and TRH fall Chinook "cut-off date" of September 1, so as to make analysis results most comparable to those generated using existing KOHM methods. As noted below, however, it appears that a more appropriate cut-off date may be September 15. The choice of cut-off date has a trivial impact on parameter estimation, with the exception of ocean fishery exploitation rates, which receive only minor treatment in this report. Also, with the exception of analyses that focus specifically on the impact of methods used to calculate non-landed mortalities (see below), all reported analysis results for IGH and TRH releases of fall and spring Chinook salmon are based on methods for calculating non-landed mortalities that are currently used for management of Klamath River Chinook salmon.
4.1 Non-Landed Mortalities
Calculated values of non-landed mortalities are important in the context of ocean fishery management of Chinook salmon because fish may be caught and released at age 2, when they are almost always below legal size limits, or at age 3 when varying proportions of a cohort may be below legal size. Mortalities due to such capture and release are counted toward ocean fishery allocations of fish. Hatchery release practices (e.g., release of juveniles as fingerlings or yearlings) may influence the proportion of fish that exceed legal size at age 3, thereby also affecting non-landed mortalities in ocean fisheries. In this section we examine the importance of estimates of non-landed mortalities with respect to estimation of life history parameters; we present an alternative procedure for imputing non-landed mortalities at age 3; and we express concern regarding current methods used to calculate non-landed mortalities at age 2.
4.1.1 Importance for Parameter Estimation
Because it was impractical to develop methods for calculating non-landed mortalities for CRH spring Chinook salmon that would be analogous to those used in the KOHM process, we compared survival rates of CRH and TRH spring Chinook based on cohort analyses that ignored non-landed mortalities altogether. Ignoring non-landed mortalities would clearly lead to negative bias in estimation of ocean fishery impact rates, but may have very small impact on estimates of survival rates to age 2 and the life history parameters that were the primary focus of our research.
Tables 4.1 and 4.2 show that exclusion of imputed non-landed mortalities from cohort analyses has a very minor but predictable impact on estimated survivorship to age two and on age-specific conditional maturation probabilities for IGH and TRH fall Chinook salmon CWT releases (pooled across multiple CWT groups released from the same brood year). Namely, estimated survival rates are slightly lower, and age-specific maturation probabilities slightly higher, when non-landed mortalities are excluded from cohort analyses. For example, for Iron Gate fall Chinook salmon released as fingerlings, a linear regression of age 3 maturation probabilities estimated excluding non-landed mortalities against those estimated including non-landed mortalities had a near perfect correlation (adjusted R-square = 0.9997), a slope not significantly different from 1 (0.996 +/- 0.0078), and a small but significant positive intercept (0.0057). The effects are so minor that it is clearly meaningful to compare life history and survival parameters across stocks (i.e., TRH spring Chinook vs CRH spring Chinook) when non-landed mortalities are excluded from CWT recovery data for both stocks.
4.1.2 Alternative Methods for Calculations of Non-Landed Mortalities
Age 3 Non-Landed Mortalities
As noted in the Methods section of this report, current KOHM methods for calculating ocean non-landed mortalities at age 3 rely on an implicit simplification (or assumption) that monthly mean length and variance in length at age (for a given stock and release type, e.g. IGH fall Chinook released as fingerlings) are fixed parameters without interannual variation. Assuming that mean length at age, measured among returning hatchery spawners, provides a good index of size at age, there is strong evidence (see below: Covariance Among Estimated Parameters Across Hatcheries and Races/Size at Age) that this implicit simplification is seriously violated. Indeed, interannual variation in size at age is quite striking. Because age 3 Chinook are generally only partially vulnerable to ocean harvest (i.e., a substantial proportion of a cohort may be below the legal size limit during a particular month), assumption of an invariant growth pattern can theoretically lead to serious underestimation or overestimation of non-landed mortalities in a given fishing season according to whether size at age for a given cohort is much smaller than or much larger than the expected long-term average. Below, we propose an alternative method for calculating non-landed mortalities that accounts for interannual variability in length at age.
Table 4.1. Estimated mean survival rates from release to ocean age 2 for fall Chinook salmon released as fingerlings or yearlings from IGH or TRH including or excluding non-landed mortalities calculated using modified KOHM methods.
| Brood Year | IGH Fingerlings (incl. NLM) | TRH Yearlings (incl. NLM) | IGH Fingerlings (excl. NLM) | TRH Yearlings (excl. NLM) |
|------------|-----------------------------|---------------------------|-----------------------------|---------------------------|
| 1979 | 0.03126 | 0.07167 | 0.02940 | 0.06614 |
| 1980 | 0.01080 | 0.05053 | 0.01038 | 0.04975 |
| 1981 | 0.01904 | 0.01864 | 0.01863 | 0.01832 |
| 1982 | 0.00627 | 0.04256 | 0.00609 | 0.04092 |
| 1983 | 0.01634 | 0.27139 | 0.01565 | 0.25541 |
| 1984 | 0.01362 | 0.14209 | 0.01305 | 0.13767 |
| 1985 | 0.01021 | 0.14516 | 0.00973 | 0.13992 |
| 1986 | 0.00062 | 0.11545 | 0.00061 | 0.10971 |
| 1987 | 0.00088 | 0.02185 | 0.00082 | 0.02056 |
| 1988 | NA | 0.01716 | NA | 0.01707 |
| 1989 | 0.00029 | 0.00323 | 0.00028 | 0.00319 |
| 1990 | 0.01345 | 0.00428 | 0.01330 | 0.00427 |
| 1991 | 0.00261 | 0.01093 | 0.00258 | 0.01081 |
| 1992 | 0.02579 | 0.12683 | 0.02548 | 0.12562 |
| 1993 | 0.00102 | 0.01851 | 0.00101 | 0.01846 |
| 1994 | 0.00166 | 0.01621 | 0.00165 | 0.01605 |
| 1995 | 0.00158 | 0.07281 | 0.00158 | 0.07264 |
| 1996 | 0.00305 | 0.01077 | 0.00303 | 0.01072 |
| 1997 | 0.03884 | 0.10080 | 0.03821 | 0.09975 |
| 1998 | 0.01133 | 0.05617 | 0.01120 | 0.05557 |
| 1999 | 0.01628 | 0.07803 | 0.01598 | 0.07648 |
| 2000 | 0.00977 | 0.09080 | 0.00946 | 0.08802 |
| 2001 | 0.00039 | 0.08784 | 0.00036 | 0.08286 |
For existing KOHM methods, we begin with an estimated number of fish landed (by hatchery CWT group, TRH or IGH, and release type, fingerling or yearling) in the ocean fishery at age 3. "Lookup tables" are then used to find the expected mean fish length (for that month), variance in length (for that month), minimum size limit in effect, and thereby calculate an expected proportion of the fish that would exceed legal size, $P_{legal}$, for that month and location. This calculated $P_{legal}$
value is then used to generate an estimated non-catch mortality experienced by fish of sub-legal size. Although this method of calculating non-landed mortalities makes a good deal of sense, failure to account for the very substantial interannual variation in size at age may lead to substantial errors in imputed non-landed mortalities. We propose to incorporate interannual variation in length at age in the following manner.
Table 4.2. Estimated age-specific conditional maturation probabilities at ages 2 ($\sigma_2$) and 3 ($\sigma_3$) for fall Chinook salmon released as fingerlings or yearlings from IGH or TRH including or excluding non-landed mortalities calculated using modified KOHM methods.
| Brood Year | $\sigma_3$: IGH Finger. (incl. NLM) | $\sigma_2$: TRH Finger. (incl. NLM) | $\sigma_3$: IGH Finger. (excl. NLM) | $\sigma_2$: TRH Finger. (excl. NLM) |
|------------|-------------------------------------|-------------------------------------|-------------------------------------|-------------------------------------|
| 1979 | 0.44618 | 0.05144 | 0.45710 | 0.05325 |
| 1980 | 0.47666 | 0.32339 | 0.48382 | 0.32609 |
| 1981 | 0.36791 | 0.11503 | 0.37204 | 0.11623 |
| 1982 | 0.38596 | 0.02489 | 0.39388 | 0.02541 |
| 1983 | 0.23345 | 0.18394 | 0.23900 | 0.19151 |
| 1984 | 0.15823 | 0.09125 | 0.16207 | 0.09496 |
| 1985 | 0.47868 | 0.12343 | 0.48472 | 0.12787 |
| 1986 | 0.39049 | 0.21811 | 0.39855 | 0.22286 |
| 1987 | 0.13675 | 0.13196 | 0.13675 | 0.13449 |
| 1988 | NA | 0.19652 | NA | 0.19817 |
| 1989 | 0.56368 | 0.22330 | 0.56368 | 0.22330 |
| 1990 | 0.59997 | NA | 0.60245 | NA |
| 1991 | 0.39910 | 0.14692 | 0.40127 | 0.14773 |
| 1992 | 0.60288 | 0.07122 | 0.60621 | 0.07240 |
| 1993 | 0.47021 | 0.05459 | 0.47098 | 0.05504 |
| 1994 | 0.40570 | 0.06674 | 0.40804 | 0.06698 |
| 1995 | 0.71663 | 0.06806 | 0.71808 | 0.06835 |
| 1996 | 0.48638 | 0.02330 | 0.48701 | 0.02341 |
| 1997 | 0.85647 | 0.04179 | 0.85815 | 0.04217 |
| 1998 | 0.53461 | 0.04828 | 0.53726 | 0.04881 |
| 1999 | 0.37217 | 0.01069 | 0.37614 | 0.01085 |
| 2000 | 0.46727 | 0.04492 | 0.47588 | 0.04633 |
| 2001 | 0.70451 | 0.06270 | 0.70604 | 0.06526 |
Let $\mu_{3i}$ denote the expected mean length of an age 3 fish originating from a given release type and hatchery during month $i$, and let $\sigma_{3i}^2$ denote the associated expected variance in length at age. (These values are available from the Lookup Tables.) Let $\mu_3^*(t)$ denote the observed mean age 3 length, measured at hatchery return, in year
$t$; let $\overline{\mu}_3 = \sum_{t=1978}^{2001} \mu_3^*(t)/24$ denote the observed grand mean length at age 3, measured at hatchery return, over all CWT groups of a given race and release type; and let $\sigma_{\mu_3}^2 = \sum_{t=1978}^{2001} (\mu_3^*(t) - \overline{\mu}_3)^2/(24-1)$, the observed interannual variance among the hatchery mean lengths at age 3. We wish to use the observed hatchery mean lengths at age 3, $\mu_3^*(t)$, to adjust the expected mean lengths at age 3 in ocean fisheries, the $\mu_{3i}$, thereby producing the adjusted values $\mu_{3i}^*(t)$.
If lengths at age 3 are normally distributed in the ocean and at hatchery return, we believe that it may be reasonable to invoke the following equality:
$$\frac{\mu_{3i}^*(t) - \mu_{3i}}{\sigma_{3i}} = \frac{\mu_3^*(t) - \overline{\mu}_3}{\sigma_{\mu_3}} \tag{4.1}$$
Solving equation 4.1 for $\mu_{3i}^*(t)$ gives:
$$\mu_{3i}^*(t) = \mu_{3i} + (\mu_3^*(t) - \overline{\mu}_3)\frac{\sigma_{3i}}{\sigma_{\mu_3}} \tag{4.2}$$
The adjusted values $\mu_{3i}^*(t)$ would then be used to calculate age 3 non-landed mortalities in the usual manner used in KOHM analyses except that $\mu_{3i}^*(t)$ would replace $\mu_{3i}$.
Based on lengths of individual fish measured at hatcheries, it appears that variance in length at age increases with mean length at age. Therefore, it is probably appropriate to also modify variance in length at age once mean length at age has been adjusted as suggested above. Rather than use the assumed constant $\sigma_{3i}$ from the lookup tables, it would probably be better to assume a constant coefficient of variation (standard deviation/mean). In analysis of extensive hatchery length data, we found that coefficients of variation in length at age had essentially no trend with mean length (see Figure 4.1) and averaged about 0.0813. Therefore, expected standard deviation could be calculated as 0.0813 times $\mu_{3i}^*(t)$.
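The sketch below implements equation 4.2 together with the constant-CV standard deviation. The interannual standard deviation $\sigma_{\mu_3}$ used here (1.3 in) is a hypothetical value chosen only so that the example reproduces the May entries of Table 4.3 (below); the remaining inputs are taken from that table.

```r
# Adjust a lookup-table monthly mean length using the hatchery-return mean length
# (equation 4.2), then derive the standard deviation from the constant CV.
adjust_length <- function(mu_3i, sd_3i, mu3_hatch, mu3_grand, sd_mu3, cv = 0.0813) {
  mu_adj <- mu_3i + (mu3_hatch - mu3_grand) * sd_3i / sd_mu3  # equation 4.2
  c(mean = mu_adj, sd = cv * mu_adj)                          # constant-CV S.D.
}

# May values for the large-length IGH yearling group (lengths in inches):
adjust_length(mu_3i = 24.4, sd_3i = 2.3,
              mu3_hatch = 28.58,   # observed age 3 hatchery mean for this group
              mu3_grand = 25.52,   # long-term grand mean at hatchery return
              sd_mu3 = 1.3)        # hypothetical interannual S.D. of hatchery means
# gives mean = 29.8, sd = 2.4, matching the month 5 row of Table 4.3
```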
Table 4.3 compares age 3 non-landed mortalities calculated using the current KOHM methods and our proposed alternative method (including CV adjustment) for two IGH yearling CWT release groups for which the mean lengths at hatchery return at age 3 were unusually large (1982 BY CWT code 065908, 724 mm, 28.58 in) or small (1980 BY CWT code 065906, 590 mm, 23.23 in) compared to the approximately 25-year average (648 mm, 25.52 in). For the release group with unusually large length at age 3, the total number of non-landed mortalities imputed by the proposed revised method was just 19.2% of that calculated using existing KOHM methods; for the release group with unusually small length at age 3, the total number of imputed non-landed mortalities was 324 times larger than calculated using the existing KOHM methods. Clearly, failure to account for interannual variation in length
Figure 4.1. Coefficients of variation (CV = standard deviation/mean) in length at ages 2-4 among returning fall Chinook hatchery spawners at TRH and IGH, 1978-2001 brood years. Horizontal dotted line shows value of mean CV (0.0813).
at age could have serious consequences for the errors associated with imputation of non-landed mortalities, especially when mean size at age is considerably less than the long-term average assumed in current KOHM calculations. Adjustment for CV had essentially no impact on the imputed values for non-landed mortalities.
We recognize that the results presented in Table 4.3 are rather extreme and that there are some unsettling aspects of the adjustment of monthly mean lengths. We briefly consider these issues in the Discussion section.
**Age 2 Non-Landed Mortalities**
The current KOHM methods for imputing non-landed mortalities generate values of zero whenever no legal-sized fish are reported captured. Imputed non-catch mortalities for age 2 Chinook will therefore almost always have zero value because it is very unusual for age 2 Klamath River Chinook to exceed the minimum size limits. Although it might be argued that age 2 non-landed mortalities are not of importance because fishery allocations are all expressed in terms of adult fish (age 3 and older), this argument fails to consider the fact that ocean interceptions and non-landed mortalities of age 2 Chinook will reduce the numbers of fish surviving to be alive at age 3 and therefore should be included in overall ocean fishery impacts.
Table 4.3. Imputed non-landed mortalities during the months of May through October for two illustrative IGH CWT release groups of yearlings from brood years for which the mean lengths at age 3 were unusually large (065908) or unusually small (065906). Column headings with asterisks are based on our proposed methods; headings without asterisks are based on the existing KOHM calculations. Apart from the size adjustment, calculation methods are otherwise consistent with the existing KOHM methods. Mean lengths and standard deviations (S.D.) in inches.
**CWT code 065908 (unusually large mean length at age 3)**

| Month | $\bar{l}_3$ | S.D. | NLM | $\bar{l}_3^*$ | S.D.* | NLM* |
|-------|-------------|------|------|----------------|-------|-------|
| 5 | 24.4 | 2.3 | 37.1 | 29.8 | 2.4 | 3.4 |
| 6 | 26.4 | 2.3 | 16.7 | 31.8 | 2.6 | 3.1 |
| 7 | 26.4 | 2.3 | 13.8 | 31.8 | 2.6 | 4.7 |
| 8 | 26.5 | 2.4 | 71.2 | 32.2 | 2.6 | 14.9 |
| 9 | 26.6 | 2.3 | 27.1 | 31.9 | 2.6 | 5.8 |
| 10 | 26.4 | 2.1 | 3.1 | 31.4 | 2.6 | 0.6 |
| Total | | | 169.0 | | | 32.9 |
**CWT code 065906 (unusually small mean length at age 3)**

| Month | $\bar{l}_3$ | S.D. | NLM | $\bar{l}_3^*$ | S.D.* | NLM* |
|-------|-------------|------|------|----------------|-------|-------|
| 5 | 24.4 | 2.3 | 9.4 | 20.3 | 1.7 | 10,917.1 |
| 6 | 26.4 | 2.3 | 7.9 | 22.4 | 1.8 | 326.6 |
| 7 | 26.4 | 2.3 | 11.9 | 22.4 | 1.8 | 533.2 |
| 8 | 26.5 | 2.4 | 5.9 | 22.3 | 1.8 | 325.5 |
| 9 | 26.6 | 2.3 | 0.5 | 22.6 | 1.8 | 0.5 |
| 10 | 26.4 | 2.1 | 2.0 | 22.7 | 1.8 | 71.3 |
| Total | | | 37.6 | | | 12,174.2 |
Let $\phi_{com}$ and $\phi_{rec}$ be “age 2 shaker contact” multipliers for commercial and recreational fisheries. These multipliers are intended to reflect the likelihood that commercial fishermen, in particular, deliberately avoid contact with age 2 Chinook given their lack of commercial value, and/or that age 2 Chinook have ocean migration or distribution patterns that differ from those of age 3 and older Chinook. Thus, $\phi_{com}$ and $\phi_{rec}$ may be considerably less than 1. Then, assuming that no age 2 Chinook exceed legal size in a particular year $t$, the expected non-landed mortalities at age 2, for a particular CWT group, in the pre-maturation fishery, would be:
$$NLM_{com}(t) = A_2 \phi_{com} E_{pre, com}(t) p_{hook, com}$$
$$NLM_{rec}(t) = A_2 \phi_{rec} E_{pre, rec}(t) p_{hook, rec}$$
where $E_{pre, com}(t)$ and $E_{pre, rec}(t)$ are the exploitation rates for fully vulnerable fish in year $t$. To apply this kind of approach, one would have to rely on other CWT groups at large at age 4 in year $t$ to estimate the exploitation rates for fully vulnerable fish. Also, it seems likely that the non-catch mortality rates, $p_{hook, com}$ and $p_{hook, rec}$, might be greater for age 2 fish than the 0.14 and 0.26 values, respectively, that are assumed at age 3.
In his spreadsheets for cohort analysis of Rogue River spring Chinook CWT groups released from CRH, Satterthwaite included age 2 non-landed mortalities that were calculated using a very similar method to the one proposed above. Although the $\phi_{com}$ and $\phi_{rec}$ are unknown, a range of plausible values might be conjectured (e.g., 0.2-0.5) and imputations of age 2 non-landed mortalities could be generated over this range of plausible multipliers. Only through such hypothetical calculations could one judge whether or not the current failure to address age 2 non-landed mortalities is a serious flaw in existing KOHM calculations.
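As an illustration of such hypothetical calculations, the sketch below (Python; all inputs are hypothetical stand-ins, since the $\phi$ multipliers must be conjectured and the age 2 cohort size $A_2$ and fully vulnerable exploitation rates must be estimated from other CWT groups) evaluates the two equations above over the suggested range of plausible multipliers:

```python
def age2_nlm(A2, phi, E_pre, p_hook):
    """Expected age 2 non-landed mortalities per the equations above."""
    return A2 * phi * E_pre * p_hook

# Hypothetical inputs: an age 2 cohort of 10,000 fish; fully vulnerable
# exploitation rates borrowed from age 4 CWT groups at large in year t;
# hook-and-release mortality rates of 0.14 (commercial) and 0.26
# (recreational), as assumed at age 3 (possibly too low for age 2).
A2 = 10_000
for phi in (0.2, 0.3, 0.4, 0.5):
    com = age2_nlm(A2, phi, E_pre=0.20, p_hook=0.14)
    rec = age2_nlm(A2, phi, E_pre=0.10, p_hook=0.26)
    print(f"phi = {phi:.1f}: commercial {com:5.0f}, recreational {rec:5.0f}")
```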
### 4.2 Importance of Cut-off Date
Choice of cut-off date (September 1, September 15, or October 1) had negligible or trivial impact on calculated survival rates from release to age 2 or on age-specific conditional maturation probabilities. Choice of cut-off date did, however, have modest impact on estimated "pre-maturation" ocean fishery exploitation rates ($\hat{E}_{i, pre}$) and often very substantial impact on estimated "post-maturation" ocean fishery exploitation rates ($\hat{E}_{i, pos}$). Whenever there were substantial ocean fishery recoveries beyond September 1, estimated pre-maturation exploitation rates generally increased modestly with increasing cut-off date, whereas estimated post-maturation exploitation rates decreased, sometimes dramatically (Table 4.4).
For both TRH and IGH fall Chinook CWT releases, estimated post-maturation ocean fishery exploitation rates were implausibly large in some years when the usual
September 1 cut-off date was used. For example, mean estimates of $E_{4,pos}$ were 0.764, 0.391, 1.000, 0.194, 0.468 and 0.255 for brood years 1983, 1985, 1991, 1992, 1999 and 2002, respectively, for TRH fingerling releases, and were 0.478, 0.379, 0.168 and 0.134 for brood years 1989, 1983, 1992 and 1999, respectively, for IGH fingerling releases. When the cut-off date was moved to September 15 (or October 1), estimates of $E_{4,pos}$ were always either zero or could not be calculated (no recoveries beyond age 4). Similar, though less dramatic, implausibly large estimates of $E_{3,pos}$ were also sometimes calculated when the cut-off date was September 1, but estimates did not seem implausibly large for cut-off dates of September 15 or October 1.
### 4.3 Effect of Release Type
For a given stock and race (IGH fall Chinook, TRH spring or fall Chinook, CRH spring Chinook), release type had important effects on survival rates to age 2, size at age, and on age-specific ocean fishery exploitation rates and maturation probabilities. Observed effects were fully consistent with those earlier noted by Hankin (1990) based on CWT recovery data for a very limited set of brood years. Generally, release at a larger size in a later month had the following effects: (1) increased survival to age 2; (2) reduced size at age of return to freshwater; (3) reduced age 3 ocean fishery exploitation rates; and (4) reduced maturation probabilities at ages 2 and 3. We consider these effects in detail below.
#### 4.3.1 Survival Rates versus Release Type
Mean survival rates from release to age 2 (pooled across release groups from the same brood year) were substantially greater for yearlings than fingerlings for IGH and TRH fall Chinook and TRH spring Chinook, but showed little clear difference among CWT releases of CRH spring Chinook made during the months of August, September or October. Interannual variation in survival rates was extreme for IGH and TRH releases of fall and spring Chinook (maximum survival rates were from 80-250 times minimum survival rates for these stocks), but variation in survival rates was less extreme for CRH releases of spring Chinook (max/min ranged from about 12-23). Mean survival rates were 0.0104, 0.0149, and 0.0175 for fingerling releases of IGH fall Chinook, TRH fall Chinook and TRH spring Chinook, respectively; 0.0340, 0.0661 and 0.0657 for yearling releases of IGH fall Chinook, TRH fall Chinook and TRH spring Chinook, respectively; and were 0.05490, 0.04390, and 0.04969 for CRH spring Chinook released in the months of August, September and October, respectively. Coefficients of variation of survival rates were nearly equal to 100% for all stocks and release types at IGH and TRH and were slightly less for CRH releases (Table 4.5).
Table 4.4. Estimated mean pre-maturation and post-maturation ocean fishery exploitation rates at ages 3 and 4 ($\hat{E}_{i,pre}$ and $\hat{E}_{i,pos}$) for TRH releases of fingerling fall Chinook salmon based on alternative cut-off dates of September 1, September 15, and October 1. Estimated exploitation rates are displayed only for brood years for which there were non-trivial CWT recoveries beyond September 1.
| Brood Year | $\hat{E}_{3,pre}$: Sept 1 | Sept 15 | Oct 1 | $\hat{E}_{4,pre}$: Sept 1 | Sept 15 | Oct 1 |
|------------|--------|---------|-------|-------|--------|------|
| 1979 | 0.3798 | 0.3872 | 0.3968| 0.5046| 0.5065 | 0.5065|
| 1980 | 0.1639 | 0.1711 | 0.1711| 0.6013| 0.6013 | 0.6013|
| 1982 | 0.0900 | 0.1070 | 0.1191| 0.3478| 0.3478 | 0.3478|
| 1983 | 0.4382 | 0.4382 | 0.4395| 0.3589| 0.3830 | 0.3830|
| 1984 | 0.4253 | 0.4385 | 0.4385| 0.3268| 0.3268 | 0.3268|
| 1985 | 0.3388 | 0.3433 | 0.3439| 0.4162| 0.4271 | 0.4271|
| 1986 | 0.3407 | 0.3550 | 0.3550| 0.3333| 0.3333 | 0.3333|
| 1991 | 0.0554 | 0.0554 | 0.0554| 0.1841| 0.2218 | 0.2218|
| 1992 | 0.1245 | 0.1524 | 0.1546| 0.2003| 0.2056 | 0.2056|
| 1996 | 0.0176 | 0.0197 | 0.0197| 0.1058| 0.1058 | 0.1058|
| 1997 | 0.0706 | 0.0742 | 0.0755| 0.0780| 0.0840 | 0.0840|
| 1998 | 0.0518 | 0.0558 | 0.0558| 0.2164| 0.2864 | 0.2864|
| 1999 | 0.0565 | 0.0581 | 0.0588| 0.2550| 0.2925 | 0.2946|
| 2000 | 0.2309 | 0.2583 | 0.2641| 0.5509| 0.5574 | 0.5574|
| 2001 | 0.3232 | 0.3393 | 0.3393| 0.1275| 0.1281 | 0.1281|
| 2002 | 0.0224 | 0.0556 | 0.0581| 0.0332| 0.0366 | 0.0366|
| 2003 | 0.0406 | 0.0480 | 0.0480| | | |
| 2004 | 0.1619 | 0.2132 | 0.2151| 0.0026| 0 | 0.0026|
| Brood Year | $\hat{E}_{3,pos}$: Sept 1 | Sept 15 | Oct 1 | $\hat{E}_{4,pos}$: Sept 1 | Sept 15 | Oct 1 |
|------------|--------|---------|-------|-------|--------|------|
| 1979 | 0.0537 | 0.0349 | 0.0082| 0.0689| 0 | 0 |
| 1980 | 0.0481 | 0 | 0 | 0 | 0 | 0 |
| 1982 | 0.0537 | 0.0258 | 0.0049| | | |
| 1983 | 0.0165 | 0.0165 | 0.0053| 0.7643| 0 | 0 |
| 1984 | 0.1407 | 0.0079 | 0.0079| 0 | 0 | 0 |
| 1985 | 0.0311 | 0.0038 | 0 | 0.3911| 0 | 0 |
| 1986 | 0.1206 | 0 | 0 | | | |
| 1991 | 0 | 0 | 0 | 0.9999| | |
| 1992 | 0.1313 | 0.0197 | 0.0103| 0.1935| 0 | 0 |
| 1996 | 0.0045 | 0 | 0 | 0 | 0 | 0 |
| 1997 | 0.0315 | 0.0146 | 0.0098| | | |
| 1998 | 0.0692 | 0.0371 | 0.0371| | | |
| 1999 | 0.0210 | 0.0188 | 0.0178| 0.4681| 0.1120 | 0.0990|
| 2000 | 0.1931 | 0.0635 | 0.0300| | | |
| 2001 | 0.0884 | 0.0160 | 0.0160| | | |
| 2002 | 0.0839 | 0.0152 | 0.0098| 0.2552| | |
| 2004 | 0.2917 | 0.0141 | 0 | | | |
Table 4.5. Minimum, maximum, mean, standard deviation, coefficient of variation and number of brood years for estimated survival rates from release to ocean age 2 for fall and spring Chinook salmon released from IGH and TRH, brood years 1979-2004, and spring Chinook salmon released from CRH, brood years 1975-2001. Estimated survival rates for individual brood years were pooled over all CWT groups of the same type from a given brood year and include non-landed mortalities with the exception of CRH releases for which non-landed mortalities are excluded. Note that means are not always calculated over identical sets of brood years as CWT groups were not tagged from all release types in all years.
| Fall Chinook | IGH Fing. | TRH Fing. | IGH Year. | TRH Year. |
|------------------|-----------|-----------|-----------|-----------|
| minimum: | 0.00029 | 0.00020 | 0.00200 | 0.00323 |
| maximum: | 0.03884 | 0.04764 | 0.11262 | 0.27140 |
| mean: | 0.01043 | 0.01492 | 0.03397 | 0.06610 |
| s.d.: | 0.01025 | 0.01520 | 0.02997 | 0.06153 |
| c.v.: | 0.98274 | 1.01876 | 0.88224 | 0.93086 |
| n: | 25 | 24 | 23 | 25 |
| Spring Chinook | TRH Fing. | CRH Aug. | CRH Sept. | CRH Oct. | TRH Year. |
|------------------|-----------|-----------|-----------|-----------|-----------|
| minimum: | 0.00023 | 0.01196 | 0.00634 | 0.01245 | 0.00185 |
| maximum: | 0.05949 | 0.14680 | 0.14440 | 0.15120 | 0.24931 |
| mean: | 0.01753 | 0.05490 | 0.04390 | 0.04969 | 0.06019 |
| s.d.: | 0.01826 | 0.03570 | 0.03566 | 0.04303 | 0.06657 |
| c.v.: | 1.04164 | 0.65027 | 0.81230 | 0.86597 | 1.10600 |
| n: | 22 | 19 | 22 | 25 | 25 |
#### 4.3.2 Survival Rates versus Size at Release
**TRH**
Beginning with the 1997 brood year, individual CWT groups have been released from each raceway at TRH in an attempt to ensure that there are representative CWT groups for fish released at different sizes. Theoretically, variation in size at release could lead to variation in survival, in which case the previous practice of selecting a single "representative raceway" for coded-wire tagging could lead to erroneous extrapolations of the performance of hatchery releases whenever applied to the full fingerling production. Tables 4.6 and 4.7 provide strong suggestive evidence that size at release has an important effect on survival of fingerling fall Chinook salmon released from TRH. In general, estimated survival rates are consistent with an hypothesis that larger size at release, within a given brood year, generates a greater survival rate. Across brood years, there was a relatively consistent pattern of largest survival rates for fish released at the largest size and lowest survival rates for fish released at the smallest size. Generally, survival rates for the smallest sized releases were about one half those for the largest sized fish released from the same brood year. We were unable to engage in more sophisticated analyses of these data, however, because we felt that such analysis would be unwarranted given the very large number of releases for which mean weights at release were not directly measured but were instead conjectured (“assumed”) based on usual patterns of size across raceways as they reflect dates of parental spawnings. Also, we note that although size at release may explain a substantial amount of variation in survival rates within individual brood years, it is obvious that interannual (between brood year) variation in survival rates makes a much greater contribution to overall variation in survival rates than does within brood year variation due to size at release.
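To make the within- versus between-brood-year comparison concrete, the following minimal sketch (Python with NumPy, using the $\hat{p}_0$ estimates from Table 4.6 for three illustrative brood years) decomposes the total sum of squares into between- and within-brood-year components:

```python
import numpy as np

# Estimated survival rates (p0-hat) for TRH fingerling CWT groups,
# taken from Table 4.6 for three illustrative brood years.
surv = {
    1997: [0.0269, 0.0198, 0.0277, 0.0216, 0.0222],
    1998: [0.0062, 0.0055, 0.0039, 0.0034],
    2001: [0.0032, 0.0031, 0.0038, 0.0032, 0.0013, 0.0014],
}

values = np.concatenate([np.asarray(v) for v in surv.values()])
grand_mean = values.mean()
ss_between = sum(len(v) * (np.mean(v) - grand_mean) ** 2
                 for v in surv.values())
ss_within = sum(np.sum((np.asarray(v) - np.mean(v)) ** 2)
                for v in surv.values())
share = ss_between / (ss_between + ss_within)
print(f"between-brood-year share of total variation: {share:.3f}")
```

For these three broods, the between-year component accounts for roughly 96% of the total sum of squares, consistent with the statement above.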
**IGH**
Although IGH appears to have been using distinct CWT codes to tag fish reared in individual raceways since about the 1993 brood year, the reported mean weights for multiple CWT release groups have almost always been identical, ruling out any ability to evaluate the possible effect of size at release on survival rates of fish released as fingerlings from IGH.
#### 4.3.3 Maturation Probabilities versus Release Type
Age-specific maturation probabilities were always lower for yearling than for fingerling release groups of fall and spring Chinook salmon from IGH and TRH, and mean age-specific maturation probabilities decreased with increasing duration of rearing time prior to release for CRH spring Chinook salmon. Mean estimated age-specific maturation probabilities also were distinctive among stocks. For example, TRH fall Chinook salmon released as fingerlings or yearlings had a much greater tendency to mature at age 2 (mean $\sigma_{2F} = 0.1073$, mean $\sigma_{2Y} = 0.0409$) than did IGH fall Chinook salmon released as fingerlings or yearlings (mean $\sigma_{2F} = 0.0323$, mean $\sigma_{2Y} = 0.0089$). TRH spring Chinook salmon released as fingerlings or yearlings had lower age-specific maturation than did TRH fall Chinook salmon of the same release type. Age-specific maturation probabilities were lowest for CRH spring Chinook salmon, especially at ages 3 and 4. Whereas all TRH and IGH stocks and release types had mean age 4 maturation probabilities that exceeded about 90%, mean age 4 maturation probabilities for CRH spring Chinook released from August through October ranged from about 70-80% and mean age 3 maturation probabilities ranged from about 6-12% (compare with 36% and 56% for TRH releases of yearling and fingerling spring Chinook salmon, Table 4.8).
Table 4.6. Estimated survival rates from release to age 2 ($\hat{p}_0$) for TRH fall Chinook salmon fingerlings released at different sizes, 1997 - 2001 brood years. Note that mean weights at release were not reported for all individual CWT release groups for the 1999, 2000 and 2001 brood years. See text for explanation of “assumed” values.
| BY | Raceway | CWT Code | # Released | Reported Wt (g) | Assumed Wt (g) | $\hat{p}_0$ |
|----|---------|----------|------------|----------|---------|-----------|
| 1997 | C1/2 | 065236 | 48,381 | 5.15 | 5.15 | 0.0269 |
| 1997 | C3/4 | 065235 | 49,785 | 4.54 | 4.54 | 0.0198 |
| 1997 | D1/2 | 065234 | 49,353 | 4.20 | 4.20 | 0.0277 |
| 1997 | D3/4 | 065233 | 50,927 | 4.12 | 4.12 | 0.0216 |
| 1997 | E1/2? | 065239 | 18,304 | 2.83 | 2.83 | 0.0222 |
| 1998 | C1/2 | 065242 | 46,399 | 4.28 | 4.28 | 0.0062 |
| 1998 | C3/4 | 065243 | 42,659 | 3.84 | 3.84 | 0.0055 |
| 1998 | D1/2 | 065244 | 49,332 | 3.36 | 3.36 | 0.0039 |
| 1998 | D3/4 | 065245 | 46,391 | 3.22 | 3.22 | 0.0034 |
| 1999 | C1/2 | 065254 | 44,835 | 5.71 | 5.86 | 0.0231 |
| 1999 | C3/4 | 065255 | 43,066 | 5.71 | 5.53 | 0.0124 |
| 1999 | D1/2 | 065256 | 43,921 | 5.01 | 5.19 | 0.0154 |
| 1999 | D3/4 | 065257 | 51,781 | 5.01 | 4.86 | 0.0142 |
| 2000 | A1/2 | 065265 | 32,795 | 8.10 | 9.22 | 0.0329 |
| 2000 | A1/2 | 065271 | 54,867 | 8.10 | 9.22 | 0.0370 |
| 2000 | A1/2 | 065272 | 36,035 | 8.10 | 9.22 | 0.0303 |
| 2000 | A3/4 | 065266 | 33,806 | 8.10 | 8.11 | 0.0277 |
| 2000 | A3/4 | 065275 | 64,250 | 8.10 | 8.11 | 0.0232 |
| 2000 | A3/4 | 065276 | 27,159 | 8.10 | 8.11 | 0.0335 |
| 2000 | B1/2 | 065267 | 34,852 | 8.10 | 7.00 | 0.0283 |
| 2000 | B1/2 | 065273 | 57,444 | 8.10 | 7.00 | 0.0273 |
| 2000 | B1/2 | 065274 | 32,096 | 8.10 | 7.00 | 0.0286 |
| 2000 | C1/2 | 065643 | 25,007 | 6.87 | 6.87 | 0.0306 |
| 2000 | C3/4 | 065268 | 33,240 | 5.27 | 5.27 | 0.0136 |
| 2000 | C3/4 | 065277 | 56,582 | 5.27 | 5.27 | 0.0145 |
| 2000 | C3/4 | 065278 | 34,183 | 5.27 | 5.27 | 0.0154 |
| 2001 | C1/2 | 065284 | 120,531 | 6.39 | 6.66 | 0.0032 |
| 2001 | C3/4 | 065285 | 114,624 | 6.39 | 6.10 | 0.0031 |
| 2001 | D1/2 | 065286 | 126,135 | 5.27 | 5.54 | 0.0038 |
| 2001 | D3/4 | 065287 | 121,607 | 5.27 | 4.99 | 0.0032 |
| 2001 | E3/4 | 065290 | 10,234 | 3.60 | 3.60 | 0.0013 |
| 2001 | E3/4 | 066291 | 8,269 | 3.60 | 3.60 | 0.0014 |
Table 4.7. Estimated survival rates from release to age 2 ($\hat{p}_0$) for TRH fall Chinook salmon fingerlings released at different sizes, 2002-2004 brood years. Reported mean weights (listed in descending value within brood year) at release may not be accurate for multiple groups with same mean weights. Reported mean weight for 2003 BY CWT release group 065316 (1.78 g) was inconsistent with reported mean length (69 mm), so the reported mean weight of group 065315 (4.32 g, with mean length 70 mm) is listed for this group. Raceways were unknown for these groups.
| BY | CWT Code | # Released | Reported Wt (g) | Assumed Wt (g) | $\hat{p}_0$ |
|----|----------|------------|------------------|-----------------|-------------|
| 2002 | 065298 | 124,602 | 5.97 | 5.97 | 0.02378 |
| 2002 | 065299 | 126,729 | 5.97 | 5.97 | 0.01889 |
| 2002 | 065306 | 124,014 | 5.37 | 5.37 | 0.01948 |
| 2002 | 065307 | 123,263 | 5.37 | 5.37 | 0.01508 |
| 2002 | 065292 | 10,355 | 4.30 | 4.30 | 0.01201 |
| 2003 | 065313 | 126,098 | 4.58 | 4.58 | 0.00351 |
| 2003 | 065314 | 132,574 | 4.58 | 4.58 | 0.00240 |
| 2003 | 065315 | 131,548 | 4.32 | 4.32 | 0.00103 |
| 2003 | 065316 | 128,982 | 4.32 | 4.32 | 0.00178 |
| 2003 | 065293 | 11,342 | 3.49 | 3.49 | 0.00070 |
| 2003 | 065294 | 5,230 | 3.49 | 3.49 | 0.00084 |
| 2004 | 065322 | 123,231 | 6.87 | 6.87 | 0.02629 |
| 2004 | 065323 | 120,440 | 6.21 | 6.21 | 0.02472 |
| 2004 | 065325 | 120,518 | 5.81 | 5.81 | 0.02548 |
| 2004 | 065324 | 122,180 | 5.53 | 5.53 | 0.01835 |
#### 4.3.4 Size at Age versus Release Type
For IGH and TRH fall and spring Chinook salmon, mean lengths at age for hatchery returns were consistently larger for fish released as fingerlings than for fish released as yearlings, and mean lengths decreased with duration of rearing time prior to release for CRH spring Chinook salmon. Effects were large at age 2, modest at age 3, and generally very small by age 4. As for maturation probabilities, distinct differences were noted across stocks. For example, at ages 2 and 3, IGH fall Chinook averaged about 30 mm longer than TRH fall Chinook, and lengths at age 4 for CRH spring Chinook (762 - 785 mm for October - August releases) were considerably larger than for TRH spring Chinook (750 mm for fingerlings and 732 mm for yearlings). No attempt was made to statistically test for differences between means, in part because such tests can only be meaningfully made by ensuring that brood years are identical for any given comparison. As noted in Table 4.9, numbers of brood years used to
Table 4.8. Mean estimated age-specific maturation probabilities for IGH and TRH fall Chinook salmon released as fingerlings (June) or yearlings (October), for TRH spring Chinook salmon released as fingerlings (June) or yearlings (October), and for CRH spring Chinook salmon released as subyearlings during August, September or October. Reported means are simple averages of pooled brood-year-specific estimates. Numbers and identities of brood years vary slightly across stock types.
| Fall Chinook | IGH Fing. | TRH Fing. | IGH Year. | TRH Year. |
|--------------|-----------|-----------|-----------|-----------|
| Age 2 | 0.0315 | 0.1029 | 0.0087 | 0.0363 |
| Age 3 | 0.4808 | 0.6447 | 0.2385 | 0.5629 |
| Age 4 | 0.9327 | 0.9362 | 0.9348 | 0.9407 |
| Spring Chinook | TRH Fing. | CRH Aug. | CRH Sep. | CRH Oct. | TRH Year. |
|----------------|-----------|----------|----------|----------|-----------|
| Age 2 | 0.0409 | 0.0263 | 0.0150 | 0.0129 | 0.0199 |
| Age 3 | 0.5577 | 0.1168 | 0.0761 | 0.0613 | 0.3526 |
| Age 4 | 0.9365 | 0.8025 | 0.7271 | 0.7052 | 0.9031 |
calculate the reported means for the various stocks and release types varied substantially, especially for lengths at age 2. Age 2 maturation probabilities were sufficiently low for some stocks and release types (e.g., CRH spring Chinook, IGH fall yearlings) that there were no reported hatchery lengths for some or many brood years.
#### 4.3.5 Ocean Fishery Exploitation Rates versus Release Type
As an apparent consequence of the larger average size at age 3 for fingerling as compared to yearling releases, estimated age 3 pre-maturation ocean fishery exploitation rates for IGH and TRH fall Chinook salmon were typically substantially larger for fingerling releases than for yearling releases (Figure 4.2). Exploitation rates for fully vulnerable age 4 fish were typically substantially larger than for age 3 fish at large during the same fishing season, but age 4 exploitation rates for fingerlings were not consistently larger than those for yearlings (Figure 4.3). Indeed, estimated age 4 ocean fishery exploitation rates of yearling releases were often greater than those of fingerlings from the same brood year.
Table 4.9. Mean lengths at age for hatchery returns of IGH and TRH fall Chinook salmon released as fingerlings (June) or yearlings (October), for TRH spring Chinook salmon released as fingerlings (June) or yearlings (October), and for CRH spring Chinook salmon released as subyearlings during August, September or October. Reported means are simple averages of pooled brood-year-specific mean lengths. Numbers (in parentheses) and identities of brood years vary across stock types, due to absence of hatchery returns at ages 2 or 4 and/or due to misreporting of mean lengths in some years.
| Fall Chinook | IGH Fing. | TRH Fing. | IGH Year. | TRH Year. |
|--------------|-----------|-----------|-----------|-----------|
| Age 2 | 523.1 (17)| 490.2 (22)| 460.4 (15)| 456.5 (25)|
| Age 3 | 689.6 (24)| 664.1 (25)| 644.2 (24)| 635.3 (26)|
| Age 4 | 792.4 (24)| 763.6 (24)| 774.9 (24)| 758.9 (26)|
| Spring Chinook | TRH Fing. | CRH Aug. | CRH Sep. | CRH Oct. | TRH Year. |
|----------------|-----------|----------|----------|----------|-----------|
| Age 2 | 473.5 (21)| 430.5 (11)| 406.0 (6)| 404.7 (7)| 433.4 (24)|
| Age 3 | 658.6 (22)| 654.5 (15)| 632.5 (13)| 607.4 (14)| 617.2 (25)|
| Age 4 | 755.0 (22)| 785.4 (16)| 772.1 (17)| 762.4 (19)| 735.9 (25)|
Figure 4.2. Estimated mean age 3 pre-maturation ocean fishery exploitation rates for fingerling and yearling releases of fall Chinook salmon from IGH and TRH, brood years 1978-2004.
Figure 4.3. Estimated mean age 4 pre-maturation ocean fishery exploitation rates for fingerling and yearling releases of fall Chinook salmon from IGH and TRH, brood years 1978-2004.
### 4.4 Covariance Among Estimated Parameters Across Hatcheries and Races
All stocks showed striking interannual variation in survival rates from release to age 2, age-specific maturation probabilities, and size at age. In many cases, there was striking covariance between stocks in these attributes. Generally, covariance was strongest between stocks released from the same hatchery at the same approximate date (e.g., TRH spring and fall Chinook released as fingerlings or as yearlings). Covariation between stocks was also quite strong for IGH and TRH fall Chinook released as fingerlings or yearlings. Survival rates of CRH October releases of sub-yearling spring Chinook showed less covariation with TRH spring Chinook releases, but covariation was still evident even though the distance between the mouths of the Klamath and Rogue Rivers is considerable (about 70 miles).
#### 4.4.1 Survival Rates
In this section, we present an example of interannual variation in survival rates of fingerlings as compared to yearlings from the same stock type, but we focus our attention primarily on covariation between estimates of survival rates for the same release type but from different stocks. We begin with the strongest detected covariation (between spring and fall stocks of Chinook salmon released from TRH); we then examine covariation between estimated survival rates of stocks within the same system (fall Chinook released from IGH and TRH); and we conclude with a comparison of survival rates of CRH and TRH spring Chinook salmon released as subyearlings in October (termed “yearlings” in CA).
**Fingerlings vs Yearlings**
Survival rates of yearlings exceeded survival rates of fingerlings in almost all years for all stocks, but in a small number of instances survival rates of fish released as fingerlings in June were comparable to or slightly exceeded survival rates of fish released in October as yearlings. We used a log transformation of mean brood-year-specific survival rates to evaluate correlations across years, as adopted in the recent stock-recruitment analysis of Klamath River Chinook salmon (STT 2005). Correlations between these log-transformed mean survival rates of fish released as fingerlings or yearlings from the same brood year were generally not strong. We present only one illustrative example, Iron Gate fall Chinook salmon released as fingerlings or yearlings (Figure 4.4), for which the correlation between estimated *log* mean survival rates was just 0.342 (*n* = 22 brood years). Correlations between survival rates of fish from different stocks released at the same approximate time (e.g., fingerling fall Chinook from IGH and TRH) were highly significant, however, and are discussed below. We present figures to illustrate covariation because many of these patterns are visually striking.
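The correlation calculation itself is straightforward; a minimal sketch (Python with NumPy) follows, where the survival-rate vectors are hypothetical stand-ins for the pooled brood-year estimates:

```python
import numpy as np

# Hypothetical paired brood-year mean survival-rate series for two
# stocks; the actual values are the pooled cohort-analysis estimates.
s_igh = np.array([0.0042, 0.0110, 0.0251, 0.0023, 0.0178, 0.0095])
s_trh = np.array([0.0061, 0.0149, 0.0312, 0.0035, 0.0220, 0.0130])

# Pearson correlation of the log-transformed rates, as in STT (2005).
r = np.corrcoef(np.log(s_igh), np.log(s_trh))[0, 1]
print(f"correlation of log-transformed survival rates: r = {r:.3f}")
```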
Although CRH released no Rogue spring Chinook salmon as fingerlings, it was possible to compare survival rates for fish released from the same brood years in the months of August, September or October (Figure 4.5). Correlations between log-transformed mean survival rates across brood years were very high for August compared to September releases (*r* = 0.851), but were less substantial for comparisons of August with October (*r* = 0.620) or September with October (*r* = 0.544).
**TRH Fall vs Spring Chinook**
The strongest degree of interannual covariation in log-transformed mean survival rates was seen in comparisons of survival rates of fingerlings and yearlings for fall and spring Chinook salmon released from TRH. Calculated correlations between log-transformed survival rates of fall and spring Chinook released from TRH were extremely high: 0.874 for fingerlings (Figure 4.6) and 0.911 for yearlings (Figure 4.7).
**IGH vs TRH Fall Chinook**
Although not as highly correlated as survival rates between releases of the same type for different races of Chinook reared at TRH, log-transformed survival rates for
Figure 4.4. Estimated brood year-specific log-transformed mean survival rates from release to age 2 for IGH releases of fall Chinook salmon as fingerlings (June) or yearlings (October), 1978-2004 brood years.
Figure 4.5. Estimated brood year-specific mean log-transformed survival rates from release to age 2 for CRH releases of spring Chinook salmon in August, September or October, 1975-2001 brood years.
Figure 4.6. Estimated brood year-specific log-transformed mean survival rates from release to age 2 for TRH releases of fingerling fall and spring Chinook salmon, 1979-2004 brood years.
Figure 4.7. Estimated brood year-specific log-transformed mean survival rates from release to age 2 for TRH releases of yearling fall and spring Chinook salmon, 1979-2004 brood years.
fall Chinook released as fingerlings from TRH were nevertheless very highly correlated with those for fingerlings released from IGH ($r = 0.799$, Figure 4.8), and log-transformed survival rates were also strongly correlated for yearlings released from the two Klamath system hatcheries ($r = 0.582$, Figure 4.9). We saw no clear evidence that relative survival rates of IGH fall Chinook compared to TRH fall Chinook have decreased over the past decade or so due to changes in water management policies in the upper Klamath River. Indeed, mean survival rates for fingerlings released from IGH actually exceeded those for fingerlings released from TRH for the 1997 and 1998 brood years, a relatively unusual situation when compared to the complete time series of survival rates (see Figure 4.8). Survival rates of IGH fingerlings and yearlings have typically been lower than those for TRH fingerlings or yearlings released from the same brood year.

**Figure 4.8.** Estimated brood year-specific log-transformed mean survival rates from release to age 2 for TRH and IGH releases of fingerling fall Chinook salmon, 1979-2004 brood years.
**TRH vs CRH Spring Chinook**
Covariation of log-transformed mean survival rates from release to age 2 was less striking for October releases of spring Chinook from CRH as compared to TRH (yearlings). The correlation over the entire data set was just 0.322, considerably less than for other between-stock or between-hatchery calculations, but this correlation was heavily influenced by the 1991 brood year, for which survival rates of CRH CWT groups were quite good (0.095) but survival rates of TRH CWT groups were unusually poor (0.0118, see Figure 4.10). With the 1991 data point removed, the
Figure 4.9. Estimated brood year-specific log-transformed mean survival rates from release to age 2 for TRH and IGH releases of yearling fall Chinook salmon, 1979-2004 brood years.
correlation improved to 0.515, similar to the calculated correlation between yearling releases of fall Chinook salmon made from IGH and TRH (0.582).
#### 4.4.2 Maturation Probabilities
Maturation probabilities often displayed striking covariation across stocks and hatcheries. Similar to covariation in survival rates, we found strongest covariation in maturation probabilities between stocks released from the same hatchery. We present only two visually striking graphical examples of such covariation from the many comparisons that were made, but we provide a summary table that reports all calculated correlations.
The most striking examples of covariation of maturation probabilities were for comparisons of age 2 and age 3 maturation probabilities for TRH fall Chinook salmon released as fingerlings compared to yearlings ($r = 0.822$ and $r = 0.825$, respectively) or for comparisons of age 2 and age 3 maturation probabilities of CRH spring Chinook released in August, September or October. For example, Figures 4.11 and 4.12, respectively, display age 2 and age 3 maturation probabilities for TRH fall Chinook released as fingerlings as compared to yearlings. In addition to illustrating the strong covariation in estimated maturation probabilities for the two release types, Figure 4.11 also suggests a possible long-term trend to lower age 2 maturation probabilities for yearling releases and possibly also for fingerling releases.
Covariation of age 2 and age 3 maturation probabilities of Rogue spring Chinook released in August, September and October was even stronger than that for TRH fingerlings and yearlings. These correlations ranged from 0.751 (age 3 maturation probabilities for August vs October releases) to 0.984 (age 2 maturation probabilities for August vs October releases, see Figure 4.13) and exceeded 0.920 in 5 of 6 comparisons. Correlations for all calculated comparisons are presented in Table 4.10.
#### 4.4.3 Size at Age
Interannual variation in size at age (measured by size of fish at return to hatcheries) was substantial across all stocks and races and there was striking covariation in size at age across all stocks and races. For illustrative purposes, we focus on comparisons between the most similar stocks (TRH fall and spring Chinook), similar stocks released from different hatcheries (IGH vs TRH fall Chinook) and the most dissimilar stocks (TRH and CRH spring Chinook).
Figure 4.11. Estimated brood year-specific age 2 maturation probabilities for TRH fall Chinook salmon released as fingerlings (June) or as yearlings (October), 1979-2004 brood years.
Figure 4.12. Estimated brood year-specific age 3 maturation probabilities for TRH fall Chinook salmon released as fingerlings (June) or as yearlings (October), 1979-2004 brood years.
Figure 4.13. Estimated brood year-specific age 2 maturation probabilities for CRH spring Chinook salmon released as subyearlings in August as compared to October, 1975-2001 brood years.
**TRH Fall vs Spring Chinook Released as Fingerlings**
Spring and fall Chinook reared and released from TRH as fingerlings share essentially identical rearing and release history and are very closely related from a genetic perspective. It would therefore be reasonable to suppose that lengths at age for these two stock types would be most highly correlated. As Figure 4.14 illustrates, lengths at age are indeed very highly correlated for these two populations.
**IGH vs TRH Fall Chinook Released as Fingerlings**
Although IGH and TRH fall Chinook are genetically distinct and experience different rearing and downstream migration histories, fingerlings from the two hatcheries are released at a similar time (early June) and should theoretically experience very similar ocean growing conditions. Therefore, it would be reasonable, *a priori*, to assume that lengths at age would be highly correlated between these two stocks from the Klamath River. As Figure 4.15 illustrates, size at age for these two stocks clearly covary in a striking fashion, with fish from IGH typically larger than those from TRH.
**TRH vs CRH Spring Chinook released as Yearlings**
TRH and CRH spring Chinook stocks are the most genetically distinct stocks considered in this report, do not share rearing or downstream migration histories,
Table 4.10. Calculated correlations between stocks and/or release types for estimated age 2 and age 3 maturation probabilities.
| Maturation Probability | Comparison | Correlation | Sample Size |
|------------------------|-----------------------------|-------------|-------------|
| $\sigma_2$ | IGH vs TRH Fingerlings | 0.687 | 23 |
| $\sigma_3$ | IGH vs TRH Fingerlings | 0.159 | 23 |
| $\sigma_2$ | IGH vs TRH Yearlings | 0.324 | 22 |
| $\sigma_3$ | IGH vs TRH Yearlings | 0.394 | 22 |
| $\sigma_2$ | IGH Fall: Fing. vs Year. | 0.545 | 22 |
| $\sigma_3$ | IGH Fall: Fing. vs Year. | 0.559 | 22 |
| $\sigma_2$ | TRH Fall: Fing. vs Year. | 0.822 | 24 |
| $\sigma_3$ | TRH Fall: Fing. vs Year. | 0.825 | 24 |
| $\sigma_2$ | TRH Spring: Fing. vs Year. | -0.034 | 24 |
| $\sigma_3$ | TRH Spring: Fing. vs Year. | 0.571 | 24 |
| $\sigma_2$ | CRH Spring: Aug. vs Sept. | 0.966 | 17 |
| $\sigma_2$ | CRH Spring: Aug. vs Oct. | 0.984 | 17 |
| $\sigma_2$ | CRH Spring: Sept vs Oct. | 0.950 | 18 |
| $\sigma_3$ | CRH Spring: Aug. vs Sept. | 0.920 | 18 |
| $\sigma_3$ | CRH Spring: Aug. vs Oct. | 0.751 | 17 |
| $\sigma_3$ | CRH Spring: Sept vs Oct. | 0.926 | 18 |
and their release locations are most distant from one another among those stocks considered in this report. Therefore, *a priori*, one might suspect that lengths at age would covary less strongly than for the previous two comparisons. As Figure 4.16 illustrates, however, lengths at age for October releases from these two stocks also exhibit striking interannual covariation. This strong degree of covariation presumably is a reflection of the fact that the two stocks experience very similar ocean growing conditions during their lives, probably also suggesting that the two stocks may share very similar ocean migration patterns.
Figure 4.14. Mean lengths at age (measured at hatchery return) for spring and fall Chinook salmon released from Trinity River Hatchery as fingerlings, 1978-2004 brood years.
### 4.5 Simulated Cohort Analysis Estimator Performance versus CWT Release Group Size
Results of simulations (based on IGH fingerling releases, see Table 4.15, and assuming a September 1 cut-off date) for release group sizes ($R$) of 25K, 50K, 100K, 200K, and 400K showed that cohort analysis estimators of most model parameters were approximately unbiased for release group sizes exceeding 50K fish; a small positive bias seemed evident for estimation of $E_{4,pos}$, even at the largest release group size (400K). The estimator of survival from release to ocean age 2 was approximately unbiased for all release group sizes. Thus, with the exception of $E_{4,pos}$, selection of release group size should not be strongly influenced by bias considerations.
Simulated coefficients of variation (CV), however, varied considerably across model parameters. The most poorly behaved parameters were the post-maturation exploitation rates, $E_{3,pos}$ and $E_{4,pos}$, for which CV exceeded 200% for $R = 25K$ and exceeded 50% even for $R = 400K$. Parameters with intermediate behavior included the two pre-maturation exploitation rates, $E_{3,pre}$ and $E_{4,pre}$, and the age 2 maturation parameter, $\sigma_2$. For these parameters, CV ranged from 76-97% for $R = 25K$, but decreased to 19-24% at $R = 400K$. Estimators of the remaining parameters, $\sigma_3$, $\sigma_4$ and $S_0$, were well-behaved, even for small release group sizes. For these parameters, CV ranged from 11-30% at $R = 25K$ and decreased to 3-7% at $R = 400K$ (Table 4.11).

Figure 4.15. Mean lengths at age (measured at hatchery return) for fall Chinook salmon released as fingerlings from IGH and TRH, 1978-2004 brood years.
As a rough rule of thumb for fishery management, it is desirable that the CV for estimated parameters be not much more than 25%. On that basis, it seems that $R = 200K$ or perhaps slightly larger (say, 250K) would be a wise choice of release group size given the level of ocean exploitation that has been typical over the past 10-15 years. This would give CV less than 30% for the critical pre-maturation exploitation rates, about 33% for the age 2 maturation parameter, and no more than 10% for the remaining maturation parameters and for survival rate to age 2. This release group size would not be large enough to allow accurate estimation of the post-maturation exploitation rates at ages 3 or 4, however. Estimation of $E_{4,pos}$ is especially problematic because essentially all Klamath River Chinook mature at age 4;
thus, few should remain available to potential ocean capture in the post-maturation ocean fishery.
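The simulation design can be sketched compactly. The following Python code is a minimal sketch, not the exact simulator used for Table 4.11: it draws multinomial fates for a release of $R$ fish using the IGH fingerling parameter values of Table 4.15, then Poisson-thins the recoveries at an assumed uniform 20% sampling rate. The ordering of events within each age (pre-maturation fishery, maturation, post-maturation fishery, overwinter survival) is one plausible simplification, and the cohort-analysis estimation step applied to each simulated recovery vector is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameter values (IGH fingerlings, Table 4.15); the
# inter-age ocean survival rates (0.5, 0.8) are the report's assumptions.
S0, SIG2, SIG3, SIG4 = 0.0097, 0.0286, 0.4219, 0.9345
E3PRE, E3POS, E4PRE, E4POS = 0.0852, 0.0208, 0.1579, 0.0755
P23, P34 = 0.5, 0.8
SAMPLE = 0.20  # assumed uniform 20% sampling rate for all recovery strata

def fate_probs():
    """Probability that a released fish ends in each recovery stratum."""
    p = {}
    a2 = S0                                    # alive in the ocean at age 2
    p["return2"] = a2 * SIG2                   # matures and returns at age 2
    a3 = a2 * (1 - SIG2) * P23                 # survives to age 3
    p["catch3_pre"] = a3 * E3PRE               # age 3 pre-maturation catch
    rem = a3 * (1 - E3PRE)
    p["return3"] = rem * SIG3
    p["catch3_pos"] = rem * (1 - SIG3) * E3POS
    a4 = rem * (1 - SIG3) * (1 - E3POS) * P34  # survives to age 4
    p["catch4_pre"] = a4 * E4PRE
    rem4 = a4 * (1 - E4PRE)
    p["return4"] = rem4 * SIG4
    p["catch4_pos"] = rem4 * (1 - SIG4) * E4POS
    p["never_recovered"] = 1.0 - sum(p.values())
    return p

def simulate(R, nsim=100_000):
    """Multinomial fates for R released fish, Poisson-thinned at 20%."""
    probs = fate_probs()
    counts = rng.multinomial(R, list(probs.values()), size=nsim)
    recoveries = rng.poisson(counts * SAMPLE)
    return list(probs), recoveries

labels, rec = simulate(200_000)
```

Feeding each simulated recovery vector through the cohort-analysis estimating equations yields sampling distributions whose means and CVs are summarized in Table 4.11.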
### 4.6 Survival Rates versus Flow - IGH and TRH
For IGH releases of fingerling fall Chinook salmon, a plot of *log* estimated mean annual survival rates against mean annual flows for the month of June at the Seiad gauge (Figure 4.17) suggested a highly non-random association of survival rates and flow and, among other things, implied that survival rates are generally high when fish are released under high flows.
A similar plot of *log* estimated mean annual survival rates for TRH releases of fingerling Chinook salmon against mean June flows at Burnt Ranch (upper Trinity River) failed to suggest any relation between survival and Trinity River flows (Figure 4.18), however, even though June flows at the two locations were highly correlated with one another ($r = 0.868$, see Figure 4.19).
Table 4.11. Simulated expected values and coefficients of variation (standard deviation/expected value) of parameters estimated from cohort analyses as a function of release group size ($R = 25K, 50K, 100K, 200K, 400K$). Assumes that age 2 ocean impact rates are 0. Based on multinomial model for fates of released fish, Poisson model for 20% level of sampling, and 100,000 independent simulations for each release group size.
| Parameter | True Value | 25K | 50K | 100K | 200K | 400K |
|-----------|------------|-------|-------|-------|-------|-------|
| $E_{3,pre}$ | 0.0852 | 0.0861| 0.0851| 0.0858| 0.0853| 0.0852|
| $E_{3,pos}$ | 0.0208 | 0.0214| 0.0216| 0.0211| 0.0210| 0.0208|
| $E_{4,pre}$ | 0.1579 | 0.1595| 0.1578| 0.1582| 0.1572| 0.1575|
| $E_{4,pos}$ | 0.0755 | 0.0945| 0.0874| 0.0869| 0.0810| 0.0800|
| $\sigma_2$ | 0.0286 | 0.0299| 0.0291| 0.0290| 0.0287| 0.0288|
| $\sigma_3$ | 0.4219 | 0.4246| 0.4248| 0.4216| 0.4219| 0.4227|
| $\sigma_4$ | 0.9345 | 0.9364| 0.9358| 0.9353| 0.9345| 0.9344|
| $S_0$ | 0.0097 | 0.0097| 0.0097| 0.0097| 0.0097| 0.0097|
**Coefficients of Variation**

| Parameter | 25K | 50K | 100K | 200K | 400K |
|-----------|-------|-------|-------|-------|-------|
| $E_{3,pre}$ | 0.7611| 0.5378| 0.3747| 0.2630| 0.1867|
| $E_{3,pos}$ | 2.2651| 1.5577| 1.0748| 0.7457| 0.5374|
| $E_{4,pre}$ | 0.8516| 0.5967| 0.4118| 0.2897| 0.2047|
| $E_{4,pos}$ | 2.9394| 2.9217| 2.6221| 2.0958| 1.4340|
| $\sigma_2$ | 0.9660| 0.6714| 0.4659| 0.3308| 0.2367|
| $\sigma_3$ | 0.2999| 0.2086| 0.1474| 0.1015| 0.0721|
| $\sigma_4$ | 0.1138| 0.0795| 0.0558| 0.0392| 0.0280|
| $S_0$ | 0.2340| 0.1671| 0.1177| 0.0840| 0.0584|
Our tentative conclusion that survival rates may be affected by river flow following release rests strongly on an observation that the four highest flows in Figure 4.17 are associated with generally high survival rates; at much lower flows, variation in survival rates is large and the mean survival rate is considerably lower. Because our tentative interpretation of these data rests on the location of just four data points, we used a Monte Carlo permutation test (see, e.g., Good 2005) to assess the statistical plausibility of this interpretation.
We calculated the probability that the relatively high survival rates would be associated with the four highest flows under a null hypothesis that survival rates and flows were completely uncorrelated with one another. To accomplish this test, we made a large number (40,000) of independent, random rearrangements of the time series
Figure 4.17. Estimated brood year-specific mean survival rates from release to age 2 for 1978-2004 brood year IGH releases of fingerling fall Chinook salmon (released in early June) plotted against mean June flow (cfs) at Seiad gauge, upper Klamath River, during year of release.
of flows and survival rates. For each set of random rearrangements, we calculated the difference between the mean survival rate (or mean log(survival rate)) associated with the four highest flows and the mean survival rate (or mean log(survival rate)) associated with the remaining flows. We then calculated the probability of observing a difference as large or larger than the actual difference implied by Figure 4.17. We found that the probability of finding differences as large as those observed was about 0.021 and 0.019, respectively, for the differences between mean survival rates and mean log(survival rates) (see Figure 4.20). We conclude that there is substantial, although not convincing, reason to reject a null hypothesis that flows and survival rates are uncorrelated. Therefore, we continue to suspect that survival rates of IGH fall Chinook salmon are positively associated with high flows in the upper Klamath River during the period of release and downstream migration.
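The test is easy to reproduce. A minimal Python sketch follows; the `flows` and `surv` arrays are hypothetical stand-ins for the actual Seiad June-flow and brood-year survival series plotted in Figure 4.17:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_permutation_test(flows, surv, k=4, nperm=40_000, log_scale=True):
    """One-sided Monte Carlo permutation test: do the k highest-flow
    years have a larger mean (log) survival rate than the rest?"""
    y = np.log(surv) if log_scale else np.asarray(surv, dtype=float)
    order = np.argsort(flows)
    observed = y[order[-k:]].mean() - y[order[:-k]].mean()
    diffs = np.empty(nperm)
    for i in range(nperm):
        yp = rng.permutation(y)           # break any flow-survival link
        diffs[i] = yp[:k].mean() - yp[k:].mean()
    return np.mean(diffs >= observed)     # permutation p-value

# Hypothetical flow (cfs) and survival series, for illustration only.
flows = np.array([1200, 3400, 900, 5100, 2200, 8700, 760, 6900, 1500, 4100])
surv = np.array([0.004, 0.012, 0.002, 0.019, 0.006,
                 0.031, 0.001, 0.022, 0.005, 0.015])
print(f"p-value = {flow_permutation_test(flows, surv):.3f}")
```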
Figure 4.18. Estimated brood year-specific *log* mean survival rates from release to age 2 for 1978-2004 brood year TRH releases of fingerling fall Chinook salmon (released in early June) plotted against mean June flow (cfs) at Burnt Ranch, upper Trinity River, during year of release.
Figure 4.19. Mean June flows at Seiad (Klamath River) and Burnt Ranch (Trinity River) USGS gauges, flow years 1979-2005.
Figure 4.20. Simulated distribution of differences between the mean survival rates (or log(survival rates)) associated with the four highest June flows at Seiad and the mean survival rates (or log(survival rates)) associated with lower flows, under a null hypothesis that flows and survival rates are uncorrelated. The probability of observed differences is equivalent to the proportion of the simulated differences that exceed the observed differences.
### 4.7 Alternative Mixes of Fingerling and Yearling Releases
#### 4.7.1 Trinity River Hatchery
As noted previously, production goals for fall Chinook salmon at TRH are 2.00 million fingerlings, released in early June, and 0.90 million yearlings, released in early October. Mean estimated parameter values were used to characterize typical life history and (recent) fishery model parameters for TRH fingerlings and yearlings and are summarized in Table 4.12.
We used the parameter values from Table 4.12 to calculate expected age-specific ocean impacts (catches + non-landed mortalities) prior to (OCN.pre) and after (OCN.post) maturation; expected age-specific freshwater catches (FW.Catch); and expected age-specific freshwater escapements from fingerling (*.F) or yearling (*.Y) releases given a specified mixture of releases. Table 4.13 shows the expected adult (age 3 and older) contributions (numbers of fish) from fingerlings and yearlings for the current TRH production releases consisting of 2.00 million fingerlings and 0.90 million yearlings.
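The expected-contribution calculation can be sketched as a simple life-cycle projection. The code below is a minimal sketch: it assumes one plausible ordering of events within each age (pre-maturation ocean fishery, maturation, post-maturation ocean fishery on non-maturing fish, then overwinter survival at the report's assumed rates of 0.5 and 0.8), and takes $\sigma_5 = 1$ and $E_{5,pos} = 0$. Ocean impacts at ages 3 and 4 computed this way land close to the corresponding entries of Table 4.13, though other bookkeeping details (non-landed mortalities, the freshwater split, age 5) may differ from the report's calculations:

```python
# Parameter values for TRH fingerlings (Table 4.12); sigma_5 = 1.0 and
# E_{5,pos} = 0.0 are assumptions, following the report's conventions.
FING = dict(
    S0=0.0146,
    sigma={2: 0.0989, 3: 0.6257, 4: 0.9483, 5: 1.0},
    E_pre={2: 0.0, 3: 0.1009, 4: 0.1896, 5: 0.0292},
    E_pos={2: 0.0, 3: 0.0111, 4: 0.0248, 5: 0.0},
    u={2: 0.0, 3: 0.1585, 4: 0.2559, 5: 0.2402},
)
SURV = {2: 0.5, 3: 0.8, 4: 0.8}  # assumed ocean survival from age a to a+1

def project(released, p):
    """Expected ocean impacts, freshwater catch and spawners by age."""
    alive = released * p["S0"]                     # ocean abundance at age 2
    out = {}
    for age in (2, 3, 4, 5):
        ocn_pre = alive * p["E_pre"][age]          # pre-maturation impacts
        after_pre = alive - ocn_pre
        run = after_pre * p["sigma"][age]          # maturing fish
        immature = after_pre - run
        ocn_pos = immature * p["E_pos"][age]       # post-maturation impacts
        out[age] = dict(ocn_pre=ocn_pre, ocn_pos=ocn_pos,
                        fw_catch=run * p["u"][age],
                        spawners=run * (1.0 - p["u"][age]))
        alive = (immature - ocn_pos) * SURV.get(age, 0.0)
    return out

fing = project(2_000_000, FING)   # current fingerling production goal
```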
Table 4.12. Mean parameter values estimated from CWT recoveries of fingerling and yearling fall Chinook salmon released from Trinity River Hatchery: 1978-2001 BYs (life history parameters) and 1990-2001 BYs (fishery parameters).
| Parameter | Fingerlings | Yearlings |
|-----------------|-------------|-----------|
| Survival to Age 2 | 0.0146 | 0.0807 |
| $\sigma_2$ | 0.0989 | 0.0386 |
| $\sigma_3$ | 0.6257 | 0.5609 |
| $\sigma_4$ | 0.9483 | 0.9770 |
| $E_{2,pre}$ | 0.0000 | 0.0000 |
| $E_{2,pos}$ | 0.0000 | 0.0000 |
| $E_{3,pre}$ | 0.1009 | 0.0604 |
| $E_{3,pos}$ | 0.0111 | 0.0051 |
| $E_{4,pre}$ | 0.1896 | 0.1873 |
| $E_{4,pos}$ | 0.0248 | 0.0337 |
| $E_{5,pre}$ | 0.0292 | 0.0292 |
| $u_3$ | 0.1585 | 0.1137 |
| $u_4$ | 0.2559 | 0.2244 |
| $u_5$ | 0.2402 | 0.2402 |
| $l_2$ (mm) | 489.4 | 455.9 |
| $l_3$ (mm) | 659.5 | 631.9 |
| $l_4$ (mm) | 757.2 | 753.7 |
| $l_5$ (mm) | 850.0 | 850.0 |
Table 4.13. Expected age-specific ocean impacts, freshwater catches, and spawning escapements for current TRH production releases of 2.00 million fingerlings (*.F) and 0.900 million yearlings (*.Y).
| Age | OCN.pre.F | OCN.pos.F | FW.Catch.F | Escape.F |
|-----|-----------|-----------|------------|----------|
| 3 | 1,327 | 49 | 1,173 | 7,401 |
| 4 | 664 | 4 | 689 | 2,691 |
| 5 | 22 | 0 | 22 | 93 |
| Totals | 2,013 | 53 | 1,884 | 10,185 |
| Age | OCN.pre.Y | OCN.pos.Y | FW.Catch.Y | Escape.Y |
|-----|-----------|-----------|------------|----------|
| 3 | 794 | 76 | 3,033 | 19,133 |
| 4 | 2,233 | 7 | 2,422 | 9,465 |
| 5 | 32 | 0 | 34 | 140 |
| Totals | 3,059 | 83 | 5,489 | 28,738 |
We also calculated a proxy for contributions of fish in terms of biomass by multiplying expected numerical contributions by the cube of the corresponding length at age and scaling by a factor of $10^{-8}$. Expected combined (fingerling + yearling) contributions, expressed by numbers of fish and by biomass proxy, for different mixtures of fingerling and yearling releases are summarized in Table 4.14.
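In symbols, the biomass proxy for an expected contribution of $N_a$ fish at age $a$ with mean length $\bar{l}_a$ (mm) is:

$$B_a = N_a \, \bar{l}_a^{\,3} \times 10^{-8}$$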
Table 4.14. Expected total age-specific ocean impacts, freshwater catches, and spawning escapements, expressed as numbers of adults (age 3 and older) or by associated biomass proxies, for different mixtures of fingerling and yearling releases of fall Chinook salmon from Trinity River Hatchery.
**Number of Age 3 and Older Adults**

| Fingerlings Released (x10^6) | Yearlings Released (x10^6) | OCN Impacts | FW Catch | Escape | Total |
|------|------|------|------|------|------|
| 2.000 | 0.900 | 5,209 | 7,372 | 31,550 | 44,131 |
| 1.750 | 0.945 | 4,978 | 7,433 | 31,770 | 44,182 |
| 1.500 | 0.990 | 4,748 | 7,495 | 31,989 | 44,232 |
| 1.250 | 1.036 | 4,520 | 7,562 | 32,235 | 44,318 |
| 1.000 | 1.081 | 4,290 | 7,623 | 32,455 | 44,368 |

**Biomass (Proxy): Age 3 and Older**

| Fingerlings Released (x10^6) | Yearlings Released (x10^6) | OCN Impacts | FW Catch | Escape | Total |
|------|------|------|------|------|------|
| 2.000 | 0.900 | 20,824 | 27,007 | 111,351 | 159,182 |
| 1.750 | 0.945 | 20,147 | 27,152 | 111,897 | 159,197 |
| 1.500 | 0.990 | 19,471 | 27,297 | 112,444 | 159,211 |
| 1.250 | 1.036 | 18,807 | 27,464 | 113,081 | 159,352 |
| 1.000 | 1.081 | 18,130 | 27,609 | 113,628 | 159,366 |
#### 4.7.2 Iron Gate Hatchery
Mean parameter values used in calculations of expected contributions from alternative mixtures of releases of fingerling and yearling fall Chinook salmon from Iron Gate Hatchery are listed in Table 4.15. Summarized expected combined (fingerling + yearling) contributions given different mixtures of fingerling and yearling releases from IGH are presented in Table 4.16.
Table 4.15. Mean parameter values estimated from CWT recoveries of fingerling and yearling fall Chinook salmon released from Iron Gate Hatchery: 1978-2001 BYs (life history parameters) and 1990-2001 BYs (fishery parameters). We assume no age 2 ocean impacts.
| Parameter | Fingerlings | Yearlings |
|--------------------|-------------|-----------|
| Survival to Age 2 | 0.0097 | 0.0397 |
| $\sigma_2$ | 0.0286 | 0.0008 |
| $\sigma_3$ | 0.4219 | 0.2174 |
| $\sigma_4$ | 0.9345 | 0.9328 |
| $E_{3,pre}$ | 0.0852 | 0.0333 |
| $E_{3,pos}$ | 0.0208 | 0.0067 |
| $E_{4,pre}$ | 0.1579 | 0.1768 |
| $E_{4,pos}$ | 0.0755 | 0.0838 |
| $E_{5,pre}$ | 0.1673 | 0.1673 |
| $u_3$ | 0.1245 | 0.1260 |
| $u_4$ | 0.3065 | 0.3567 |
| $u_5$ | 0.3316 | 0.3316 |
| $\bar{l}_2$ (mm) | 528.4 | 475.3 |
| $\bar{l}_3$ (mm) | 688.1 | 648.3 |
| $\bar{l}_4$ (mm) | 789.4 | 774.1 |
| $\bar{l}_5$ (mm) | 850.0 | 850.0 |
Table 4.16. Expected total age-specific ocean impacts, freshwater catches, and spawning escapements, expressed as numbers of adults (age 3 and older) or by associated biomass proxies, for different mixtures of fingerling and yearling releases of fall Chinook salmon from Iron Gate Hatchery.
**Number of Age 3 and Older Adults**

| Fingerlings Released (x10^6) | Yearlings Released (x10^6) | OCN Impacts | FW Catch | Escape | Total |
|------|------|------|------|------|------|
| 4.920 | 0.900 | | | | |
| 4.500 | 1.002 | | | | |
| 4.000 | 1.125 | | | | |
| 3.500 | 1.247 | | | | |
| 3.000 | 1.370 | | | | |

**Biomass (Proxy): Age 3 and Older**

| Fingerlings Released (x10^6) | Yearlings Released (x10^6) | OCN Impacts | FW Catch | Escape | Total |
|------|------|------|------|------|------|
| 4.920 | 0.900 | 26,973 | 29,166 | 87,062 | 143,201 |
| 4.500 | 1.002 | 26,649 | 29,470 | 86,760 | 142,879 |
| 4.000 | 1.125 | 26,281 | 29,857 | 86,464 | 142,601 |
| 3.500 | 1.247 | 25,901 | 30,228 | 86,127 | 142,256 |
| 3.000 | 1.370 | 25,533 | 30,614 | 85,830 | 141,977 |
Analyses of the approximately 25 years of CWT recovery data for spring and fall Chinook salmon released from IGH, TRH and CRH have strongly confirmed the conjectures made by Hankin (1990). Namely, release of Chinook salmon at a larger size (e.g., 40-50 g vs 4-5 g) in a later month (e.g., October vs June) leads to (a) substantially improved survival to age 2 (an average of 400-500% improvement); (b) substantially reduced size at ages 2 and 3; (c) reduced maturation probabilities at ages 2 and 3; and (d) reduced age 3 ocean fishery exploitation rates. For TRH and IGH fall Chinook stocks, sizes of adults originating from fingerling or yearling releases are similar at age 4, as are ocean fishery exploitation rates and age-specific maturation probabilities. For spring Chinook salmon from TRH and CRH, however, reductions in size at age and maturation probabilities were still evident at age 4 for fish released in a later month. We surmise that these differences in life history parameters are a direct reflection of the duration of ocean growth prior to maturation for fingerling (longer) as compared to yearling (shorter) releases, and they are broadly consistent with an hypothesis that, for a given stock, size at age is an important factor influencing age at maturity (see Hankin et al. 1993). It seems clear, however, that factors influencing maturation at age depend upon more than size at age. Although we certainly searched hard to find them, we did not detect any relationships suggesting a strong correlation between estimated age-specific maturation probabilities and size at age for specific release types from individual stocks. Wells et al. (2007) argue that maturation at age is correlated with growth rate at age rather than size at age per se, an attribute for which CWT recovery data do not provide a clear proxy.
Although we found no variable that was clearly correlated with age-specific maturation probabilities, we did note an apparent long-term decline in age 2 maturation probabilities of TRH fall Chinook released as fingerlings and yearlings (Figure 4.11), and we found that estimated mean age 4 maturation probabilities for IGH and TRH fall Chinook released as fingerlings or yearlings exceeded 93% (Table 4.8). The trend of decreasing age 2 maturation probabilities, if persistent, warrants further attention on two counts. First, it may reflect the long-term consequences of excluding or largely excluding jacks from matings (see Hankin, unpublished; Fitzgibbon 2004; Chen 2008), though this consequence is not necessarily undesirable. Second, the long-term pattern of decline in age 2 maturation probabilities among fingerling releases of fall Chinook, if shared by wild Klamath River Chinook, may complicate pre-season prediction of age 3 abundance of Klamath River Chinook based on returns of jacks in the preceding year. The exceptionally high age 4 maturation probabilities effectively mean that essentially no fall Chinook salmon reared at Klamath River hatcheries will return to freshwater as age 5 adult spawners. For example, assuming that a cohort has reached age 3 and is unexploited in the ocean, probabilities of maturing at ages 3, 4 and 5 can be calculated from the conditional age-specific maturation probabilities and ocean survival rates. For fingerling releases of Iron Gate fall Chinook salmon, these figures give the following probabilities: age 3 = 0.470 (= $\hat{\sigma}_3$); age 4 = 0.396 (= $(1 - \hat{\sigma}_3)p_3\hat{\sigma}_4 = (1 - 0.470) \cdot 0.8 \cdot 0.934$); age 5 = 0.022 (= $(1 - \hat{\sigma}_3)p_3p_4(1 - \hat{\sigma}_4)\sigma_5 = (1 - 0.470) \cdot 0.8^2 \cdot (1 - 0.934) \cdot 1$), where $p_3$ and $p_4$ are the assumed inter-age ocean survival rates and $\sigma_5$ is assumed equal to 1. These values generate an unexploited age composition of adults of 52.9% age 3, 44.6% age 4, and just 2.5% age 5. Of course, Klamath fall Chinook are not unexploited in the ocean, and the effect of ocean fishing is to shift age structure toward younger ages (see Hankin and Healey 1986), thereby making it even less likely that any IGH fingerling fall Chinook will ever return to freshwater at age 5.
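This worked calculation generalizes directly. A minimal Python sketch, using the conditional maturation probabilities and assumed inter-age ocean survival rates from the text:

```python
def unconditional_maturation(sig3, sig4, sig5=1.0, p3=0.8, p4=0.8):
    """Unconditional probabilities of maturing at ages 3, 4 or 5 for a
    fish alive (and unexploited) at age 3, per the worked example."""
    m3 = sig3
    m4 = (1 - sig3) * p3 * sig4
    m5 = (1 - sig3) * p3 * p4 * (1 - sig4) * sig5
    total = m3 + m4 + m5
    # Normalize to an age composition of adults that eventually mature.
    return {age: m / total for age, m in zip((3, 4, 5), (m3, m4, m5))}

# IGH fingerling fall Chinook values from the text:
print(unconditional_maturation(0.470, 0.934))
# -> approximately {3: 0.529, 4: 0.446, 5: 0.025}
```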
Analyses presented in this report are of a very preliminary nature, and we hope to engage in additional analysis if time and further funding permit. We believe that there is substantial potential for additional important insights to be gleaned from more sophisticated analysis of these CWT recovery data. Given the long-term commitment to rigorous sampling programs in ocean and freshwater fisheries for Klamath River Chinook salmon, the CWT recovery data examined in this report are of exceptionally high quality and certainly merit further analysis. In particular, we believe that two key improvements to future analyses would be highly informative. First, it is essential to identify suitable proxy indicators of ocean growth and survival conditions that could serve as variables linking survival rates and growth to ocean conditions. Among other things, such variables could in principle allow identification and separation of freshwater effects on survival of CWT groups from ocean effects. Although it seems *a priori* reasonable to suppose that survival of CWT groups from release to age 2 is primarily a function of ocean conditions, especially for October releases of yearlings, there is also a suggestion (IGH flows vs survival rates of June fingerling releases) that high (upper) Klamath flows may have a substantial favorable impact on survival. This apparent effect may operate either through reduction in downstream migrant mortality or through some unknown favorable impact of the Klamath River plume on nearshore oceanographic conditions (increased nutrients encourage phytoplankton and zooplankton growth; increased turbidity reduces predation success; and so on). Ideally, two ocean
variables would be useful: one to describe ocean conditions that may primarily affect survival, presumably shortly following ocean entry of smolts, and one to describe ocean conditions that may primarily affect growth. We hypothesize that survival to age 2 is primarily determined by ocean conditions shortly after ocean entry, whereas size at age 2 primarily reflects ocean conditions after the critical survival period has passed. For the IGH, TRH and CRH CWT release groups considered in this report, correlations between age 2 mean lengths at hatchery return and corresponding estimated survival rates from release to age 2 were in almost all cases quite weak (ranging from 0.16 to 0.64 for fingerling and yearling releases of IGH and TRH fall Chinook).
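Such correlations are straightforward to compute from paired brood-year series; a minimal sketch follows, with both series as illustrative placeholders rather than estimates from this report.

```python
# Pearson correlation between brood-year mean length at age 2 and estimated
# survival from release to age 2. Values are placeholders, not report data.
import numpy as np

survival_to_age2 = np.array([0.010, 0.025, 0.008, 0.040, 0.015])  # by brood year
mean_len_age2 = np.array([63.0, 63.5, 62.0, 65.0, 64.0])          # cm at return
r = np.corrcoef(mean_len_age2, survival_to_age2)[0, 1]
print(f"r = {r:.2f}")
```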
It is also important, and perhaps critical, to revive and further explore the “multi-cohort” analysis procedures outlined by Hankin and Mohr (1993) and further developed by Mohr (unpublished). We recommend revisiting the methods developed by Mohr because we are concerned that our various findings concerning estimated survival rates and maturation probabilities, and their covariances among stocks, may to some degree result from the fact that the single cohort analysis methods relied upon for this report require one to assume that either survival rates or maturation probabilities are known. In this report, we assumed that ocean survival rates were known and had the values 0.5 for age 2-3, and 0.8 for age 3-4 and age 4-5. Although this set of assumptions seems more appropriate than the alternative assumption that age-specific maturation probabilities are fixed and known, it is at odds with the preliminary findings of Hankin and Mohr (1993) and Mohr (unpublished), and it is also at odds with common sense. For example, Hankin and Mohr (1993) found that estimated ocean survival rates for IGH and TRH CWT groups varied considerably over the years they studied, with adult fish affected by the strong 1982-83 El Nino event having extremely low survival rates (0.1-0.2, as compared to the assumed 0.8). Measurements of lengths at age for CWT release groups affected by that event showed consistently smaller lengths at age, a clear reflection of poor conditions for growth. This kind of serious departure of apparent reality from assumed survival rates, and its consequences for size at age, could introduce very substantial (but generally unknown) bias in some years. It might also cause multiple stocks to appear to share strong interannual covariance in estimated maturation probabilities. For example, if cohort analyses assumed usual survival patterns, whereas adult survival between ages 3 and 4 was unusually low, then estimated age 3 maturation probabilities might have a very serious positive bias due to serious underestimation of the number of fish that did not mature at age 3 but died at an unusually high rate during unusually poor ocean survival conditions between age 3 maturation and the following fall. Overall, these kinds of considerations make us believe that one should be extremely circumspect in interpreting the degree of covariation of estimated parameters between stocks.
Nevertheless, the degree of observed covariation in estimated survival rates, length at age, and estimated maturation probabilities noted in this report was in a great many cases quite striking, and in almost all cases observed covariation was consistent with reasonable biological mechanisms. For estimated survival rates from release to age 2, covariation was strongest for the most closely related stocks released from the same hatchery at the same time (i.e., spring and fall Chinook released from TRH as fingerlings or as yearlings); was of intermediate strength for stocks released at approximately the same time at nearby hatcheries (e.g., IGH and TRH releases of fall Chinook as yearlings); was less strong, though still highly significant, for much less closely related stocks released at about the same time at hatcheries distant from one another (e.g., CRH and TRH spring Chinook salmon released in October); and was lowest, and often not significant, for stocks released at very different times (e.g., IGH fingerlings vs yearlings). These patterns are consistent with (a) survival rates of hatchery fish being largely determined by ocean conditions rather than conditions during freshwater migration; (b) differences in freshwater downstream migration conditions nevertheless affecting post-release survival, because covariation declines as less downstream migration history is shared; and (c) conditions for survival at ocean entry probably varying considerably within a given year, because survival rates of fingerlings and yearlings released from the same stock type and hatchery are not strongly correlated. The final conjecture, (c), is least well supported because differences in survival rates for fish released in June as compared to October could alternatively reflect strong differences in conditions for downstream migration survival in those two months, with ocean survival conditions being fairly stable.
One of the principal objectives of our CWT analyses was to determine whether survival rates of CWT groups are likely to be good indicators of ocean conditions for survival, and whether recent survival rates for IGH releases of fall Chinook have decreased relative to TRH releases of fall Chinook, possibly due to recent changes in flow management in the upper Klamath (below IGH). As noted above, it appears that variation in survival rates of CWT release groups is primarily a function of variability in ocean conditions for survival, but it also seems clear that freshwater downstream migration history has a modest impact on survival to age 2. In the absence of an "ocean environment variable" that might be suitable for analysis (e.g., the new "Wells Production Index"), it is impossible to be much more definitive on this topic. However, we note that the very substantial short-term (between-year) variation in estimated survival rates must be a reflection of local, nearshore conditions that affect survival of Chinook salmon smolts entering the ocean from the Klamath and Rogue rivers. Year-to-year variation is striking and appears of greater significance to these stocks than the kind of longer-term "favorable" or "unfavorable" conditions for survival implied by large-scale regime shifts in ocean climate (see, e.g., Mantua et al. 1997). Also, we note that our comparisons of estimated survival rates
of fall Chinook salmon released from IGH and TRH did not indicate that there has been a recent decline in relative survival rates of IGH as compared to TRH releases. Although average survival rates of IGH releases are typically less than those of TRH releases, several exceptions to this generalization were evident in the past decade.
Although the confounding of ocean natural mortality rates with age-specific maturation rates may to some degree be circumvented by use of multi-cohort rather than single cohort analysis methods, we remain concerned that there is no good fix for the problems posed by non-landed mortalities in ocean fisheries. Although errors in calculation of non-landed mortalities may be unlikely to affect the kinds of broad conclusions or trends noted in this report, they can have important impacts on allocation negotiations between ocean and freshwater fishers. In this report, we have proposed an alternative procedure for imputation of age 3 non-landed mortalities that accounts for the substantial interannual variation in size at age. We believe that this approach, or some more appropriate modification of it that addresses the same issue, is superior to the current imputation methods that assume a fixed schedule of monthly lengths. As our numerical examples illustrated, these alternative procedures would generate much larger estimates of non-landed mortalities in years when size at age 3 is unusually small, and much smaller estimates in years when size at age 3 is unusually large. These consequences seem quite reasonable to us, but we recognize that the example behavior of our proposed alternative (see Table 4.3) suggests that our proposed methods are still off target. Among other things, the adjusted lengths seem "too large" for CWT code 065906 (they exceed observed mean lengths at hatchery return for age 3 mature fish) and "too small" in May for CWT code 065908 (and generate an implausibly large non-catch mortality total in May for that code). We have no doubt that our suggested approach can be substantially improved! Nevertheless, we reiterate our concern that length adjustments should be made at age 3 according to the relative size of age 3 fish at maturity.
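To make the length-adjustment idea concrete, the following is a minimal sketch under stated assumptions: lengths are treated as normally distributed, the 26% hook-and-release mortality rate and all numeric inputs are illustrative, and the function and parameter names are ours. It is not the current KOHM imputation algorithm, nor the exact procedure evaluated in Table 4.3.

```python
# Sketch: scale an assumed fixed monthly length schedule by a CWT group's
# relative size at age 3, then recompute the sublegal fraction that drives
# imputed non-landed (hook-and-release) mortality. All inputs illustrative.
from statistics import NormalDist

def nonlanded_mortality(landed_catch, schedule_mean_len, len_sd,
                        group_mean_len, longterm_mean_len,
                        size_limit, release_mort=0.26):
    # Adjust the fixed schedule by the group's relative size at age 3.
    adj_mean = schedule_mean_len * (group_mean_len / longterm_mean_len)
    # Fraction of contacted fish below the legal minimum size limit.
    p_sublegal = NormalDist(adj_mean, len_sd).cdf(size_limit)
    # Sublegal contacts implied by the landed (legal-sized) catch.
    sublegal_contacts = landed_catch * p_sublegal / (1 - p_sublegal)
    return sublegal_contacts * release_mort

# A group running small at age 3 (70 cm vs a 75 cm long-term mean) yields a
# larger sublegal fraction and hence a larger imputed non-landed mortality.
print(nonlanded_mortality(1000, 66.0, 5.0, 70.0, 75.0, size_limit=66.0))
```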
We also express concern that failure to adequately address non-landed mortalities at age 2 must lead to negative bias in estimates of ocean fishery impacts, negative bias in estimates of survival to age 2, and positive bias in estimates of age 2 maturation probabilities. We recognize that our recommendations for imputation of non-landed mortalities at age 2 inject yet another assumed parameter (the age 2 contact rate relative to that of fully vulnerable fish), but we believe that it is highly inappropriate to ignore age 2 non-landed mortalities, or to assume that they are non-existent because there are no landed CWT recoveries at age 2. If all age 2 fish were below legal minimum size limits, as is typically the case, then there would be no reported age 2 landings. That certainly does not imply that no age 2 fish were contacted or that no age 2 fish suffered non-landed mortalities. We believe that on-board observations on recreational and commercial vessels are needed to better assess the probable magnitude of non-landed mortalities at age 2.
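The age 2 version of the idea can be sketched just as simply; here the relative contact rate (0.3) and the hook-and-release mortality rate (0.26) are assumed placeholder values, not estimates from this report, and the names are ours.

```python
# Sketch: impute age 2 non-landed mortality by scaling an age 3 contact rate
# by an assumed age 2 relative contact rate; all age 2 contacts are treated
# as released (sublegal), so every imputed death is a non-landed mortality.
def age2_nonlanded(age2_abundance, age3_contact_rate,
                   rel_contact=0.3, release_mort=0.26):
    contacts = age2_abundance * age3_contact_rate * rel_contact
    return contacts * release_mort

# Example: 100,000 age 2 fish and an age 3 contact rate of 0.25 imply 7,500
# age 2 contacts and 1,950 imputed non-landed deaths.
print(age2_nonlanded(100_000, 0.25))
```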
Finally, effective long-term implementation of constant fractional marking (CFM) programs at IGH and TRH will in part depend on the mix of fingerlings and yearlings that are released from these facilities. To the degree that releases can be shifted away from fingerlings and toward yearlings, effective implementation of a CFM program becomes more feasible. Letting $S_{rel}$ denote the ratio of the average survival rate to age 2 of yearlings to that of fingerlings, the device of increasing yearling releases by $X/S_{rel}$, where $X$ denotes a corresponding reduction in fingerling releases, would achieve an expected total catch and escapement of adults very close to the original policy (especially in terms of biomass landed in fisheries). Any potential shift toward yearlings must also be balanced against the fishery management need for sufficient numbers of fingerlings to be released to allow estimation of life history and fishery parameters important for fishery management analysis. Based on our simulations of the performance of the cohort analysis estimators of these parameters, it appears that adequate release group sizes are somewhere between 200,000 and 400,000 fish. If such release group sizes were achieved by pooling across all CWT groups of the same release type from a given brood year, then a 25% CFM rate would imply that minimum total fingerling fall Chinook salmon releases should range from about 800,000 to 1,600,000, with 1,000,000 being a reasonable target value (i.e., 250,000 fish receiving CWT). Falling below this range would likely result in a substantial increase in errors of estimation for important management parameters that are currently estimated from CWT recovery data.
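As a worked illustration of the substitution rule and the CFM arithmetic above, the sketch below assumes a survival ratio of 4.5 (consistent with the 400-500% improvement noted earlier, but not an estimate from this report); all other numbers come from the text.

```python
# Replacing X fingerlings with X / S_rel yearlings holds the expected number
# of age 2 recruits constant, where S_rel is the yearling:fingerling ratio
# of survival rates to age 2.
S_rel = 4.5                     # assumed survival ratio (illustrative)
X = 450_000                     # reduction in fingerling releases
extra_yearlings = X / S_rel
print(f"replace {X:,} fingerlings with {extra_yearlings:,.0f} yearlings")

# CFM arithmetic from the text: a 25% marking rate on a 1,000,000-fish
# fingerling release yields the ~250,000-tag pooled CWT group suggested
# by the simulations.
print(f"CWT group size: {0.25 * 1_000_000:,.0f}")
```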
Chen, Y. 2007. Long-Term Effects of Alternative Hatchery Mating Practices and Size Selective Fishing on Age and Sex Composition of Chinook Salmon Populations Returning to Hatcheries. Unpublished MS thesis, Humboldt State University.
Fitzgibbons, J. 2004. Effects of hatchery mating practices and ocean fisheries on the age and sex structure of a hatchery population of Chinook salmon (*Oncorhynchus tshawytscha*). Unpublished MS thesis, Humboldt State University.
Goldwasser, L., M.S. Mohr, A.M. Grover, and M.L. Palmer-Zwahlen. 2001. The supporting databases and three analyses for the revision of the Klamath Ocean Harvest Model. National Marine Fisheries Service and California Department of Fish and Game.
Good, P. 2005. Permutation, Parametric, and Bootstrap Tests of Hypotheses. Third Edition. Springer. 315 pp.
Hankin, D. G. 1982. Estimating escapement of Pacific salmon: marking practices to discriminate wild and hatchery fish. Trans. Am. Fish. Soc. 111: 286-298.
Hankin, D. G. 1990. Effects of month of release of hatchery-reared Chinook salmon on size at age, maturation schedule and fishery contribution. Oregon Dept. of Fish and Wildlife, Information Report 90-4. 37 pp.
Hankin, D. G., and M. C. Healey. 1986. Dependence of exploitation rates for maximum yield and stock collapse on age and sex structure of chinook salmon stocks. Can. J. Fish. Aquat. Sci. 43: 1746-1759.
Hankin, D.G., J.W. Nicholas, and T.W. Downey. 1993. Evidence for inheritance of age of maturity in chinook salmon, *Oncorhynchus tshawytscha*. Can. J. Fish. Aquat. Sci.
Hankin, D.G., and M.S. Mohr. 1993. New methods for analysis of coded wire tag recovery data. CA Coop. Fish. Res. Unit Agreement No. 14-16-0009-1547. Final Report.
Logan, E., and D.G. Hankin. 2008. A Detailed Review of the Annual Hatchery Production Cycle at Iron Gate Hatchery: With Recommendations for Small Changes in Hatchery Practices that Would Reduce Rearing Mortalities and Improve Accuracy of Inventories and Estimated Numbers of Fish Released. Contract report prepared for the Hoopa Valley Tribal Council. 60 pp.
Mantua, N.J., S.R. Hare, Y. Zhang, J.M. Wallace, and R.C. Francis. 1997. A Pacific interdecadal climate oscillation with impacts on salmon production. Bull. Am. Meteor. Soc. 78:1069-1079.
Parravano v. Babbitt. 1995. 70 F.3d 539, 545.
Salmon Technical Team. 2005. Klamath River Fall Chinook Stock-Recruitment Analysis. Pacific Fishery Management Council, Portland, OR. 31 pp.
Wells, B.K., C.B. Grimes, and J.B. Waldvogel. 2007. Quantifying the effects of wind, upwelling, curl, sea surface temperature and sea level height on growth and maturation of a California Chinook salmon (*Oncorhynchus tshawytscha*) population. Fish. Oceanogr. 16:363-382.
Zajanc, D., and D. Hankin. 1998. A detailed review of the annual production cycle at Trinity River Hatchery: with recommendations for changes in hatchery practices that would improve accuracy of estimation of numbers released. Contract Report prepared for Hoopa Tribal Council. 35 pp. + appendixes
I, James R. Clapper, declare as follows:
(U) Introduction
1. (U) I am the Director of National Intelligence (DNI) of the United States. I have held this position since August 2010. Prior to becoming the Director of National Intelligence, I served for over three years in two Administrations as the Under Secretary of Defense for Intelligence, where I served as the principal staff assistant and advisor to the Secretary and Deputy Secretary of Defense on intelligence, counterintelligence, and security matters for the Department. In this capacity, I was also dual-hatted as the Director of Defense Intelligence for the Office of the
Director of National Intelligence (ODNI). Previously, I served as a lieutenant general in the U.S. Air Force, as Director of the Defense Intelligence Agency, and as the first civilian director of the National Imagery and Mapping Agency, transforming it into the National Geospatial-Intelligence Agency.
2. (U) In the course of my official duties, I have been advised of this lawsuit and the allegations at issue in the plaintiff’s Second Amended Complaint against the Department of Homeland Security, the Federal Bureau of Investigation, the Terrorist Screening Center, and the National Counterterrorism Center. The statements made in this declaration are based on my personal knowledge, as well as on information provided to me in my official capacity as DNI, and on my personal evaluation of that information. I have also reviewed and personally considered the information contained in the three supporting classified declarations submitted in this matter.
3. (U) The purpose of this declaration is to formally assert, in my capacity as DNI and head of the U.S. Intelligence Community, the state secrets privilege as well as a statutory privilege under the National Security Act of 1947, as amended, in order to protect the intelligence information, sources, and methods that are implicated in this case. Although I am not named as a defendant in this case, nor is the ODNI, the National Counterterrorism Center, a component of the ODNI, is a named defendant in this case, and, in addition, the plaintiff has sought discovery of national intelligence information. This intelligence information is covered by my privilege assertions, as the disclosure of this information would cause serious damage to the national security of the United States. The information should therefore be excluded from use in this case.
(U) Background on the Director of National Intelligence
4. (U) Congress created the position of the DNI in the Intelligence Reform and Terrorism Prevention Act of 2004, Pub. L. No. 108-458, §§ 1011(a) and 1097, 118 Stat. 3638, 3643-63, 3698-99 (2004) (amending sections 102 through 104 of Title I of the National Security Act of 1947). Subject to the authority, direction, and control of the President, the DNI serves as head of the United States Intelligence Community and as the principal advisor to the President and the National Security Council for intelligence matters related to national security. See 50 U.S.C. § 403(b)(1), (2).
5. (U) The United States Intelligence Community includes the Office of the Director of National Intelligence; the Central Intelligence Agency; the National Security Agency; the Defense Intelligence Agency; the National Geospatial-Intelligence Agency; the National Reconnaissance Office; other offices within the Department of Defense for the collection of specialized national intelligence through reconnaissance programs; the intelligence elements of the military services, the Federal Bureau of Investigation, the Department of Treasury, the Department of Energy, the Drug Enforcement Administration, and the Coast Guard; the Bureau of Intelligence and Research of the Department of State; the elements of the Department of Homeland Security concerned with the analysis of intelligence information; and such other elements of any other department or agency as may be designated by the President, or jointly designated by the DNI and the head of the department or agency concerned, as an element of the Intelligence Community. 50 U.S.C. § 401a(4).
6. (U) The responsibilities and authorities of the DNI are set forth in the National Security Act of 1947, as amended. These responsibilities include ensuring that national
intelligence is provided to the President, heads of the departments and agencies of the Executive Branch, the Chairman of the Joint Chiefs of Staff and senior military commanders, and the Senate and House of Representatives and committees thereof. 50 U.S.C. § 403-1(a)(1). The DNI is charged with establishing the objectives of; determining the requirements and priorities for; and managing and directing the tasking, collection, analysis, production, and dissemination of national intelligence by elements of the Intelligence Community. 50 U.S.C. § 403-1(f)(1)(A)(i) and (ii).
7. (U) In addition, the National Security Act of 1947, as amended, requires that "[t]he Director of National Intelligence shall protect intelligence sources and methods from unauthorized disclosure." 50 U.S.C. § 403-1(i)(1). Consistent with this responsibility, the DNI establishes and implements guidelines for the Intelligence Community for the classification of information under applicable law, Executive Orders, or other Presidential directives and for access to and dissemination of intelligence. 50 U.S.C. § 403-1(i)(2)(A), (B).
8. (U) By virtue of my position as DNI, and unless otherwise directed by the President, I have access to all intelligence related to the national security that is collected by any department, agency, or other entity of the United States. Pursuant to Executive Order No. 13526, 75 Fed. Reg. 707 (Jan. 5, 2010), reprinted in 50 U.S.C.A. 435 note at 215 (2010) ("EO 13526"), the President has authorized me to exercise original TOP SECRET classification authority.
(U) The National Counterterrorism Center
9. (U) In addition to serving as head of the U.S. Intelligence Community, I also serve as the head of ODNI. See 50 U.S.C. § 403-3. NCTC was established by Executive Order 13354
(August 27, 2004, 69 FR 53584, Sept. 1, 2004) and was made an element of the ODNI in the Intelligence Reform and Terrorism Prevention Act of 2004. See 50 U.S.C. § 404o. Among its principal missions, NCTC serves as the primary organization within the United States Government for the analysis and integration of all information related to terrorism and counterterrorism, excepting intelligence pertaining exclusively to domestic terrorists and domestic counterterrorism; ensures that appropriate agencies have access to and receive intelligence needed to accomplish their assigned activities; and serves as the central and shared knowledge bank on known and suspected terrorists and international terror groups, as well as their goals, strategies, capabilities, and networks of contacts and support. 50 U.S.C. § 404o(d). NCTC has broad authority to access all terrorism-related information that may be collected by other Federal agencies, both within and without the Intelligence Community. See Executive Order 13388 (Oct. 25, 2005). The Director of NCTC reports to the DNI on all matters related to (1) NCTC budget and programs; (2) the activities of NCTC's Directorate of Intelligence; and (3) the conduct of intelligence operations implemented by other elements of the Intelligence Community. 50 U.S.C. § 404o(c).
10. (U) Pursuant to its statutory mission, NCTC maintains the Terrorist Identities Datamart Environment (TIDE) as the central and shared knowledge bank on known and suspected terrorists. TIDE includes, to the extent permitted by law, all information the U.S. Government possesses related to the identities of individuals known or appropriately suspected to be or have been engaged in activities constituting, in preparation for, in aid of, or related to terrorism, with the exception of purely domestic terrorism information. TIDE includes a great deal of intelligence information obtained through the activities of the Intelligence Community, often implicating the most sensitive sources and methods of intelligence gathering. Based on the
underlying classification of its contents, the overall classification of TIDE, and the highest level to which its contents may be classified, is TOP SECRET//SCI. Under Executive Order 13526, information is classified TOP SECRET if unauthorized disclosure of the information reasonably could be expected to cause exceptionally grave damage to the national security of the United States. Information is classified SECRET if unauthorized disclosure reasonably could be expected to cause serious damage to the national security of the United States.
(U) Assertion of the State Secrets Privilege
11. (U) After careful and actual personal consideration of the matter, based upon my own knowledge and information obtained in the course of my official duties, including the information contained in the three supporting classified declarations filed herewith, I have determined that the disclosure of certain information, as set forth in this declaration and in greater detail in the classified declarations filed *in camera, ex parte*, would cause serious damage to the national security of the United States and that this information must therefore be protected from disclosure in this case. Thus, as to this information, I formally assert the state secrets privilege.
12. (U) I also invoke and assert a statutory privilege held by the Director of National Intelligence under the National Security Act of 1947, as amended, to "protect intelligence sources and methods from unauthorized disclosure." 50 U.S.C. § 403-1(i)(1). My assertion of this statutory privilege is coextensive with my state secrets privilege assertion.
(U) Description of Information Subject to Claims of Privilege and Harm from Disclosure
13. (U) My assertion of the state secrets and statutory privileges in this case precludes defendants or any other agency from making any response, including through document production or deposition testimony, that would serve to disclose classified information regarding plaintiff or any other individual; the sources, methods, and means by which classified information is collected; and information which would confirm or deny whether information regarding plaintiff or any other individual is in NCTC’s TIDE database.
14. (U) As a matter of policy, the United States can generally neither confirm nor deny allegations concerning intelligence activities, sources, methods, relationships, or targets. If the United States confirms that it is conducting a particular intelligence activity, that it is gathering intelligence from a particular source, or that it has gathered information on a particular person, such activities would be compromised and foreign adversaries and terrorist organizations could use that information to avoid detection. Even confirming that a certain intelligence activity or relationship does not exist, either in general or with respect to specific targets or channels, would harm national security because alerting our adversaries to channels or individuals that are not under surveillance could likewise help them avoid detection. In addition, denying untrue allegations is an untenable practice. If the U.S. Government, for example, were to confirm in certain cases that specific intelligence activities, relationships, or targets do not exist, but then refuse to comment (as it would have to) in a case involving an actual intelligence activity, relationship, or target, a person could easily deduce by comparing such responses that the latter case involved an actual intelligence activity, relationship, or target. Thus, the United States must
refuse to confirm or deny intelligence activities, relationships or targets, regardless of whether or not they exist.
15. (U) NCTC’s TIDE database of known and suspected terrorists remains an effective tool in the U.S. Government's counterterrorism efforts in part because its contents, and the sources, methods and underlying information that form the basis for its contents, are not disclosed outside intelligence and law enforcement channels. To confirm or deny that TIDE contains information regarding plaintiff or any person, or how any such information was gathered and from whom, could reasonably be expected to cause serious damage to national security, and I therefore assert the state secrets and statutory privileges to prevent the defendants from having to make any such statement or produce any potentially responsive documents that would implicate the contents of TIDE, or how its contents are derived.
(U) Support for Attorney General’s Assertion of the State Secrets Privilege
16. (U) By this declaration, I also state my support for the Attorney General’s assertion of the state secrets privilege as to whether plaintiff was or was not the subject of an FBI investigation or intelligence operation; information that could reveal the predicate for an FBI investigation; and information that could reveal particular sources and methods.
17. (U) The Intelligence Community protects information as to whether an individual is or has been the subject of a domestic or international counterterrorism investigation because disclosing that fact would compromise ongoing investigative and national security interests. If a suspect is made aware that he or she is or was the subject of a counterterrorism investigation, the suspect would naturally tend to alter his or her behavior, taking new precautions against
surveillance and changing the magnitude, nature, and/or timing of any terrorism related activity in which he or she is engaged. These modifications would require renewed effort, pull national security resources from other work, and potentially prevent us from learning who the suspect’s confederates are. Conversely, disclosure that an individual is not, or is no longer, the subject of a national security investigation could also cause substantial harm to counterterrorism investigative and intelligence gathering interests. If an individual desires to commit a terrorist act, informing him that he or she is not, or is no longer, under investigation would allow him or her to act with a greatly diminished fear of detection and could even encourage the individual to commit the terrorist act.
18. (U) Even if the subject of an investigation is ultimately found to be a law-abiding citizen with no intention to engage in terrorist activity, disclosure of the fact of an investigation could implicate intelligence sources, methods, and information. Disclosing an investigation would reveal information as to why that individual had been investigated. Just the fact of an investigation could prompt terrorists to review the relationships they had to the subject and result in their taking measures to avoid detection. As above, whenever terrorists cease to utilize a system that has been compromised, the Intelligence Community must divert resources to determine and infiltrate the new system, with a resultant period in which counterterrorism capabilities are diminished and national security imperiled.
19. (U) Thus, to protect intelligence sources and methods and further the national security, I support the Attorney General’s assertion of the state secrets privilege. Disclosure of an individual’s status as a subject of an FBI counterterrorism investigation, whether affirmative or negative, would implicate the intelligence sources and methods and risk the harms described above.
(U) Conclusion
20. (U) In sum, I formally invoke the state secrets privilege and the statutory privilege under the National Security Act of 1947, as amended, to preclude defendants from making any response, including through document production and deposition testimony, that would serve to disclose classified information, or the sources and methods by which such information was derived. The information covered by my privilege assertion includes the information more fully described in the three classified declarations filed herewith. Information of the type discussed in these declarations cannot be disclosed without causing serious damage to the national security of the United States. I also support the Attorney General's assertion of the state secrets privilege as to plaintiff's requests for any FBI investigative files pertaining to her.
I declare under penalty of perjury that the foregoing is true and correct.
Executed this 13th day of March, 2013.
[Signature]
JAMES R. CLAPPER
DIRECTOR OF NATIONAL INTELLIGENCE
Chapter Ten
MOTIVATION AND EMOTION
Review of Key Ideas
MOTIVATIONAL THEORIES AND CONCEPTS
1. Compare drive, incentive, and evolutionary approaches to understanding motivation.
1-1. Drive theories are based on the idea that organisms strive to maintain a state of ________________ or physiological equilibrium. For example, organisms are motivated to maintain water balance: when deprived of water they experience thirst. Thirst is a ________________ to return to a state of water equilibrium.
1-2. A drive is a state of tension. According to drive theories organisms are motivated to seek drive or tension ________________.
1-3. Theories that emphasize the pull from the external environment are known as ________________ theories. For example, we may be motivated to eat not as a function of hunger (an internal drive) but as a result of the smell or appearance of food (an external cue). Incentive theories (operate/do not operate) according to the principle of homeostasis.
1-4. From the point of view of evolutionary theory all motivations, such as the needs for affiliation, dominance, achievement, and aggression, occur because they have ________________ value for the species. Organisms with adaptive sets of motivational characteristics are more likely to pass their ___________ on to the next generation.
1-5. Place the name of the theoretical approach described below (drive, incentive, or evolutionary) in the blanks.
______________ Emphasizes homeostasis, the pressure to return to a state of equilibrium.
______________ Actions result from attempts to reduce internal states of tension.
______________ Emphasizes “pull” from the environment (as opposed to “push” from internal states).
______________ Motivations arise as a function of their capacity to enhance reproductive success, to pass genes to the next generation.
Answers: 1-1. homeostasis (balance, equilibrium), drive 1-2. reduction 1-3. incentive, do not operate 1-4. adaptive (survival, reproductive), genes 1-5. drive, drive, incentive, evolutionary.
2. Distinguish between the two major categories of motives found in humans.
2-1. Most theories of motivation distinguish between ________________ motives (e.g., for food, water, sex, warmth) and ________________ motives. Biological needs are generally essential for the ________________ of the group or individual.
2-2. Social motives (e.g., for achievement, autonomy, affiliation) are acquired as a result of people’s experiences. While there are relatively few biological needs, people theoretically may acquire an unlimited number of ________________ needs.
Answers: 2-1. biological, social, survival 2-2. social.
THE MOTIVATION OF HUNGER AND EATING
3. Summarize evidence on the physiological factors implicated in the regulation of hunger.
3-1. Within the brain the major structure implicated in eating behavior is the ________________.
3-2. Researchers used to think that eating was controlled by “on” and “off” centers in the hypothalamus. When the lateral hypothalamus (LH) was destroyed, animals stopped eating, as if hunger had been turned off like a switch. When the ventromedial hypothalamus (VMH) was destroyed, animals (started/stopped) eating.
3-3. Current thinking is that eating seems to be controlled more by (complex neural circuits/simple anatomical centers) that run through the hypothalamus rather than by on-off centers within the hypothalamus. While the LH and VMH are still factors in hunger regulation, researchers now think that another part of the hypothalamus, the paraventricular nucleus or ____________, is more important.
3-4. Much of the food we consume is converted into ________________, a simple sugar that is an important source of energy.
3-5. Based on research findings about glucose, Mayer proposed the theory that there are specialized neurons in the brain, which he called ________________, that function to monitor blood glucose. Lower levels of glucose, for example, are associated with a(an) (increase/decrease) in hunger.
3-6. For cells to extract glucose from the blood, the hormone ________________ must be present. Insulin will produce a(an) (increase/decrease) in the level of sugar in the blood, with the result that the person experiences a(an) (increase/decrease) in the sensation of hunger.
3-7. More recently a regulatory hormone called leptin has been discovered, a hormone produced by (fat cells/neurons) and circulated to the hypothalamus in the bloodstream. Higher levels of the hormone ________________ reflect a higher level of fat in the body, which is associated with a(an) (increase/decrease) in the sensation of hunger.
4. Summarize evidence on how the availability of food, culture, learning, and stress influence hunger.
4-1. Hunger is based not only on a physiological need but on external factors, such as the appearance, tastiness, and availability of food. Thus, some aspects of hunger motivation support the (drive/incentive) approach to motivation. Sometimes we eat, according to this research, simply because food is available and looks and tastes good.
4-2. Although we have some innate taste preferences (e.g., for fat), it is also clear that _________________________ affects what we eat. For example, taste preferences and aversions may be learned by pairing a taste with pleasant or unpleasant experiences, the process of _________________________ conditioning.
4-3. In addition, we are more likely to eat what we see others eating, so food preferences are acquired not only through conditioning but through the process of _________________________ learning.
4-4. Our environments also provide frustrating circumstances that create _________________________, a factor that may also trigger eating in many people. Although stress and increased eating are linked, it’s not clear why the relationship occurs.
Answers: 4-1. incentive 4-2. learning (environment, culture), classical 4-3. observational 4-4. stress.
5. Discuss the factors that contribute to the development of obesity.
5-1. Evolutionary theorists propose that in our ancestral past, when faced with the likelihood of famine, people evolved a capacity to overeat. Overeating, as a hedge against food shortages, had _________________________ value. In the modern world food is no longer scarce, but our tendency to overeat remains.
5-2. It is clear that many factors affect body weight and that some of the most important are genetic. For example, Stunkard et al. (1986) found that adopted children were much more similar in BMI to their (biological/adoptive) parents than to their (biological/adoptive) parents, even though they were brought up by the latter.
5-3. The most striking finding of the Stunkard et al. (1990) study with twins was that (identical/fraternal) twins reared apart were more similar in BMI than (identical/fraternal) twins reared in the same family environment. This research supports the idea that (genetics/environment) plays a major role in body weight.
5-4. The concept of set point may help explain why body weight remains so stable. The theory proposes that each individual has a “natural” body weight that is set, in large part, by the person’s (genetics/environment). The body defends one particular weight.
5-5. According to ________________ theory, individual differences in body weight are due in large part to differences in genetic makeup. This theory asserts that the body actively defends a (wide range/particular) body weight by, for example, increasing hunger or decreasing metabolism.
5-6. Settling-point theory is a bit more optimistic: individuals who make long-term changes in eating or exercise will drift downward to a lower _________________ point without such active resistance. The settling-point view also asserts that this balance is achieved as a result of (a wide variety of/genetic) factors.
5-7. According to the dietary restraint concept, the world is divided into two types of people: unrestrained eaters, who eat as much as they want when they want; and ________________ eaters, who closely monitor their food intake and frequently go hungry.
5-8. While restrained eaters are constantly on guard to control their eating, at times they may lose control and eat to excess. In other words, restraint may be disrupted or ____________, with the result that people overeat. Paradoxically, then, restraint in eating may contribute to obesity.
Answers: 5-1. survival (adaptive) 5-2. biological, adoptive 5-3. identical, fraternal, genetics 5-4. genetics 5-5. set-point, particular 5-6. settling, a wide variety of 5-7. restrained 5-8. disinhibited.
SEXUAL MOTIVATION AND BEHAVIOR
6. Describe the impact of hormones in regulating animal and human sexual behavior.
6-1. The major female sex hormones are called _______________ and the major male sex hormones _______________. Both of these gonadal hormones occur in both sexes. The hormone testosterone is a key androgen.
6-2. While there is no one-to-one relationship, higher levels of the hormone _______________, a key androgen, are related to higher levels of sexual activity in (males only/females only/both sexes).
Answers: 6-1. estrogens, androgens 6-2. testosterone, both sexes.
7. Summarize evidence on the impact of erotic materials, including aggressive pornography, on human sexual behavior.
7-1. What is the relationship between exposure to erotic materials and sexual activity? One fairly dependable finding has been that erotic materials tend to (increase/decrease) the likelihood of sexual activity for a few (hours/weeks) after exposure.
7-2. Another effect involves attitudes. In the Zillmann and Bryant studies described, male and female undergraduate subjects exposed to heavy doses of pornography over a period of weeks developed attitudes about sexual practices that were more (liberal/conventional). Subjects also became (more/less) satisfied with their partners' appearance and sexual performance.
7-3. In general researchers (have/have not) found a link between exposure to erotic materials and sex crimes. In addition, pornography appears to play a (major/minor) role in the commission of sexual offenses.
7-4. Some laboratory studies, however, have found that pornography depicting violence against women (decreases/increases) men’s tendency to be aggressive toward women. In these studies aggression is defined as willingness to deliver (fake) electric shock to other subjects.
7-5. In addition, some laboratory studies have found that exposure to aggressive pornography makes sexual coercion or rape seem (less/more) offensive to the participants, a troublesome finding in view of current information about the prevalence of rape.
Answers: 7-1. increase, hours 7-2. liberal, less 7-3. have not, minor 7-4. increases 7-5. less.
8. Discuss parental investment theory and findings on human gender differences in sexual activity.
8-1. Trivers's parental investment theory is the idea that a species’ mating patterns are determined by the investment each sex must make to produce and nurture offspring. Human females are the ones who are pregnant for nine months and subsequently breast-feed their offspring. Therefore, according to this analysis, they have a greater ________________ in the child than do males.
8-2. Parental investment theory contends that the sex that makes the smaller investment will compete for mating opportunities, while the sex that makes the larger investment will be more selective of partners. Thus, males of many mammalian species will seek to mate with (as many/as few) females as possible, while females will optimize their reproductive potential by being (selective/unrestricted) in mating.
8-3. In line with predictions from parental investment theory and evolutionary theory in general, several studies have found that in comparison to women, men will show (1) (more/less) interest in sexual activity in general, (2) desire for a greater ________________ of sexual partners, and (3) more willingness to engage in (casual/committed) sex.
Answers: 8-1. investment 8-2. as many, selective 8-3. more, variety (number), casual.
9. Describe the Featured Study on culture and mating preferences.
9-1. According to evolutionary theories, what characteristics do human females look for in a male partner? What do males look for in a female partner?
9-2. More than 10,000 people participated in Buss’s study. The people surveyed were from (the United States/37 different cultures).
9-3. In one or two sentences, summarize the results of Buss’s study.
9-4. What conclusions can reasonably be drawn from Buss’s study? Place a T or F in the blanks.
_____ Some of the differences in mating preferences were universal across cultures.
_____ The data are consistent with evolutionary theories of sexual motivation.
_____ The data may be explained by alternative interpretations that do not derive from evolutionary theory.
One possible alternative explanation of the data is that women value men’s economic resources because their own potential has been restricted.
**Answers:** 9-1. According to the evolutionary theories, women want men who will be able to acquire resources (e.g., have education, money, status, ambition). Men want women who have good breeding potential (e.g., are beautiful, youthful, in good health). 9-2. 37 different cultures 9-3. Results supported predictions from the evolutionary theories: women placed more value than men on finding a partner with good financial prospects; men placed more value than women on the characteristics of youth and physical attractiveness (as summed up in the song Summertime from Gershwin’s *Porgy and Bess*, “Oh, your daddy’s rich, and your mama’s good looking.”) 9-4. All of these statements are true or represent reasonable inferences. Some differences were universal, and the data are consistent with evolutionary theories. At the same time there are alternative explanations involving the fact of discrimination against women in virtually all societies.
10. **Summarize evidence on the nature of sexual orientation and on how common homosexuality is.**
10-1. Sexual orientation refers to a person’s preference for emotional and sexual relationships with individuals of the other sex, the same sex, or either sex. Those who prefer relationships with the other sex are termed ________________, with the same sex ________________, and with either sex ________________.
10-2. Because people may have experienced homosexuality in varying degrees, it seems reasonable to consider sexual orientation as a(an) (continuum/all-or-none distinction). In part because of this definitional problem and in part due to prejudice against homosexuals, it is difficult to determine precisely the proportion of homosexuals in the population. A frequently cited statistic is 10%, but recent surveys place the proportion somewhere between ________________.
**Answers:** 10-1. heterosexuals (straights), homosexuals (gays or lesbians), bisexuals 10-2. continuum, 5-8%.
11. **Summarize evidence on the determinants of sexual orientation.**
11-1. What factors determine sexual orientation? Psychoanalysts thought the answer involved some aspect of the parent-child relationship. Behaviorists assumed that it was due to the association of same-sex stimuli with sexual arousal. Thus, both psychoanalytic and behavioral theorists proposed (environmental/biological) explanations of homosexuality.
11-2. Extensive research on the upbringing of homosexuals has (supported/not supported) the idea that homosexuality is primarily explainable in terms of environmental factors.
11-3. Recent studies have produced evidence that homosexuality is in part genetic. Which of the following types of studies have supported this conclusion? (Place Y for yes or N for no in the blanks.)
_____ Studies of hormonal differences between heterosexuals and homosexuals.
_____ Studies of twins and adopted children.
_____ Autopsy studies of the hypothalamus.
11-4. Subjects in one of the studies described were gay men who had an identical twin brother, a fraternal twin brother, or an adopted brother. For each of the categories, what percent of the brothers of the subjects were also gay? Place the appropriate percentages in the blanks: 11%, 22%, 52%.
11-5. LeVay (1991) has reported that a cluster of neurons in the anterior ____________________ is (smaller/larger) in gay men than in straight men. Since all of the gay men in this study had died of AIDS, which itself may produce changes in brain structure, these findings should be interpreted with caution. Nonetheless, these data support the idea that there are (environmental/biological) factors that are related to sexual orientation.
11-6. While most studies (have/have not) found a difference between gay and straight men in circulating hormones, many theorists do suspect that hormones in the prenatal environment are a factor. For example, researchers have found that offspring of women exposed, during pregnancy, to treatment with a synthetic ____________________ are more likely to be homosexual.
11-7. While much of the evidence points toward biological factors, the fact that identical twins turn out to share sexual orientation only half of the time suggests that ____________________ factors are involved in some way. What those factors might be remains unknown, however.
11-8. Homosexuality in men and women seems to follow somewhat different courses. Women’s sexual orientation appears to be more malleable, or have greater ____________________, than men’s. For example, women are more likely than men to change their sexual orientation in their adult years.
Answers: 11-1. environmental 11-2. not supported 11-3. N, Y, Y 11-4. 52%, 22%, 11%. Note that a companion study for lesbians found similar results. 11-5. hypothalamus, smaller, biological 11-6. hormone (androgen) 11-7. environmental 11-8. plasticity (fluidity, changeability).
12. Outline the four phases of the human sexual response.
12-1. Write the names of the four phases of the human sexual response in the order in which they occur. (Hint: I made up a mnemonic device that’s hard to forget. The first letter of each phase name produces EPOR, which happens to be ROPE spelled backward.)
(a) ____________________
(b) ____________________
(c) ____________________
(d) ____________________
12-2. In the blanks below write the first letter of each phase name that correctly labels the descriptions below.
_____ Rapid increase in arousal (respiration, heart rate, blood pressure, etc.)
_____ Vasocongestion of blood vessels in sexual organs; lubrication in female
_____ Continued arousal, but at a slower pace
_____ Tightening of the vaginal entrance
____ Pulsating muscular contractions and ejaculation
____ Physiological changes produced by arousal subside.
____ Includes a refractory period for men
Answers: 12-1. (a) excitement (b) plateau (c) orgasm (d) resolution 12-2. E, E, P, P, O, R, R.
ACHIEVEMENT: IN SEARCH OF EXCELLENCE
13. Describe the achievement motive and how it is measured.
13-1. People with high achievement motivation have a need to:
a. master difficult challenges
b. outperform others
c. meet high standards of excellence
d. all of the above
13-2. To measure need for achievement, researchers ask subjects to tell stories about a series of pictures (e.g., a man holding a violin). The test using pictures in this manner is known as the Thematic Apperception Test or __________.
Answers: 13-1. d 13-2. TAT.
14. Discuss how individual differences in the need for achievement influence behavior.
14-1. People who score high on need for achievement tend to differ from those who score low in the following ways (True/False):
_____ They work harder and persist longer.
_____ They are better able to handle negative feedback about performance.
_____ They seek immediate gratification and sacrifice future goals.
_____ They seek competitive, entrepreneurial occupations.
_____ They select tasks of intermediate (not too hard, not too easy) difficulty.
Answers: 14-1. T, T, F, T, T.
15. Explain how situational factors influence achievement strivings.
15-1. According to Atkinson's elaboration of McClelland's views, achievement-oriented behavior is determined not only by (1) achievement motivation but also by (2) the ________________ that success will occur and (3) the ________________ value of success.
15-2. As the difficulty of a task increases, the ________________ of success at the task decreases. At the same time, success at harder tasks may be more satisfying, so the ________________ value of the task is likely to increase. When both the incentive value and probability of success are weighed together, people with a high need for achievement would tend to select tasks of (extreme/moderate) difficulty.
Answers: 15-1. probability, incentive 15-2. probability, incentive, moderate.
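Atkinson's model is often summarized multiplicatively. A brief worked sketch (assuming the standard textbook formulation in which incentive value is the complement of the probability of success, a detail not spelled out in this chapter) shows why moderate difficulty wins out in 15-2:

$$T_s = M_s \times P_s \times I_s, \qquad I_s = 1 - P_s$$

For a fixed motive strength $M_s$, the product $P_s(1 - P_s)$ is $0.9 \times 0.1 = 0.09$ for a very easy task, $0.1 \times 0.9 = 0.09$ for a very hard task, but $0.5 \times 0.5 = 0.25$ at moderate difficulty, so the tendency to approach success peaks at tasks of intermediate difficulty.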
THE ELEMENTS OF EMOTIONAL EXPERIENCE
16. Describe the cognitive component of emotion.
16-1. The word cognition refers to thoughts, beliefs, or conscious experience. When faced with an ugly-looking insect (or, for some people, the edge of a cliff or making a speech in public), you might say to yourself, “This is terrifying (or maybe disgusting).” This thought or cognition has an evaluative aspect: we assess our emotions as pleasant or unpleasant. Thus, one component of emotion is the thinking or ________________ component, which includes ________________ in terms of pleasantness-unpleasantness.
Answers: 16-1. cognitive, evaluation.
17. Describe the physiological and neural bases of emotion.
17-1. The second component of emotion is the ________________ component, primarily actions of the ________________ nervous system (responsible for fight or flight). Your encounter with the insect might be accompanied by changes in heart rate, breathing, or blood pressure, or by increased electrical conductivity of the skin, known as the ________________ skin response (GSR).
17-2. Lie detectors don't actually detect lies; they detect ________________, reflected in changes in heart rate, respiration, and GSR. Emotional arousal does not necessarily reflect lying: some people can lie without showing arousal, and others show arousal when asked incriminating questions. Advocates claim that polygraphs are about 85% to 90% accurate; recent research (supports/does not support) this claim. In most courtrooms polygraph results (are/are not) considered reliable enough to be used as evidence.
17-3. Recent evidence suggests that the brain structure known as the ________________ plays a central role in emotion. For example, research has found that animals that have their amygdalas destroyed cannot learn classically conditioned ________________ responses.
17-4. The amygdala doesn’t process emotion by itself but is at the core of a complex set of neural circuits. According to LeDoux, sensory information relating to fear arrives at the thalamus and from there is relayed along two pathways, to the nearby ________________ and also to areas in the ________________.
17-5. LeDoux's theory includes the idea that the amygdala processes information extremely rapidly, which has clear ________________ value for the organism in threatening situations. The cortex responds more slowly but in greater detail and relays potentially moderating information to the amygdala. While the hub of this vigilance system seems to be the ________________, both pathways are useful in assessing threat.
17-6. We step into an elevator and are immediately terrified, reflecting the pathway centered in the _______________________. After thinking about the situation for a while we calm down, a reaction likely to involve the ______________________.
Answers: 17-1. physiological, autonomic, galvanic 17-2. emotion (autonomic arousal), does not support, are not 17-3. amygdala, fear 17-4. amygdala, cortex 17-5. survival (adaptive), amygdala 17-6. amygdala, cortex.
18. Discuss how emotions are reflected in facial expressions and explain the facial feedback hypothesis.
18-1. We communicate emotions not only verbally but _______________________, through our postures, gestures, and, especially, in our facial ________________________.
18-2. Ekman and Friesen found that there are _________ fundamental facial expressions of emotion: happiness, sadness, anger, fear, surprise, and disgust.
18-3. According to some researchers facial expressions not only reflect emotions but help create them. This viewpoint, known as the ________________________ hypothesis, asserts that facial muscles send signals to the brain that help produce the subjective experience of emotion. For example, turning up the corners of your mouth and crinkling your eyes will tend to make you feel ________________________.
Answers: 18-1. nonverbally (through body language), expressions 18-2. six 18-3. facial-feedback, happy.
19. Discuss cross-cultural similarities and variations in emotional experience.
19-1. Ekman and Friesen asked people in different cultures to label the emotion shown on photographs of faces. What did they find?
19-2. While there are considerable similarities in emotional expression across cultures, there are also striking differences. For example, word labels for sadness, anxiety, and remorse (occur/do not occur) in all cultures.
19-3. There are also cultural differences governing when people express particular emotions. For example, what emotions are you “supposed” to show at a funeral, or when watching a sporting event? The unwritten rules that regulate our display of emotion, known as ________________________ rules, vary considerably across cultures.
Answers: 19-1. People from very different cultures show considerable agreement in labeling facial expressions. 19-2. do not occur 19-3. display.
20. Compare and contrast the James-Lange and Cannon-Bard theories of emotion and explain how Schachter reconciled these conflicting views in his two-factor theory.
20-1. Suppose you saw a rat in your room (and assume that you are afraid of rats). Why would you be afraid? One would think that the process would be as follows: first you would be consciously aware of your fear of the rat, then you would experience the autonomic or visceral arousal that accompanies fear. The James-Lange theory reverses this process: we first experience the (visceral arousal/conscious fear) and then we experience (visceral arousal/conscious fear).
20-2. According to the James-Lange theory, then, fear and other emotions occur not as a result of different conscious experiences but as a result of different patterns of ________________ activation.
20-3. The Cannon-Bard theory argued that a subcortical structure in the brain (they thought it was the thalamus) simultaneously sends signals to both the cortex and the autonomic nervous system. According to this theory:
a. conscious fear would precede autonomic arousal
b. autonomic arousal would precede conscious fear
c. autonomic arousal and conscious fear would occur at the same time
20-4. According to Cannon-Bard, emotion originates in:
a. subcortical structures
b. the autonomic nervous system
c. conscious awareness
20-5. The Cannon-Bard theory contends that different emotions (e.g., fear, joy, love, anger) are accompanied by:
a. different patterns of autonomic arousal
b. nearly identical patterns of autonomic arousal
c. neither of the above
20-6. Schachter’s two-factor view is similar to the James-Lange theory in that (visceral arousal/conscious experience) is thought to precede the mental awareness of an emotion. The theory is similar to the Cannon-Bard theory in that (general/differentiated) autonomic arousal is assumed to account for a wide variety of emotions.
20-7. According to Schachter, when we experience general arousal we experience different emotions as a result of inferences we make from cues in the environment. Hence, the two factors in Schachter's theory are _________________ (roughly the same for all emotions) and _________________ (people's interpretation of the arousal based on the situation).
20-8. To review the three theories, label each of the following with the name of a theory:
(a) ___________________ The subjective experience of emotion is caused by different patterns of autonomic arousal.
(b) ___________________ Emotions cannot be distinguished on the basis of autonomic arousal; general autonomic arousal causes one to look for an explanation or label.
(c) ________________ Love is accompanied by a different autonomic pattern from hate.
(d) ________________ The subjective experience of emotion is caused by two factors, arousal and cognition.
(e) ________________ Emotions originate in subcortical brain structures; different emotions produce almost identical patterns of autonomic arousal.
(f) ________________ Ralph observes that his heart pounds and that he becomes a little out of breath at times. He also notices that these signs of arousal occur whenever Mary is around, so he figures that he must be in love.
Answers: 20-1. visceral arousal, conscious fear 20-2. autonomic (visceral) 20-3. c 20-4. a 20-5. b 20-6. visceral arousal, general 20-7. arousal, cognition 20-8. (a) James-Lange (b) Schachter’s two-factor (c) James-Lange (d) Schachter’s two-factor (e) Cannon-Bard (f) Schachter’s two-factor.
21. Summarize the evolutionary perspective on emotion.
21-1. By preparing an organism for aggression and defense, the emotion of anger helps an organism survive. From an evolutionary perspective, all emotions developed because of the ________________ value they have for a species.
21-2. Evolutionary theorists view emotions primarily as a group of (innate/learned) reactions that have been passed on because of their survival value. They also believe that emotions originate in subcortical areas, parts of the brain that evolved before the cortical structures associated with higher mental processes. In the view of evolutionary theorists, emotion evolved before thought and is largely (dependent on/independent of) thought.
21-3. How many basic, inherited emotions are there? The evolutionary writers assume that the wide range of emotions we experience represents blends or variations in intensity of approximately ________________ innate or prewired primary emotions.
Answers: 21-1. survival (adaptive) 21-2. innate, independent of 21-3. eight to ten.
REFLECTING ON THE CHAPTER’S THEMES
22. Explain how this chapter highlighted five of the text’s unifying themes.
22-1. Five of the text’s organizing themes were prominent in this chapter. Indicate which themes fit the following examples by writing the appropriate abbreviations in the blanks below: C for cultural contexts, SH for sociohistorical context, T for theoretical diversity, HE for heredity and environment, and MC for multiple causation.
(a) Achievement behavior is affected by achievement motivation, the likelihood of success, the likelihood of failure, and so on. _____
(b) Display rules in a culture tell us when and where to express an emotion. _____
(c) Changing attitudes about homosexuality have produced more research on sexual orientation; in turn, data from the research have affected societal attitudes. _____
(d) Body weight seems to be influenced by set point, blood glucose, and inherited metabolism. It is also affected by eating habits and acquired tastes, which vary across cultures. ____, ____, and ____.
(e) The James-Lange theory proposed that different emotions reflected different patterns of physiological arousal; Cannon-Bard theory assumed that emotions originate in subcortical structures; Schachter viewed emotion as a combination of physiological arousal and cognition ____.
Answers: 22-1. (a) MC (b) C (c) SH (d) HE, MC, C (e) T.
PERSONAL APPLICATION • EXPLORING THE INGREDIENTS OF HAPPINESS
23. Summarize information on factors that do not predict happiness.
23-1. Indicate whether each of the following statements is true or false.
(a) _____ There is very little correlation between income and happiness.
(b) _____ Younger people tend to be happier than older people.
(c) _____ People who have children tend to be happier than those without children.
(d) _____ People with high IQ scores tend to be happier than those with low IQ scores.
(e) _____ There is a negligible correlation between physical attractiveness and happiness.
23-2. List five factors discussed in your text that have little or no relationship to happiness.
Answers: 23-1. (a) T, (b) F, (c) F, (d) F, (e) T 23-2. money (income), age, parenthood (either having or not having children), intelligence, physical attractiveness.
24. Summarize information on factors that are moderately or strongly correlated with happiness.
24-1. Indicate whether each of the following statements is true or false.
(a) _____ One of the strongest predictors of happiness is good health.
(b) _____ Social support and friendship groups are moderately related to happiness.
(c) _____ Religious people tend to be somewhat happier than nonreligious people.
(d) _____ Marital status is strongly related to happiness; married people tend to be happier than single people.
(e) _____ Job satisfaction tends to be strongly related to general happiness; people who like their jobs tend to be happy.
(f) _____ Differences in personality have a negligible relationship to happiness; introverts, on the average, are just as happy as extraverts.
24-2. List three factors that are moderately correlated with happiness and three that are strongly correlated.
Answers: 24-1. (a) F (People adapt, so there is only a moderate relationship between health and happiness.), (b) T, (c) T, (d) T, (e) T, (f) F (People who are extraverted, optimistic, and have high self-esteem tend to be happier.)
24-2. Moderately related: health, social activity (friendship), and religion. Strongly related: marriage, work (job satisfaction), and personality.
25. Explain three conclusions that can be drawn about the dynamics of happiness.
25-1. One conclusion about happiness is that the objective realities of a situation are less important than our ______________ reactions to it.
25-2. In addition, the extent of our happiness depends on the comparison group. Generally, people compare themselves to others who are similar in some dimension, such as friends or neighbors. In the final analysis, our happiness is relative to the ________________ to which we compare ourselves.
25-3. A third conclusion is that our baseline for judging pleasantness and unpleasantness constantly changes. When good things happen, we shift our baselines (what we feel we “need” or want) upward; when bad things happen, we shift down. In other words, people _________________ to changing circumstances by changing their baselines of comparison. This process is termed ______________ adaptation.
Answers: 25-1. subjective 25-2. group (people) 25-3. adapt (adjust), hedonic.
CRITICAL THINKING APPLICATION • ANALYZING ARGUMENTS: MAKING SENSE OUT OF CONTROVERSY
26. Describe the key elements in arguments.
26-1. In logic, an argument is a series of statements that claims to prove something (whether it does or not). Arguments are composed of two major parts: a conclusion and one or more premises. The _______________ are statements intended to present evidence or proof. The _______________ supposedly derives from or is proved by the premises.
26-2. Consider this logical argument: “Any field of study that uses the scientific method is a science. Psychology uses the scientific method. Thus, psychology is a science.” Label the parts of the argument below (C for conclusion and P for premise).
_____ Any field of study that uses the scientific method is a science.
_____ Psychology uses the scientific method.
_____ Thus, psychology is a science.
Answers: 26-1. premises, conclusion 26-2. P, P, C. Note that this is an example of a valid argument. One may or may not agree with the premises (e.g., they may define science differently), but the conclusion logically follows from the premises.
27. Explain some common fallacies that often show up in arguments.
27-1. Read over the section on common logical fallacies described in your text. Then match the examples with the appropriate terms. (Suggestion: Use the abbreviations in parentheses for matching. Note that there are five fallacies and nine examples; some fallacies are used more than once.)
irrelevant reasons (IR)
circular reasoning (CR)
slippery slope (SS)
weak analogies (WA)
false dichotomy (FD)
(a) Trouble sleeping causes great difficulty in our lives because insomnia is a major problem for people. (Hint: Is the conclusion different from the premise?)
(b) People with insomnia should use the herb melatonin because insomnia is an enormous problem in our country. (Hint: Is the premise really related to the conclusion?)
(c) Vitamin C is extremely effective in slowing the aging process. I know it is effective because I have taken it for many years.
(d) Vitamin C is extremely effective in slowing the aging process. Obviously, the reason I take Vitamin C is that it works to reduce aging.
(e) An argument from the 1960s: If we don’t stop communism in Vietnam now, it will spread next to Laos, then to Cambodia, and then to the entire Southeast Asian Peninsula.
(f) An argument from the 1990s: We can fight in the Balkans now, or we can prepare for World War III. (Hint: Are these our only choices?)
(g) From 2003: You are either with us, or you are with the terrorists.
(h) From 2005: We can either fight them in Iraq, or we will have to fight them here.
(i) Ralph bought a mixmaster on a Tuesday in Peoria and it lasted a long time. If I buy a mixmaster on a Tuesday in Peoria, it should also last a long time.
Answers: 27-1. (a) CR. The premise and conclusion are the same. (b) IR. Insomnia may be a problem but that does not lead to the conclusion that the herb melatonin should be taken. (c) IR. The conclusion, the first statement in this example, is not related to the premise. Taking it for many years is not related to the conclusion that it is effective. (d) CR. The conclusion, the first statement, is simply a restatement of the premise. (e) SS. The argument is that if you allow one thing to happen, then a series of other things will inevitably happen. In fact, there may be no necessary connection between the hypothetical events. (f) FD. The choice seems to be between the two options, but, logically, these are not the only choices. We could also do both, or neither. (g) FD. We may not be with either. (h) FD. The implication is that there are only two choices. (i) WA. While the situations share some elements in common, this does not mean that they share all elements or that some elements cause others.
Review of Key Terms
Achievement motive Estrogens Motivation
Affiliation motive Galvanic skin response (GSR) Obesity
Androgens Glucose Orgasm
Argument Glucostats Polygraph
Assumptions Hedonic Adaptation Premises
Bisexuals Heterosexuals Refractory period
Body Mass Index (BMI) Homeostasis Set point theory
Collectivism Homosexuals Settling-point theory
Display rules Incentive Sexual orientation
Drive Individualism Subjective well-being
Emotion Lie detector Vasocongestion
1. Goal-directed behavior that may be affected by needs, wants, interests, desires, and incentives.
2. Cultural norms that regulate the expression of emotions.
3. One or more premises that are used to provide support for a conclusion.
4. The reasons presented in an argument to persuade someone that a conclusion is true.
5. Premises in an argument which are assumed but for which no proof or evidence is offered.
6. A measure of weight that controls for variations in height; weight in kilograms divided by height in meters, squared. (See the worked example after the answers.)
7. Refers to cultures that put group goals ahead of personal goals and define a person's identity in terms of the groups to which a person belongs.
8. Blood sugar.
9. Neurons that are sensitive to glucose.
10. The phenomenon in which one's baseline for judging pleasantness or unpleasantness shifts as circumstances change.
11. Refers to cultures that put personal goals ahead of group goals and define a person's identity in terms of personal attributes.
12. The theoretical natural point of stability in body weight.
13. The condition of being overweight.
14. The principal class of female sex hormones.
15. The principal class of male sex hormones.
16. A state of balance or equilibrium in the body.
17. An internal state of tension that motivates an organism to reduce tension.
18. Engorgement of the blood vessels during the human sexual response.
19. An external goal that motivates behavior.
20. A time following orgasm during which males are unresponsive to sexual stimulation.
21. Whether a person prefers emotional-sexual relationships with members of the same sex, the other sex, or either sex.
22. People who seek emotional-sexual relationships with members of the same sex.
23. People who seek emotional-sexual relationships with members of the other sex.
24. People who seek emotional-sexual relationships with members of either sex.
25. The view that body weight is determined by a wide variety of factors and that the body does not defend a particular point.
26. Individuals’ personal perceptions of their overall happiness and life satisfaction.
27. The need to master difficult challenges and to excel in competition with others.
28. An increase in the electrical conductivity of the skin related to an increase in sweat gland activity.
29. A reaction that includes cognitive, physiological, and behavioral components.
30. The technical name for the “lie detector.”
31. The informal name for polygraph, an apparatus that monitors physiological aspects of arousal (e.g., heart rate, GSR).
**Answers:** 1. motivation 2. display rules 3. argument 4. premises 5. assumptions 6. body mass index (BMI) 7. collectivism 8. glucose 9. glucostats 10. hedonic adaptation 11. individualism 12. set point theory 13. obesity 14. estrogens 15. androgens 16. homeostasis 17. drive 18. vasocongestion 19. incentive 20. refractory period 21. sexual orientation 22. homosexuals 23. heterosexuals 24. bisexuals 25. settling-point theory 26. subjective well-being 27. achievement motive 28. galvanic skin response (GSR) 29. emotion 30. polygraph 31. lie detector.
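Since definition 6 gives the BMI formula in words, a quick worked example (with hypothetical values of 70 kg and 1.75 m) may make it concrete:

$$\mathrm{BMI} = \frac{\text{weight in kg}}{(\text{height in m})^2} = \frac{70}{1.75^2} \approx 22.9$$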
---
**Review of Key People**
Match each of the following names with the appropriate description below; write the name in the blank. (Answers follow.)

David Buss, Walter Cannon, Paul Ekman & Wallace Friesen, William James, Joseph LeDoux, William Masters & Virginia Johnson, Stanley Schachter, David McClelland

1. ____________ Proposed that emotions arise in subcortical areas of the brain.
2. ____________ Prominent evolutionary theorist who explored, among many other topics, gender differences in human mate preferences.
3. ____________ Proposed the two-factor theory of emotion.
4. ____________ Proposed that the amygdala serves as a “hub” of rapid emotional response, especially to sensory input involving threat.
5. ____________ Did the ground-breaking work on the physiology of the human sexual response.
6. ____________ Is responsible for most of the early research on achievement motivation.
7. ____________ In a series of cross-cultural studies found that people can identify six or so basic emotions from facial expressions.
8. ____________ Thought that emotion arose from one’s perception of variations in autonomic arousal.
**Answers:** 1. Cannon 2. Buss 3. Schachter 4. LeDoux 5. Masters & Johnson 6. McClelland 7. Ekman & Friesen 8. James.
**Self-Test**

1. Which of the following are most likely to be similar in BMI (body mass)?
a. identical twins brought up in different family environments
b. fraternal twins brought up in the same family environment
c. parents and their adopted children
d. adopted siblings brought up in the same family environment
2. The subjective feeling of hunger is influenced by:
a. the amount of glucose in the bloodstream
b. secretion of insulin by the pancreas
c. external cues, including odor and appearance of food
d. all of the above
3. What is the effect of insulin on blood glucose?
a. Glucose level increases.
b. Glucose level decreases.
c. Glucose changes to free fatty acids.
d. CCK increases.
4. According to this theory, the sex that makes the larger investment in offspring (bearing, nursing, etc.) will be more selective of partners than the sex that makes the smaller investment.
a. adaptation level theory
b. parental investment theory
c. investment differentiation theory
d. social learning theory
5. Which of the following theories proposes that the body actively tries to defend a precise body weight by adjusting metabolism and hunger?
a. set-point theory
b. settling-point theory
c. adaptation level theory
d. parental investment theory
6. Which of the following would best reflect the James-Lange theory of emotion?
a. Thinking about the cause of general autonomic arousal produces different emotions.
b. Different patterns of autonomic activation produce different consciously experienced emotions.
c. Emotion originates in subcortical structures, which send signals to both the cortex and autonomic nervous system.
d. Cognitive awareness of emotion precedes autonomic arousal.
7. The fact that identical twins are more likely to share sexual orientation than fraternal twins suggests that sexual orientation is:
a. due to the environment
b. in part genetic
c. largely chemical
d. primarily hormonal
8. Cultural norms that indicate which facial expressions of emotion are appropriate on what occasions are termed:
a. display rules
b. parental investments
c. investment differentiations
d. social scripts
9. According to the evolutionary theories, men seek as partners women who:
a. are similar to them in important attitudes
b. have a good sense of humor
c. are beautiful, youthful, and in good health
d. have good financial prospects
10. What test is generally used to measure need for achievement?
a. the TAT
b. the GSR
c. the Rorschach
d. the MMPI
11. Evidence regarding facial expression in different cultures suggests that:
a. two-factor theory accounts for nonverbal behavior
b. facial expression of emotion is to some extent innate
c. emotions originate in the cortex
d. learning is the major factor in explaining basic facial expressions
12. Which of the following proposed that emotion arises from one’s perception or interpretation of autonomic arousal?
a. Schachter
b. Cannon-Bard
c. LeDoux
d. McClelland
13. Which of the following theories asserts that thinking or cognition plays a relatively small role in emotion?
a. two-factor theory
b. James-Lange theory
c. achievement theory
d. evolutionary theory
14. Of the following, which has been found to be most strongly associated with happiness?
a. physical attractiveness
b. health
c. job satisfaction
d. general intelligence
15. Someone exhorts people to take action against company policy, as follows: “We can oppose these changes, or we can live out our lives in poverty.” This type of argument reflects which of the following fallacies?
a. slippery slope
b. weak analogy
c. false dichotomy
d. circular reasoning
Answers: 1. a 2. d 3. b 4. b 5. a 6. b 7. b 8. a 9. c 10. a 11. b 12. a 13. d 14. c 15. c.
InfoTrac Keywords
Bisexuals
Collectivism
Display rules
Set point theory
Subjective well-being
---
Dr Michael Repacholi, WHO Co-ordinator, Radiation and Environmental Health Unit, welcomed the participants to the 10th Anniversary Meeting of the International Advisory Committee (IAC) of the WHO International Electromagnetic Fields Project. Dr Bernard Veyret (France) was elected chairman and Dr Qinghua He (China) was elected vice chairman. Dr Michael Repacholi then presented the Project’s organization and an update of activities. This year, there were nearly 80 attendees at the IAC. New countries and entities included Bahrain, Greece, Lebanon, the Palestinian Authority and Portugal, and new representatives of several member countries have joined the EMF Project since last year.
**Report on WHO Static Fields Health Risk Assessment**
Dr Eric van Rongen presented a report on the WHO Health Risk Assessment of static fields. The EHC was to include both Static and Extremely Low Frequency (ELF) Fields, but a review of the literature revealed an extensive database for static fields on their own, and it was decided to develop separate EHCs for ELF and Static Fields. A task group meeting for the Static Fields EHC was held at WHO, Geneva, on 6-10 December 2004. Follow-up work, including scientific editing, reference checking and language editing, has been finalized. Translation of the summary and recommendations into various languages is in progress.
**Progress Report on WHO ELF Fields Health Risk Assessment**
Dr Emilie van Deventer presented a progress report on the WHO ELF Fields Health Risk Assessment, outlining the background to the WHO EHC, existing and proposed EHCs on EMF, the EHC process, progress with the draft EHC and the current timetable. A task group meeting for the ELF EHC will be held at WHO, Geneva, 3-7 October 2005, and its publication is expected in early 2006. A draft of the ELF EHC is available for IAC members to download from the EMF Project website. The draft is organized by disease category and includes the following chapters: summary and recommendations; sources, measurements and exposures; internal dosimetry; biophysical mechanisms; neurobehavioral responses; neuroendocrine system; neurodegenerative disorders; cardiovascular disorders; immune system and haematology; reproduction and development; cancer; health risk assessment; and protective measures.
**Report on NIR activities from collaborating institutions and international organizations**
Dr Repacholi then invited reports from WHO collaborating institutions and international agencies which emphasized research programs under way.
International Electrotechnical Commission (IEC)
Dr Michel Bourdages reported that IEC/TC106, a Technical Committee concerned with standards for measurement and calculation methods for the assessment of EMF associated with human exposure, was developing a procedure for measuring the Specific Absorption Rate (SAR) in the human head from wireless communication devices in the frequency range of 300 MHz to 3 GHz. TC 106 is developing standards on measurement and calculation methods for the physical quantities specified in exposure standards (electric field strength, magnetic flux density and power density) for purposes of compliance. These standards will be general and applicable to any exposure limits; however, TC 106 does not have the mandate to establish exposure limits. In the low frequency range, TC 106 will develop tools to assess the effect of non-uniform fields and calculation methods for induced currents using more realistic 2D and 3D human models. In the high frequency range, TC 106 is developing standards on measurement methods for electromagnetic fields and Specific Absorption Rate (SAR), calculation methods for induced currents and SAR, and procedures to test compliance with exposure limits (conformity assessment). Additional information can be found on the IEC Internet site (www.iec.ch).
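As a rough illustration of the quantity that TC 106's procedures are designed to measure, the local ("point") SAR is conventionally written as SAR = σ|E|²/ρ, where σ is tissue conductivity, E the RMS electric field inside the tissue and ρ the tissue density. The sketch below illustrates that relation only, with hypothetical tissue values; it is not the IEC measurement procedure.

```python
# Illustrative sketch only: local ("point") specific absorption rate,
#   SAR = sigma * |E|^2 / rho   [W/kg]
# sigma: tissue conductivity (S/m), e_rms: internal RMS electric field (V/m),
# rho: tissue mass density (kg/m^3). All numbers below are hypothetical.

def point_sar(e_rms: float, sigma: float, rho: float) -> float:
    """Return the local specific absorption rate in W/kg."""
    return sigma * e_rms ** 2 / rho

if __name__ == "__main__":
    # Hypothetical tissue: 40 V/m internal field, 0.87 S/m, 1000 kg/m^3
    sar = point_sar(e_rms=40.0, sigma=0.87, rho=1000.0)
    print(f"Point SAR = {sar:.2f} W/kg")  # -> Point SAR = 1.39 W/kg
```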
International Committee for Electromagnetic Safety (ICES)
Dr Ralf Bodemann reported that a major goal of ICES/IEEE standardization is to facilitate international standards harmonization (e.g. closer cooperation with ICNIRP) in an open, transparent, and consensus-oriented process in which participation of all interested parties is welcome. A current focus is the IEEE draft standard on safety levels with respect to human exposure to radio frequency electromagnetic fields, 3 kHz to 300 GHz (C95.1-2005). The basic restrictions in the present draft of the revised RF standard are in agreement with those of ICNIRP, and MPE values for general public exposure up to 100 GHz harmonize with ICNIRP’s reference values. Other current activities include a Recommended Practice for an RF safety program.
International Commission on Non-Ionizing Radiation Protection (ICNIRP)
Dr Paolo Vecchia reported that ICNIRP conducts activities over the whole frequency spectrum. The term of the former Commission expired at the end of May 2004. Dr Vecchia presented the composition of the new Commission and of its Standing Committees. Present activities of ICNIRP in the area of EMF include a comprehensive review of the literature on RF fields (physics and dosimetry, biological studies, epidemiology), a revision of the guidelines on static magnetic fields, a revision of the guidelines on low frequency electric and magnetic fields (up to 100 kHz), and a statement on health issues related to emerging technologies. A revision of the guidelines on RF electromagnetic fields (100 kHz – 300 GHz) is also planned, with a lower priority.
Radiation Protection Division of the UK Health Protection Agency (HPA-RPD)
On 1 April 2005 the National Radiological Protection Board merged with the Health Protection Agency (HPA) forming its new Radiation Protection Division (RPD). In May 2004, NRPB recommended that the UK adopt the guidelines recommended by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) on exposure to EMFs. Dr. Alastair McKinlay summarized ongoing studies at HPA-RPD. Studies include "Effects of exposure to RF fields on spatial learning processes in mice", "Theoretical dosimetry studies" and "Experimental dosimetry studies". Some recent EMF-related publications from NRPB/HPA-RPD can be found at
www.hpa.org.uk/radiation/publications/documents_of_nrpb/abstracts/
www.hpa.org.uk/radiation/publications/w_series_reports/
He also summarized activities of the Independent Advisory Group on Non-Ionising Radiation Protection (AGNIR) whose ongoing program of work includes "Exposure to static magnetic fields", "Ultrasound and infrasound" and "Radiofrequency radiation and Power frequency electromagnetic fields". Further information on AGNIR can be found at:
www.hpa.org.uk/radiation/advisory_groups/agnir/index.htm#prog
Australian Radiation Protection and Nuclear Safety Agency (ARPANSA)
Dr Colin Roy reported that the government continues to provide funds for the EME Program until 2009. This program supports research and provides information to the public on health issues associated with RF-EMF. The EME program is coordinated by the Committee on Electromagnetic Energy Public Health Issues (CEMPHI) and is run by ARPANSA. The Australian component of the Interphone study, which is supported by this program, has been completed and submitted to scientific journals. Fact sheets in the ARPANSA CEMPHI EME series are available on the ARPANSA web site at: www.arpansa.gov.au/eme_pubs.htm
National Institute of Environmental Studies (NIES)
Dr Tomohiro Saito reported that a paper on a Japanese case-control study of childhood leukemia in relation to residential ELF-EMF exposure was accepted for publication in *Int J Cancer*. The Institute is also conducting assessments of individual exposure from domestic appliances.
Report on national concerns and key issues
Participating countries were asked to provide written reports on national activities over the past year, including current research programs, development or update of standards, and level of public concern associated with EMF. The national reports submitted are available at http://www.who.int/peh-emf/project/mapnatreps/en/index.html.
Standards
Following lunch, Dr Emilie van Deventer presented an updated draft of the WHO Framework for Developing Health-Based EMF Standards. She explained that WHO performs health risk assessment but does not develop EMF standards; ICNIRP, a full partner in the EMF Project, develops international standards, and the EMF Project facilitates international consensus on standards. Dr van Deventer outlined reasons for setting up a standards framework and discussed the key elements of EMF standard setting.
Dr Tom McManus then presented the draft WHO Model EMF Legislation. The document comprises the following three elements:
- A model act to enable an authority to initiate regulations and statutes that limit the exposure of its population to electromagnetic fields in the frequency range 0 Hz to 300 GHz.
- A model regulation, which sets out in detail the scope, application, exposure limits and compliance procedures that are permitted under the act to limit a population’s exposure to electromagnetic fields.
- An explanatory memorandum describing the approach to the act and its regulations.
He outlined the model act and model regulation including the following topics: purpose, objectives, scope, definitions and principles, exposure limits, compliance procedures, enforcement, record keeping and information.
Dr Michael Repacholi presented an update on the Precautionary Framework. In his presentation, he changed the title to "Framework to Guide Public Health Policy in Areas of Scientific Uncertainty". This framework aims to encourage the development of reasonable and realistic policy options. It includes consideration of cost-effectiveness instead of cost-benefit, because of the difficulty in defining benefits in areas of scientific uncertainty, and calls for iterative evaluations of policies and broad stakeholder participation. The framework will be finalized at the upcoming workshop in Ottawa in July 2005 and completed for EMF by the end of 2005.
Occupational Exposure to EMF
Dr Michael Repacholi presented the WHO/NIOSH document. The US National Institute of Occupational Safety and Health (NIOSH) is assisting in drafting a report "Managing EMF Workplace Exposures". The report will consist of 9 chapters including introduction (scope, purpose, audience and motivation), physical characteristics and interaction mechanisms, exposure measurements, occupational exposure guidelines on EMFs, description of typical exposure situations, guidelines for reducing occupational exposures, management plan/strategy/approach, responsibility and conclusions. The first draft will be distributed by the end of 2005 and reviewed before finalization and printing.
Dr K Hansson Mild presented EU Directive 2004/40/EC of the European Parliament and of the Council of 29 April 2004 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (electromagnetic fields). By 2008, it will be necessary in Europe to introduce measures protecting workers from the risks associated with electromagnetic fields, owing to their effects on the health and safety of workers. However, long-term effects are not addressed in this Directive. These measures are intended not only to ensure the health and safety of each worker on an individual basis, but also to create a minimum basis of protection for all workers.
Meeting adjourned on June 13 at 17:40
-------------------------------------------------
Meeting reconvened on June 14 at 09:00
Summary of WHO-sponsored workshops
Dr Michael Repacholi reported on the Istanbul meeting on "Children and EMF", which was held 9-11 June 2004. There have been suggestions that exposure of young children to EMF may be detrimental to their health, especially during the development and maturation of the central nervous system, immune system and other critical organs. In addition, children are exposed to EMF for a much greater part of their lifespan than adults. Use of mobile telephones by young children has been a concern expressed by the Stewart Committee report in the United Kingdom and others. The purpose of this workshop was to evaluate available information and summarize what conclusions can be made and what research is still needed to fill gaps in knowledge about any health concerns related to children's exposure to EMF. The Workshop concluded that children do not appear to be more sensitive, but that insufficient evidence is available to draw firm conclusions and focused research is needed. ICNIRP guidelines appear to be protective of children, as the public limits incorporate large safety factors. Speakers' presentations can be found at: http://www.who.int/peh-emf/meetings/children_turkey_june2004/en/index.html.
Dr K Hansson Mild summarized the Prague meeting on "electrical hypersensitivity", which was held 25-27 October 2004. Sensitivity to EMF has been given the general name "Electromagnetic Hypersensitivity" or EHS. It comprises nervous system symptoms such as headache, fatigue, stress and sleep disturbances; skin symptoms such as prickling, burning sensations and rashes; pain and aches in muscles; and many other health problems. EHS is sometimes a disabling problem for the affected persons, although the level of EMF in their neighborhood is usually no greater than is encountered in normal living environments, generally several orders of magnitude below the limits in internationally accepted standards. The aim of the conference was to review the current state of knowledge, provide a forum for discussion among the conference participants and propose ways forward on this issue. The Workshop concluded that EHS is characterized by a variety of non-specific symptoms that differ from individual to individual. The symptoms are certainly real and can vary widely in their severity; for some individuals they can change their lifestyle. The working group proposed the term "Idiopathic Environmental Intolerance (IEI) with attribution to EMF" to replace EHS, since the latter implies
that a causal relationship has been established between the reported symptoms and EMF. The provocation studies indicate that IEI individuals are no better than non-IEI individuals at detecting EMF exposure. By and large, well-controlled double-blind studies have shown that symptoms do not seem to be correlated with EMF exposure. Speakers' presentations can be found at: http://www.who.int/peh-emf/meetings/hypersensitivity_prague2004/en/index.html.
Dr Yury Grigoriev reported on the Moscow meeting on "Mobile Communication and Health: medical, biological, and social problems", which was held 20-22 September 2004. There has been a need to fully characterize exposure levels around base stations, to check compliance with national and international exposure standards, and to assess possible health risks of public exposure. These issues were debated within the conference, and experience with the handling of base stations in some of the participating countries was also discussed. The conference conclusions were as follows:

- The level of safety of electromagnetic sources should be evaluated with reference to accepted, science-based standards.
- Large discrepancies exist between Russian and international standards that justify actions towards harmonization.
- In view of the continuous development of telecommunications, it is recommended that further research be promoted and that international collaboration and information exchange be encouraged.
- In setting research needs and priorities, reference should be made to WHO's research agenda; the active contribution of Russian scientists to the periodic update of that agenda is sought.

Further information can be found at: http://www.who.int/peh-emf/meetings/archive/moscow04/en/index.html.
Dr Peter Gajsek reported on the Slovenia meeting "From Bioeffects to Legislation - International Conference on Electromagnetic Fields", which was held 8-9 November 2004. The aim of the conference was to answer our information-based society's most commonly asked question: do current EMF standards provide sufficient protection against EMF exposure? This question is particularly important since some new EU member states and candidate members use lower limit values in their standards and legislation in the field of EMF. There has been a strong move to use precautionary measures in the face of uncertainties in the science; unfortunately, some countries have seen fit to use the precautionary principle in a way that undermines the science on which the guidelines on exposure limits are based. The speakers discussed the sound scientific background of the EMF guidelines and provided advice to the governmental representatives on how to manage the EMF issue. In addition, a meeting was organized on the different models for EMF standards in new EU member states and candidate members and their possible harmonization, including a review of the current research activities in those countries. The conference conclusions were as follows:

- An assessment of the scientific evidence to date suggests that no adverse health consequences have been established at exposure levels below current international ICNIRP guidelines.
- National authorities in the EU, particularly in the new EU member states and candidate members, should protect their citizens and workers by adopting international guidelines, or by using the WHO framework for developing EMF standards for limiting exposure from EMF sources, and should encourage compliance with these standards.
- Additional precautionary measures can be adopted, provided they do not undermine the science-based guidelines. Such measures could address aspects such as emission limits or technical measures to reduce fields from EMF sources, but should not modify the science-based exposure limits.

Speakers' presentations can be found at http://www.jrc.cec.eu.int/emf-net/events.cfm?yearevent=2004.
Research
Dr Bernard Veyret presented a research review of the past year. He reviewed studies published in peer-reviewed journals across all frequency ranges, excepting epidemiological studies and studies on hypersensitivity. He concluded that most of the recommendations in the WHO research agenda are being addressed. Improvement in the quality of RF-EMF exposure systems is continuing. About half of laboratory investigations are replication studies. Studies on therapeutic effects of EMF are increasing.
Dr Michael Repacholi reported on the WHO Research Agenda (http://www.who.int/peh-emf/research/agenda/en/index.html), which was set up to identify gaps in knowledge and formulate research areas that will help inform health risk assessment of EMF. Over the past year, research recommendations have followed the Children's Workshop and the Static Fields EHC. Research needs for ELF will be updated during the Task Group meeting in October 2005. The RF research agenda was updated in 2003 and again in 2004 following the Children's Workshop. The next update will follow the RF Task Group meeting in 2006-7.
Dr Repacholi also mentioned the French-Russian study. Studies conducted in the former Soviet Union suggested that microwave irradiation of rats disrupted the antigenic structure of brain tissue. The studies formed a basis for the Soviet microwave standard and still do for the Russian and Chinese standards. Replication of the studies will be carried out jointly by Russian and French researchers, commencing in mid-2005, and will take a year to complete; a protocol for the replication studies has been agreed and funding obtained. WHO provides oversight to the study.
Dr Anders Ahlbom reported on the mobile phone cohort study, which was set up to determine whether a full cohort study could be carried out to identify any adverse health consequences of using a mobile phone. A full cohort study is now being planned. The study aims to establish a cohort of 250,000 mobile phone users, aged 18 and above; to characterize mobile phone use repeatedly, with traffic data and questionnaires; to obtain baseline information on health status and confounders; and to follow the cohort, initially for five years, collecting health information through registries and questionnaires. Five countries (Denmark, Finland, Germany, Sweden and the United Kingdom) are involved in the study.
Dr Elizabeth Cardis gave an update on the INTERPHONE study. She also proposed a new cohort study, "INTERPHONE kids", for children using mobile phones. A pilot study needs to be developed and implemented before a full cohort study is conducted.
**Review of Fact Sheets**
Following lunch, Dr Chiyoji Ohkubo gave a review of the fact and information sheets. The WHO International EMF Project has published many fact sheets and information sheets regarding electromagnetic fields and public health. Currently, simple and easy-to-read information is provided through two formats: fact sheets provide a list of facts only and are formally approved at the Director-General's level, while information sheets contain both facts and general recommendations for national authorities and are approved at the Director's level. Archived WHO fact sheets on electromagnetic fields and public health have already been published in 15 different languages. Over the past year, fact sheets have been translated into Slovenian, Greek, Arabic and Spanish. Information sheets published on the Project web site over the past year include Effects of EMF on the Environment, Intermediate Frequencies (IF) and Microwave Ovens. These information sheets can be found in several languages, including Arabic, Japanese and Spanish.
Several new fact sheets, resulting from activities over the past year, are being compiled and will be placed on the web site shortly, including Medical Response to RF Overexposure, Children and EMF, Electrical Hypersensitivity and Mobile Phone Base Stations and Wireless Networks (to be published following the workshop in June 2005).
**Communication activities**
Dr Emilie van Deventer gave an update on the WHO publications and web site. Six scientific papers were published over the past year, including Droit de l'environnement dans la pratique, 8, 708-724, 2004; Progress in Biophysics and Molecular Biology, 87, 355-363, 2005; Pediatrics (to be published August 2005); and Risk Analysis (to be published August 2005). Other publications include the Risk handbook, which is being translated into Dutch, French, German, Italian, Japanese, Russian and Spanish; the translations are available on the WHO EMF Project web site. The EMF Project pamphlet has been replaced by a new version, which can also be downloaded from the web site. As a result of activities over the past year, information on the national contacts and national activities of member states has been updated on the web site.
Dr Colin Roy reported on a new wireless communications brochure for local authorities that would update the WHO EURO brochure dated 1999. The draft document includes the following: Summary, What are EMF & Radiation, Background to public concern, RFR and health, Regulation aspects, What can a local authority do about RFR, Common sources of public exposure, Typical exposure from these sources, Recommendations, Q & A and a Technical annex.
Dr Chiyoji Ohkubo reported on the WHO Research Database (http://www.who.int/peh-emf/research/database/en/index.html). The database has been assembled as a service to the research community. Its purpose is to inform researchers worldwide about projects relevant to WHO's EMF Research Agenda that still need to be conducted or that are in progress. In cooperation with WHO, FGF (the Research Association for Radio Applications, Germany) makes regular updates to the research database. The database is divided into 8 categories. He explained the major study types (dosimetry, epidemiology, animal studies, cellular studies and human provocation studies). As of 27 May 2005, 1199 studies had been submitted to the WHO database, and 611 studies had already been published.
Dr Heinz-Günter Neuse from FGF also gave a report on the Research Database. One hundred and sixty-one studies are declared as “ongoing”. When FGF started its investigation in March 2004 to find out whether all studies marked “ongoing” were still ongoing, the database contained 894 studies, of which 170 were marked “ongoing”. FGF first asked the funding agencies to update the database, and then asked the Principal Investigators. By the end of 2004, FGF had clarified the status of around 100 studies; it could not obtain information on 70 studies.
Dr Dina Simunic gave an update on the Worldwide Standards Database (http://www.who.int/docstore/peh-emf/EMFStandards/who-0102/Worldmap5.htm). The EMF Project has compiled a worldwide EMF standards database. Over 50 countries on all continents have already added their national tables. Three quarters of these countries have adopted the ICNIRP guidelines (1998). Each country was requested to add its original documents on EMF standards, where possible with an English translation, to the WHO web site. Approximately half of the countries in the database have information in the form of an EMF handbook, reports, pamphlets or fact sheets.
**Administrative business**
Finally, Dr Michael Repacholi reported on administrative business. Nine meetings are expected by the end of 2005. Several booklets are planned for the coming year. Other future activities include distance learning programs and drafting of the RF EHC (Environmental Health Criteria) chapter.
He also explained the current status of funding. There is a small reserve of funds which will soon be depleted; a concerted funding drive is needed to complete the Project activities already started.
The next IAC meeting will be held in June 2006 at a time to be determined.
Mixed effects of effluents from a wastewater treatment plant on river ecosystem metabolism: subsidy or stress?
IBON ARISTI*, DANIEL VON SCHILLER†, MAITE ARROITA*, DAMIÀ BARCELÓ†,‡, LÍDIA PONSATÍ†, MARIA J. GARCÍA-GALÁN†, SERGI SABATER†,§, ARTURO ELOSEGI* AND VICENÇ ACUÑA†
*Faculty of Science and Technology, The University of the Basque Country, Bilbao, Spain
†Catalan Institute for Water Research (ICRA), Girona, Spain
‡Department of Environmental Chemistry, IIQAB-CSIC, Barcelona, Spain
§Institute of Aquatic Ecology, University of Girona, Girona, Spain
SUMMARY
1. The effluents of wastewater treatment plants (WWTPs) include a complex mixture of nutrients and pollutants. Nutrients can subsidise autotrophic and heterotrophic organisms, while toxic pollutants can act as stressors, depending, for instance, on their concentration and interactions in the environment. Hence, it is difficult to predict the overall effect of WWTP effluents on river ecosystem functioning.
2. We assessed the effects of WWTP effluents on river biofilms and ecosystem metabolism in one river segment upstream from a WWTP and three segments downstream from it, following a pollution gradient.
3. The photosynthetic capacity and enzymatic activity of biofilms showed no change, with the exception of leucine aminopeptidase, which followed the pollution gradient, most likely driven by changes in organic matter availability. The effluent produced mixed effects on ecosystem-scale metabolism. It promoted respiration (subsidy effect), probably as a consequence of enhanced availability of organic matter. On the other hand, and despite enhanced nutrient concentrations, photosynthesis–irradiance relationships showed that the effluent partly decoupled primary production from light availability, thus suggesting a stress effect.
4. Overall, WWTP effluents can alter the balance between autotrophic and heterotrophic processes and produce spatial discontinuities in ecosystem functioning along rivers as a consequence of the mixed contribution of stressors and subsidisers.
Keywords: ecosystem functioning, metabolism, photosynthesis versus irradiance curve, pollution, subsidy–stress effect
**Introduction**
Pollution from point sources such as wastewater treatment plants (WWTPs) is a common impact on river ecosystems (Bernhardt & Palmer, 2007; Grant et al., 2012), especially in conurbations (United Nations Population Division, 2006). For example, more than 2500 WWTPs have been put into operation over the last three decades in Spain (Serrano, 2007). As WWTPs do not remove all contaminants from sewage waters (Rodríguez-Mozaz et al., 2015), their effluents contribute a complex mixture of contaminants to freshwater ecosystems (Ternes, 1998; Petrovic et al., 2002; Kolpin et al., 2004; Gros, Petrović & Barceló, 2007; Merseburger et al., 2009). WWTPs release nutrients and organic matter (Martí et al., 2004), together with emerging contaminants such as pharmaceuticals and personal care products (Kuster et al., 2008; Ginebreda et al., 2010). Therefore, WWTPs contribute both assimilable contaminants such as dissolved nutrients and organic matter, which subsidise biological activity (at least up to a threshold beyond which they can suppress it), and toxic contaminants, which are deleterious
to organisms and tend to suppress biological activity (Odum, Finn & Franz, 1979). However, most previous studies of the effects of WWTP effluents on ecosystem processes have only considered their subsidy effects (Martí et al., 2004; Merseburger, Martí & Sabater, 2005; Gücker, Brauns & Pusch, 2006; Ribot et al., 2012).
When in excess, assimilable substances entering fresh waters via WWTP effluents can impair water quality, alter the structure of biological communities, cause harmful algal blooms and affect ecosystem functioning (Smith, 2003; Sutton et al., 2011). These substances promote the biomass and activity of both primary producers (algae, macrophytes) and microbial heterotrophs (bacteria, fungi), which are able to use dissolved nutrients and organic matter (Stelzer, Heffernan & Likens, 2003). Moreover, their effects can transmit upwards to other trophic levels (Hart & Robinson, 1990) and eventually affect the entire ecosystem (Woodcock & Huryn, 2005; Izagirre et al., 2008; Bernot et al., 2010; Cabrini et al., 2013). Functioning of freshwater ecosystems can respond linearly to the concentration of assimilable contaminants such as nutrients (Yates et al., 2013; Silva-Junior et al., 2014), but hump-shaped responses have also been observed (Clapcott et al., 2011; Woodward et al., 2012). The toxic contaminants entering fresh waters via WWTP effluents can have direct detrimental effects on aquatic life (Hernando et al., 2006; de Castro-Catala et al., 2014), especially when they occur in mixtures (Cleuvers, 2003). Toxic contaminants reduce the abundance and alter the composition of biofilms (Wilson et al., 2003; Ponsatí et al., In revision) and invertebrate communities (Muñoz et al., 2009; Alexander et al., 2013; Clements, Cadmus & Brinkman, 2013) and can also affect the rates of ecosystem processes (Bundschuh et al., 2009; Moreirinha et al., 2011; Rosi-Marshall et al., 2013). Autotrophic processes seem to be more sensitive to WWTP pollutants than heterotrophic processes (Proia et al., 2013; Corcoll et al., 2014), but the reasons behind these differences are still far from clear.
Consequently, and depending on their composition and the resulting concentrations in rivers, WWTP effluents can act either as a subsidy or as a stressor for the receiving ecosystem (Cardinale, Bier & Kwan, 2012). Furthermore, the potential response to contaminants differs between groups of organisms, and ecological interactions add a level of complexity (Segner, Schmitt-Jansen & Sabater, 2014), as, for instance, when detrimental effects on some organisms promote the activity of others by releasing them from competition or predation (e.g. Alexander et al., 2013). Therefore, the response to pollution can differ from the scale of individual components, such as biofilms, to the scale of the whole ecosystem, as already shown for other environmental pressures such as flow regulation (Aristi et al., 2014; Ponsatí et al., 2014).
We examined whether WWTP effluents were a subsidy or a stress for river ecosystem functioning by comparing one upstream river segment with three downstream segments in a gradient of nutrient and toxic concentrations. We hypothesised: (i) that WWTP effluents affect autotrophic and heterotrophic metabolism differently; (ii) that effects decrease downstream as contaminants such as nutrients and toxic pollutants (of which we used pharmaceuticals as a proxy) decrease following natural attenuation processes; and (iii) that the downstream trajectories differ between autotrophic and heterotrophic metabolism because of their different responses to the subsidy–stress effects of WWTP effluents.
**Methods**
**Study design**
The study was conducted in the Segre River, a tributary of the Ebro River in the Oriental Pyrenees (NE Iberian Peninsula). At the study site (UTM X: 411856 and UTM Y: 4698346, 31N/ETRS 89), the Segre drains an area of 287 km$^2$, with a rain/snow-fed flow regime. The river runs through a gravel bed meandering channel across a broad valley mainly covered with native forests but also with some pastures and small agricultural fields. Near the town of Puigcerdà, it receives the effluent from a WWTP that treats sewage from c. 30 000 population equivalents.
We compared a control reach (CR) upstream from the WWTP effluent with a 4000-m-long impact reach downstream (IR). In the latter, we selected three impact segments at increasing distances from the WWTP effluent: 500–1500 m (IR1), 1500–2500 m (IR2) and 2500–4500 m (IR3). Hereafter, we refer to all of them (control plus impacts) as segments for simplicity, and use the term reach only when making overall comparisons between conditions upstream and downstream from the WWTP. Acuña et al. (2015) showed that dilution and self-purification reduce the total concentration of pharmaceuticals by 37% along the impact segments.
**Environmental measurements**
Above-canopy global radiation (GLR) data were obtained from the meteorological station of the Catalan
Meteorological Service (Das, Catalan Meteorological Service, located at c. 5 km from the studied reach). Radiation reaching the streambed was estimated by filtering the series of data of global radiation by light interception coefficients calculated by the Hemiview canopy analysis software (version 2.1; Dynamax Inc., Houston, TX, U.S.A.). Hemiview was used to perform image analysis of hemispherical photography determining the gap fraction, contributions of direct and diffuse solar radiation from each sky direction, site factors and leaf area index (LAI). Hemispherical photographs of the canopy were taken during the study period (9–10 October 2012) and every 50 m in all the study reaches, with a high-resolution digital camera (Nikon D-70s; NIKON Corporation, Tokyo, Japan) fitted to a 180° fisheye (Fisheye-NIKKOR 8 mm; NIKON Corporation). Water velocity and discharge were measured at the end of each river reach, according to the methods of Gore and Hamilton (1996) using an acoustic Doppler velocity meter (FlowTracker Handheld-ADV®, SonTek, San Diego, CA, U.S.A.).
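Conceptually, the filtering step scales the above-canopy radiation series by the fraction of light that penetrates the canopy. Below is a minimal Python sketch of that idea, assuming a single scalar interception coefficient per segment (Hemiview actually resolves direct and diffuse components by sky direction); all names and numbers are illustrative, not values from the study.

```python
import numpy as np

def streambed_radiation(glr, interception):
    """Radiation reaching the streambed: above-canopy global
    radiation reduced by the canopy interception fraction.

    glr          : array of global radiation (MJ m-2 day-1)
    interception : fraction of incoming light intercepted by the
                   canopy (0-1), e.g. from gap-fraction analysis
    """
    return glr * (1.0 - interception)

# Illustrative daily series and a segment-mean interception
glr = np.array([15.2, 14.8, 16.1])   # MJ m-2 day-1, above canopy
print(streambed_radiation(glr, interception=0.55))
```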
Water temperature, conductivity and pH were measured with hand-held probes (WTW Multiline 3310; YSI ProODO handheld; YSI Inc., Yellow Springs, OH, U.S.A.) at the end of each river segment at noon and midnight. Water samples were collected in parallel, filtered through fibreglass filters (Whatman GF/F, 0.7 μm nominal pore size; Whatman International Ltd., Maidstone, UK) and frozen at −20 °C until analysis. Ammonium concentration was analysed by ion chromatography using a Dionex ICS-5000 (Dionex Corporation, Sunnyvale, CA, U.S.A.), phosphate by colorimetry using an Alliance-AMS Smartchem 140 spectrophotometer (AMS, Frepillon, France) and DOC with a Shimadzu TOC-V CSH analyser (Shimadzu Corporation, Kyoto, Japan). For suspended particulate organic matter (SPOM), three water samples (each 2 L) were filtered through pre-ashed and pre-weighed Whatman GF/F filters. Filters were frozen for transport, and once in the laboratory, they were dried (70 °C, 72 h), weighed, ashed (500 °C, 5 h) and re-weighed to estimate ash-free dry mass (AFDM).
Ten pharmaceuticals belonging to different therapeutic families were measured as a proxy of the concentration of other contaminants within each river segment, from samples collected in parallel to those for nutrients, filtered through nylon filters (0.2-μm mesh; Whatman, Maidstone, U.K.) and kept at −20 °C until analysis. Analysis of pharmaceuticals was performed following the fully automated on-line methodology described in detail by García-Galán et al. (unpublished manuscript available from the author on request). Briefly, 5 mL of surface water was loaded on the on-line chromatographic system (Thermo Scientific EQuan™, Franklin, MA, U.S.A.), consisting of two quaternary pumps and two LC columns, one for pre-concentration of the sample and the second for chromatographic separation. The sample was then eluted by means of the mobile phase into the coupled mass spectrometer (TSQ Vantage triple quadrupole; Thermo Scientific). Chromatographic separation was achieved using a Thermo Scientific Hypersil Gold™ column (50 × 2.1 mm, 1.9 μm particle size). Target compounds were analysed under dual negative/positive electrospray ionisation in multiple reaction monitoring (MRM) mode, monitoring two transitions between the precursor ion and the most abundant fragment ions for each compound. Recoveries of the compounds ranged between 62% and 183% (sulfamethoxazole and ibuprofen, respectively), whereas limits of detection ranged from 0.81 to 7.86 ng L⁻¹ (sulfamethoxazole and venlafaxine, respectively).
**Benthic organic matter and biofilm characteristics**
Five Surber net (0.09 m², 0.2 mm mesh size) samples for benthic organic matter (BOM) were taken at random from each segment; the material was frozen for transport and, once in the laboratory, dried (70 °C, 72 h) and ashed (500 °C, 5 h) to calculate AFDM. Chlorophyll-\(a\) (Chl-\(a\)) samples were obtained from the upper exposed part of cobbles. From each cobble, a surface of 2–3 cm² was scraped with a knife, and the scrapings were pooled to obtain a combined sampling area of 9–18 cm² according to the available biomass. Five replicates were taken in each river segment. Samples were immediately frozen (−20 °C) until analysis. In the laboratory, Chl-\(a\) was extracted with 90% v/v acetone overnight at 4 °C and quantified spectrophotometrically (Shimadzu UV1800) after filtration (Whatman GF/C, 1.2 μm) following Jeffrey & Humphrey (1975).
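The AFDM computation itself is simply the loss on ignition, scaled by the sampled area where relevant. A small Python sketch with made-up masses (not data from the study):

```python
def afdm(dry_mass_g, ashed_mass_g, area_m2=None):
    """Ash-free dry mass: organic matter lost on ignition (500 degC).
    Returns g AFDM, or g AFDM m-2 if a sampled area is supplied.
    """
    organic = dry_mass_g - ashed_mass_g
    return organic / area_m2 if area_m2 else organic

# One hypothetical Surber sample (0.09 m2): 1.80 g dry, 0.95 g ash
print(afdm(1.80, 0.95, area_m2=0.09))   # ~9.4 g AFDM m-2
```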
Biofilm functioning was measured on colonised artificial substrata. Unglazed ceramic tiles of 1.25 × 1.25 cm were glued in groups of 110 units onto flat 20 × 20 cm bricks, and three bricks per segment were incubated at a depth of 30 cm in the field for 6 weeks (30 August 2012 to 10 October 2012) to allow for biofilm colonisation. On 9–10 October, ceramic tiles from each of the three bricks were sampled to measure photosynthetic and respiratory capacity and enzymatic activities.
Photosynthetic capacity measurements [effective quantum yield (\(Y_{\text{eff}}\)), maximum photosynthetic capacity (\(Y_{\text{max}}\)), photochemical quenching (PQ) and non-photochemical quenching (NPQ)] were determined in the field with a Diving-PAM (pulse amplitude modulated) underwater fluorometer (Heinz Walz, Effeltrich, Germany). Ceramic tiles were placed in individual glass vials, filled with 4 mL of stream water and kept for 20 min in the dark at river temperature to obtain the maximum Chl-\(a\) fluorescence ($F_0$), and later exposed to natural light to measure the fluorescence yields ($Y_{\text{eff}}$ and $Y_{\text{max}}$) and quenching (PQ and NPQ) (Genty, Briantais & Baker, 1989). $Y_{\text{eff}}$ and $Y_{\text{max}}$ were used as indicators of the photosynthetic efficiency and the maximal photosynthetic capacity of the algal community, respectively. NPQ was used as an indicator of the algal capacity to dissipate excess light under stress conditions (Corcoll et al., 2011).
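The text does not give the fluorescence formulas explicitly; assuming the study's $Y_{\text{max}}$, $Y_{\text{eff}}$ and NPQ correspond to the standard dark-adapted $F_v/F_m$, light-adapted $\Delta F/F_m'$ (Genty et al., 1989) and Stern–Volmer NPQ parameters, the arithmetic is as in the following sketch (fluorescence values are illustrative):

```python
def pam_parameters(f0, fm, f, fm_prime):
    """Standard PAM fluorescence parameters (assumed formulas).

    f0       : minimal fluorescence, dark-adapted
    fm       : maximal fluorescence, dark-adapted
    f        : steady-state fluorescence in the light
    fm_prime : maximal fluorescence in the light
    """
    y_max = (fm - f0) / fm              # Fv/Fm, maximum capacity
    y_eff = (fm_prime - f) / fm_prime   # effective quantum yield
    npq = (fm - fm_prime) / fm_prime    # non-photochemical quenching
    return y_max, y_eff, npq

print(pam_parameters(f0=300.0, fm=900.0, f=420.0, fm_prime=700.0))
```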
The respiratory capacity (electron transport system, ETS) of the biofilm was determined by the reduction of the electron transport acceptor INT (2-(p-iodophenyl)-3-(p-nitrophenyl)-5-phenyl tetrazolium chloride) to INT-formazan (iodonitrotetrazolium formazan) (Blenkinsopp & Lock, 1990). Ceramic tiles were placed in individual glass vials with 4 mL of filtered stream water (Whatman Nylon Membrane 0.2-μm mesh) and kept in the dark at 20 °C. For an INT solution blank, an additional tile was taken and fixed with 4% formaldehyde. Incubations were carried out with the addition of 3 mL of 0.02% INT solution for 8 h in the dark with continuous shaking. Samples were frozen at −20 °C after solution removal. Once in the laboratory, INT was extracted with cold methanol for 1 h at 4 °C in the dark. The extract was filtered (Whatman GF/C) and quantified spectrophotometrically at 480 nm with a standard solution of 0–60 μg L$^{-1}$ of INT-formazan (Sigma-Aldrich, St Louis, MO).
We measured activities of three selected extracellular enzymes: alkaline phosphatase (AP, an enzyme linked to phosphorus acquisition), β-glucosidase (BG, involved in the degradation of small organic compounds) and leucine aminopeptidase (LAP, linked to the use of peptides and proteins as a source of nitrogen). Activities were determined using substrate analogues of MUF (methylumbelliferyl) and AMC (aminomethylcoumarin) [4-MUF-phosphate (AP), 4-MUF-β-D-glucosidase (BG) and L-leucine-AMC (LAP), all from Sigma-Aldrich]. Ceramic tiles and MUF/AMC substrate blanks were placed in individual glass vials with 4 mL of filtered stream water (Whatman Nylon Membrane, 0.2-μm mesh) and incubated with 0.120 mL of each substrate (0.3 mmol L$^{-1}$) to ensure substrate saturation (Romani & Sabater, 1999). Incubation was carried out in the dark with continuous shaking for 1 h at 20 °C. Two blanks of filtered stream water were also incubated. After addition of 4 mL of 0.05 M glycine buffer (pH 10.4), samples were frozen at −20 °C. Once in the laboratory, samples and standard calibrating solutions of MUF and AMC were thawed and quantified by spectrofluorometry (Fluorescence Spectrophotometer F-7000, Hitachi, Tokyo, Japan; Romani & Sabater, 1999).
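Converting the fluorescence readings into activities requires a standard curve. A hedged Python sketch of that conversion, with a hypothetical linear calibration and an approximate total incubation volume (4 mL water + substrate + 4 mL buffer ≈ 8 mL); none of these numbers come from the study:

```python
import numpy as np

def enzyme_activity(f_sample, f_blank, std_f, std_conc,
                    vol_ml, area_cm2, hours):
    """Enzyme activity (nmol cm-2 h-1) from blank-corrected
    fluorescence via a linear MUF/AMC standard curve."""
    slope, _ = np.polyfit(std_f, std_conc, 1)   # nmol mL-1 per unit
    conc = slope * (f_sample - f_blank)         # nmol mL-1 released
    return conc * vol_ml / (area_cm2 * hours)

std_f = np.array([0.0, 50.0, 100.0, 200.0])     # fluorescence units
std_conc = np.array([0.0, 0.5, 1.0, 2.0])       # nmol MUF mL-1
# One 1.25 x 1.25 cm tile (~1.56 cm2), 1 h incubation, ~8 mL volume
print(enzyme_activity(130.0, 10.0, std_f, std_conc,
                      vol_ml=8.0, area_cm2=1.56, hours=1.0))
```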
**River ecosystem metabolism**
Metabolism was calculated from diel dissolved oxygen (DO) changes by the open-system method with either one or two stations (Odum, 1956; Reichert, Uehlinger & Acuña, 2009). We chose the best method (single-station or two-station) to estimate ecosystem metabolism in each segment following Reichert et al. (2009): we compared the ratio of flow velocity to reaeration coefficient (v : k) with segment length and used the single-station method in reaches longer than three times the v : k ratio and the two-station method in shorter reaches. Thus, we used the single-station method for segments CR and IR1, and the two-station method for IR2 and IR3. DO was measured at 10-min intervals for 20 days (from 21 September to 10 October 2012) at the upstream and downstream ends of each river segment with optical oxygen probes (YSI 6150 connected to YSI 600 OMS; YSI Inc., Yellow Springs, OH, U.S.A.) from which 10 days under base flow conditions were used. The reaeration coefficient was determined using slug additions of mixed tracer solutions (Jin et al., 2012). Solutions of propane-saturated water were prepared in the laboratory by filling hermetic 20-L plastic tanks with 10 L of distilled water and 10 L of 99% pure propane gas (Linde Industrial Gases, Barcelona, Spain). The solutions were prepared a few days before the additions and shaken to allow sufficient time for propane to dissolve into the water. A total of three slug additions were performed: the first covering IR3, the second covering IR1 and IR2 and the third covering CR. For each slug addition, two of the propane-saturated water solutions were added *in situ* to 60-L containers filled with a solution of 40 L of stream water with a measured amount of conservative solute tracer (chloride as NaCl). Immediately after mixing, the solutions were added into the stream channel at c. 400 m upstream from the first sampling point to allow for complete lateral mixing. The breakthrough curves of chloride were followed at each station using a hand-held conductivity meter (WTW, Weilheim in Oberbayern, Germany). Five replicate water samples were collected at the conductivity peak using 60-mL plastic syringes fitted with stopcocks. After adding 30 mL of air to each syringe, these were shaken for ~10 min to allow equilibration of the propane gas into the air space. The air space was then collected in pre-evacuated 20-mL glass
vials, which were stored at 4 °C until analysis on a gas chromatograph (ThermoFisher Scientific, San Jose, CA). The reaeration coefficient was calculated using the decline in conductivity-corrected propane concentrations between sampling stations, as described by Jin et al. (2012). Nominal travel time of water was calculated by measuring the time between the peaks of the breakthrough curves at the upstream and downstream stations (Hubbard et al., 1982). Ecosystem respiration (ER) was calculated as the sum of net metabolism rates during the dark period and respiration values during the light period, the latter calculated by linear interpolation between the net metabolism rates at sunrise and sunset of the nights before and after the day of interest. Net ecosystem metabolism (NEM) was calculated as the sum of net metabolism rates during the whole day, and gross primary production (GPP) as the difference between NEM and ER.
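The Python sketch below illustrates the single-station partitioning of a diel DO series into GPP, ER and NEM under the sign convention of Table 3 (ER and NEM negative in heterotrophic segments). It is a simplified reading of the method, not the authors' code: daytime respiration is approximated by the mean dark rate rather than the sunrise–sunset interpolation described above, and all names and values are ours.

```python
import numpy as np

def interval_net_metabolism(do, do_sat, k_day, depth_m, dt_min=10.0):
    """Net metabolism per interval (g O2 m-2): observed DO change
    minus the reaeration flux, scaled by mean depth (open-system
    single-station method, Odum 1956).

    do, do_sat : measured and saturation DO (g O2 m-3)
    k_day      : reaeration coefficient (day-1), e.g. from the
                 propane slug additions, temperature-corrected
    """
    dt_day = dt_min / (60.0 * 24.0)
    reaeration = k_day * (do_sat[:-1] - do[:-1]) * dt_day
    return (np.diff(do) - reaeration) * depth_m

def daily_rates(nmr, dark):
    """ER, NEM and GPP (g O2 m-2 day-1) from interval net rates.
    Daytime respiration is taken as the mean dark rate (a
    simplification of the paper's linear interpolation)."""
    dark = dark[:-1]                      # align with diff() output
    er = nmr[dark].sum() + nmr[dark].mean() * (~dark).sum()
    nem = nmr.sum()
    gpp = nem - er
    return gpp, er, nem
```

In the two-station variant, the change term would instead be the DO difference between the downstream and upstream probes offset by the nominal travel time.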
**Photosynthesis–Irradiance relationships**
To evaluate the possible subsidy or stress effect at the ecosystem level, we analysed the relationship between primary production and irradiance reaching the streambed (P-I). For each river segment, GPP and GLR values from 6 days were fitted to linear and hyperbolic tangent functions by nonlinear regression (STATISTICA, version 8; StatSoft Inc., Tulsa, OK, U.S.A.), the hyperbolic tangent function including or excluding temperature dependence:
\[
GPP = P_{MAX} \cdot \tanh\left(\frac{\alpha \cdot I}{P_{MAX}}\right) \cdot \sigma^{T-20}
\]
where \(P_{MAX}\) is light-saturated photosynthesis, \(\alpha\) is the initial slope of the P-I curve, \(I\) is the GLR reaching the streambed, \(\sigma\) is the temperature dependence coefficient, and \(T\) is temperature. The half-saturation light intensity (\(I_k\)) was calculated as \(P_{MAX}/\alpha\) (Henley, 1993). Selection of the best model (linear or hyperbolic) for each river segment and day was based on the \(r^2\) values of the fitted models.
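A sketch of this fit using scipy's `curve_fit` in place of STATISTICA, with illustrative daily GPP, irradiance and temperature values (not the study's data); bounds keep \(\sigma\) near 1 so the temperature power stays well behaved:

```python
import numpy as np
from scipy.optimize import curve_fit

def pi_model(X, p_max, alpha, sigma):
    """GPP = P_MAX * tanh(alpha * I / P_MAX) * sigma**(T - 20)."""
    I, T = X
    return p_max * np.tanh(alpha * I / p_max) * sigma ** (T - 20.0)

# Illustrative daily values: irradiance (GLR at the streambed),
# water temperature (degC) and GPP (g O2 m-2 day-1)
I = np.array([2.0, 4.6, 7.1, 9.5, 12.0, 14.3])
T = np.array([12.8, 13.2, 13.6, 13.9, 13.5, 13.4])
gpp = np.array([0.4, 0.9, 1.3, 1.6, 1.8, 1.9])

(p_max, alpha, sigma), _ = curve_fit(
    pi_model, (I, T), gpp, p0=[2.0, 0.2, 1.05],
    bounds=([0.1, 1e-4, 0.9], [10.0, 2.0, 1.2]))
i_k = p_max / alpha          # half-saturation light intensity
print(p_max, alpha, sigma, i_k)
```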
**Data analysis**
Loads of transported nutrients and pharmaceutical compounds were calculated by multiplying concentration by discharge, and attenuation per unit distance was calculated from the reduction in concentrations along the studied reach. Normality of all variables was initially checked with the Kolmogorov–Smirnov test, and variables were log-transformed when necessary. Differences in measured variables among sites were analysed by means of generalised least-squares (GLS) models that incorporate spatial structure directly into the model residuals (\(N = 8\) for physical and chemical measurements; \(N = 12–20\) for biofilm measurements; and \(N = 40\) for metabolic measurements). Pearson moment correlation analysis was used with the averaged values of each segment to identify the direction and strength of the relationships between variables (\(N = 4\)), and between variables and distance at the end of the river segments. This last type of correlation was performed in two ways, either including or excluding the CR values. Normality of the model residuals was tested with the Shapiro test. The significance of different compar-
**Table 1** Water physicochemical characteristics for each river segment (mean ± SD)
| | CR | IR1 | IR2 | IR3 |
|------------------------|--------|--------|--------|--------|
| Discharge (m³ s⁻¹) | 0.29 ± 0.03 | 0.50 ± 0.17 | 0.64 ± 0.03* | 0.83 ± 0.24* |
| Velocity (m s⁻¹) | 0.18 ± 0.06 | 0.20 ± 0.08 | 0.38 ± 0.14 | 0.33 ± 0.08 |
| Depth (m) | 0.14 ± 0.02 | 0.15 ± 0.01 | 0.19 ± 0.05 | 0.23 ± 0.01* |
| Width (m) | 11.90 ± 0.85 | 10.25 ± 2.47 | 9.45 ± 1.34 | 10.70 ± 0.42 |
| GLR (MJ m⁻² day⁻¹) | 4.62 ± 0.82 | 9.52 ± 1.69* | 11.93 ± 2.12* | 14.34 ± 2.54* |
| LAI | 2.52 ± 0.83 | 1.76 ± 0.55 | 0.71 ± 0.42* | 0.72 ± 0.16* |
| \(K_{20}\) (day⁻¹) | 32.67 | 28.79 | 29.76 | 34.45 |
| Temperature (°C) | 13.58 ± 1.41 | 13.80 ± 1.10 | 13.49 ± 0.87 | 13.60 ± 0.86 |
| pH | 8.54 ± 0.39 | 8.63 ± 0.01 | 8.55 ± 0.12 | 8.65 ± 0.25 |
| Conductivity (µS cm⁻¹) | 180.90 ± 0.85 | 225.75 ± 13.79* | 214.5 ± 2.12* | 207.75 ± 7.42* |
| Ammonium (mg L⁻¹) | 0.012 ± 0.001 | 1.92 ± 1.03* | 0.90 ± 0.41* | 0.37 ± 0.33* |
| Phosphate (mg L⁻¹) | 0.039 ± 0.001 | 0.292 ± 0.111* | 0.200 ± 0.020* | 0.182 ± 0.004* |
| DOC (mg L⁻¹) | 2.54 ± 0.15 | 3.67 ± 0.41* | 3.14 ± 0.34* | 2.79 ± 0.16 |
| SPOM (mg L⁻¹) | 2.90 ± 0.08 | 4.48 ± 0.51* | 3.04 ± 0.08 | 3.02 ± 0.34 |
GLR, global radiation reaching the streambed; LAI, leaf area index; \(K_{20}\), reaeration coefficient corrected to 20 °C; DOC, dissolved organic carbon; SPOM, suspended particulate organic matter.
*Significant difference (\(P < 0.05\)) in comparison with CR site.
isons was tested by ANOVA. All analyses were considered significant at $P < 0.05$ and were performed with the R software (version 3.1.1; R Development Core Team, Vienna, Austria).
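For concreteness, a small Python sketch of the load and attenuation arithmetic (concentration × discharge, with 1 m³ = 1000 L), checked against the ammonium figures reported in the Results, plus a Pearson correlation on segment means; the ~3-km IR1 to IR3 distance is our reading of the segment layout:

```python
import numpy as np
from scipy.stats import pearsonr

def load_mg_s(conc_mg_l, discharge_m3_s):
    """Transported load (mg s-1) = concentration x discharge."""
    return conc_mg_l * discharge_m3_s * 1000.0

def attenuation_pct_per_km(load_up, load_down, distance_km):
    """Relative load reduction per unit distance (% km-1)."""
    return 100.0 * (load_up - load_down) / load_up / distance_km

# Ammonium at IR1 (Table 1): 1.92 mg L-1 at 0.50 m3 s-1 -> ~960 mg s-1
print(load_mg_s(1.92, 0.50))
print(attenuation_pct_per_km(960.0, 307.0, 3.0))  # IR1 -> IR3

# Direction/strength of a relationship across segment means (N = 4),
# e.g. ER (Table 3) versus ammonium (Table 1)
er = np.array([-3.11, -8.79, -7.46, -6.56])
ammonium = np.array([0.012, 1.92, 0.90, 0.37])
print(pearsonr(er, ammonium))
```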
**Results**
**Environmental measurements**
Discharge and irradiance increased and LAI decreased along the study reaches (Table 1), but water velocity, depth, channel width, water temperature and pH did not change significantly. Conductivity increased 25% from CR to IR1, while ammonium increased 160-fold (0.01–1.9 mg L$^{-1}$) and phosphate 7.5-fold (0.04–0.3 mg L$^{-1}$; Table 1). These three variables decreased further downstream (Table 1). The decrease in ammonium was a result of attenuation processes and not only of dilution or dispersion, as its load increased from 3.48 mg s$^{-1}$ in CR to 960 mg s$^{-1}$ in IR1 and then decreased to 576 and 307 mg s$^{-1}$ in IR2 and IR3, respectively. On the other hand, the WWTP effluent increased the phosphate load from 11.3 mg s$^{-1}$ in CR to 146 mg s$^{-1}$ in IR1; however, it remained steady further downstream (128 and 151 mg s$^{-1}$), indicating no phosphate attenuation along the impact reach.
Carbamazepine (2.49 ng L$^{-1}$), ibuprofen (14.42 ng L$^{-1}$) and sulfamethoxazole (0.95 ng L$^{-1}$) were the only pharmaceuticals found in CR. All pharmaceuticals analysed showed significant increases from CR to IR1 (Fig. 1), as well as a progressive decrease from IR1 to IR3. In fact, ibuprofen and sulfamethoxazole returned to values not significantly different from those in CR. The decrease in diclofenac, ibuprofen, sulfadiazine and venlafaxine concentrations was the result of natural attenuation, as shown by reduced loads along the impact reach. For example, the diclofenac load reduction from IR1 to IR3 was 0.59% km$^{-1}$, whereas the venlafaxine load reduction was 0.41% km$^{-1}$. In contrast, the loads of carbamazepine, diazepam, sulfamethoxazole, sulfapyridine and venlafaxine remained steady, and that of sulfamethazine increased downstream.
Dissolved organic carbon values averaged 2.5 mg L$^{-1}$ in the CR river segment, increased to 3.7 mg L$^{-1}$ at IR1 and decreased to 2.8 mg L$^{-1}$ at IR3. As in the case of phosphate, no clear attenuation could be detected, as the loads transported by the river were 736 mg s$^{-1}$ in CR and increased to 1835 mg s$^{-1}$ in IR1, 2010 mg s$^{-1}$ in IR2 and 2316 mg s$^{-1}$ in IR3. Similarly, SPOM values averaged 2.9 mg L$^{-1}$ in the CR river segment, increased to 4.5 mg L$^{-1}$ at IR1 and decreased to 3.0 mg L$^{-1}$ at IR3, although there were no clear changes in SPOM loads along the impact reach. Both DOC and SPOM concentrations increased significantly from CR to IR1 and then
**Table 2** Benthic organic matter and biofilm characteristics in each river segment (mean ± SD)

| | CR | IR1 | IR2 | IR3 |
|-----------------------------------------------|-----------------|-----------------|-----------------|-----------------|
| BOM (g m$^{-2}$) | 26.95 ± 11.99 | 138.99 ± 202.36 | 68.56 ± 48.51 | 72.79 ± 55.85 |
| Chl-a (µg cm$^{-2}$) | 1.24 ± 0.24 | 4.20 ± 1.89* | 6.16 ± 1.71* | 9.61 ± 5.83* |
| $Y_{\text{max}}$ | 0.65 ± 0.05 | 0.64 ± 0.06 | 0.57 ± 0.12 | 0.57 ± 0.08 |
| $Y_{\text{eff}}$ | 0.62 ± 0.01 | 0.56 ± 0.03 | 0.53 ± 0.11 | 0.53 ± 0.10 |
| PQ | 0.83 ± 0.08 | 0.89 ± 0.05 | 0.87 ± 0.06 | 0.89 ± 0.02 |
| NPQ | 0.13 ± 0.01 | 0.20 ± 0.05 | 0.19 ± 0.09 | 0.15 ± 0.06 |
| ETS (µg cm$^{-2}$ h$^{-1}$) | 22.48 ± 2.61 | 18.95 ± 4.62 | 17.65 ± 4.61 | 18.00 ± 1.77 |
| AP (nmol cm$^{-2}$ h$^{-1}$) | 65.85 ± 10.99 | 51.28 ± 17.49 | 45.83 ± 19.83 | 46.45 ± 16.23 |
| BG (nmol cm$^{-2}$ h$^{-1}$) | 59.88 ± 6.27 | 50.31 ± 20.82 | 116.76 ± 62.58 | 48.83 ± 32.11 |
| LAP (nmol cm$^{-2}$ h$^{-1}$) | 66.00 ± 19.34 | 106.92 ± 10.77* | 87.73 ± 10.83 | 84.25 ± 11.35 |
BOM, benthic organic matter; Chl-a, chlorophyll-a; $Y_{\text{max}}$, maximum photosynthetic capacity; $Y_{\text{eff}}$, effective quantum yield; PQ, photochemical quenching; NPQ, non-photochemical quenching; ETS, electron transport system; AP, alkaline phosphatase; BG, β-glucosidase; LAP, leucine aminopeptidase.
*Significant difference ($P < 0.05$) in comparison with CR site.
decreased linearly with distance from the WWTP ($R^2 > 0.51$, $P < 0.05$), until they approached pre-disturbance values (Table 1).
**Benthic organic matter and biofilm characteristics**
Benthic organic matter and Chl-$a$ concentration showed contrasting responses to the WWTP effluent. BOM values averaged 26.9 g AFDM m$^{-2}$ at the CR river segment, 139.0 g AFDM m$^{-2}$ at IR1, 68.6 g AFDM m$^{-2}$ at IR2 and 72.8 g AFDM m$^{-2}$ at IR3 (Table 2), but values were not statistically significantly different from those at CR. Chl-$a$ values in the CR segment averaged 1.2 $\mu$g cm$^{-2}$ and showed a progressive increase downstream up to 9.6 $\mu$g cm$^{-2}$ at IR3 (linear regression with distance, $R^2 = 0.62$, $P < 0.0001$). BOM was positively correlated with conductivity, ammonium and phosphate, and Chl-$a$ with discharge and GLR ($R^2 > 0.90$, $P < 0.05$).
$Y_{\text{max}}$ and $Y_{\text{eff}}$ averaged 0.6 in CR and did not change downstream (Table 2). PQ showed high values (>0.8) in all segments with no significant changes, while NPQ increased c. 50% from CR to IR1, with a subsequent decrease until IR3. The ETS showed almost no spatial changes, with values around 20 $\mu$g cm$^{-2}$ h$^{-1}$ in all river segments. AP activity averaged 65.8 nmol MUF cm$^{-2}$ h$^{-1}$ in CR and decreased in the impact reach, from 51.3 nmol MUF cm$^{-2}$ h$^{-1}$ at IR1 to 46.5 nmol MUF cm$^{-2}$ h$^{-1}$ at IR3. BG activity averaged 59.9 nmol MUF cm$^{-2}$ h$^{-1}$ in CR and reached 116.8 nmol cm$^{-2}$ h$^{-1}$ in IR2. Finally, LAP activity averaged 66.0 nmol cm$^{-2}$ h$^{-1}$ in CR, increased significantly to 106.9 nmol cm$^{-2}$ h$^{-1}$ at IR1 and decreased downstream, reaching 84.3 nmol cm$^{-2}$ h$^{-1}$ at IR3. NPQ was positively correlated with conductivity and ammonium, whereas LAP was positively correlated with conductivity, ammonium, DOC and BOM ($R^2 > 0.75$, $P < 0.05$).
**River ecosystem metabolism**
Ecosystem metabolism followed contrasting longitudinal patterns. There was an almost threefold increase in ER

**Fig. 2** Daily metabolic rates (mean ± SD) in each river segment. Positive values represent gross primary production (GPP) and negative values ecosystem respiration (ER). The * symbol indicates significant difference ($P < 0.05$) in comparison with CR site.

**Fig. 3** Daily gross primary production (GPP) in relation to the received total GLR. Values from 10 days are shown for each river segment.
from CR to IR1 (from 3.1 to 8.8 g O$_2$ m$^{-2}$ day$^{-1}$; Fig. 2, Table 3) and a decrease along the impact reach down to 6.6 g O$_2$ m$^{-2}$ day$^{-1}$ at IR3, a value still two times higher than the control. Overall, ER was significantly higher in the impact reach than in the CR, and the decrease down-
**Table 3** Metabolism parameters (mean ± SD) for each river segment
| | CR | IR1 | IR2 | IR3 |
|------------------|--------|--------|--------|--------|
| GPP (g O$_2$ m$^{-2}$ day$^{-1}$) | 0.54 ± 0.15 | 0.70 ± 0.25 | 1.24 ± 0.38* | 2.30 ± 0.64* |
| ER (g O$_2$ m$^{-2}$ day$^{-1}$) | −3.11 ± 0.16 | −8.79 ± 1.11* | −7.46 ± 1.85* | −6.56 ± 0.98* |
| NEM (g O$_2$ m$^{-2}$ day$^{-1}$) | −2.57 ± 0.23 | −8.09 ± 0.87* | −6.22 ± 1.87* | −4.6 ± 1.41* |
| P/R | 0.17 ± 0.05 | 0.08 ± 0.02* | 0.18 ± 0.06 | 0.36 ± 0.14* |
GPP, gross primary production; ER, ecosystem respiration; NEM, net ecosystem metabolism; P/R, production to respiration ratio.
*Significant difference ($P < 0.05$) in comparison with CR site.
stream of the WWTP was also significant (linear regression with distance, $R^2 = 0.29$, $P = 0.002$). ER was not correlated with DOC or SPOM, but it was with ammonium ($R^2 = 0.99$, $P = 0.001$), phosphate ($R^2 = 0.98$, $P = 0.003$), pharmaceuticals ($R^2 = 0.99$, $P = 0.002$) and BOM ($R^2 = 0.91$, $P = 0.043$). GPP averaged 0.5 g O$_2$ m$^{-2}$ day$^{-1}$ in CR (Table 3) and did not differ between CR and IR1, but then increased significantly to 1.24 g O$_2$ m$^{-2}$ day$^{-1}$ in IR2 and 2.30 g O$_2$ m$^{-2}$ day$^{-1}$ in IR3 (Fig. 2), following the increase in light availability ($R^2 = 0.51$, $P < 0.0001$) (Fig. 3). All river segments were heterotrophic, with NEM values averaging $-2.6$ g O$_2$ m$^{-2}$ day$^{-1}$ in CR, becoming more negative ($-8.1$ g O$_2$ m$^{-2}$ day$^{-1}$) in IR1 and then recovering downstream to $-4.6$ g O$_2$ m$^{-2}$ day$^{-1}$ in IR3. NEM was significantly more negative in all impact segments than in CR. The P/R ratio averaged 0.17 in CR, decreased significantly to 0.08 in IR1, returned to 0.18 in IR2 and finally increased significantly to 0.36 in IR3. NEM was positively correlated with ammonium ($R^2 = 0.94$, $P = 0.032$) and DOC ($R^2 = 0.94$, $P = 0.032$), whereas P/R showed no significant correlation with any measured variable. No significant correlations were found between measurements at the biofilm and ecosystem levels.
**Photosynthesis–Irradiance relationship**
P-I relationships were strongly affected by the discharge of the WWTP effluent (Fig. 4). The initial slope was lowest at IR1, returned to values similar to those in CR by IR2, and was even higher by IR3 (Table 4). The shape of the P-I curves also changed, following a linear equation at IR1, whereas the hyperbolic equation offered a better fit in the rest of the segments (Table 4). $I_K$ increased in the impact reach, but the difference was statistically significant only in IR3. The hyperbolic equations showing the best fit to the data of CR, IR2 and IR3 included temperature as an explanatory variable, which improved the fit to data showing hysteresis; thus, for the same light availability, GPP was lower during the morning than during the afternoon.
**Discussion**
The discharge of the WWTP effluent caused a large increase in the concentration of all measured contaminants: nutrients, dissolved and suspended organic matter, and pharmaceutical products. The contaminants below the effluent did not produce evident signs of eutrophication such as anoxia or algal blooms, common in highly polluted rivers (Smith, 2003; Brack et al., 2007). Nevertheless, the ammonium concentration in IR1 was

**Table 4** Production–Irradiance (P-I) relationships and calculated parameters (mean ± SD) for each river segment

| | Selected model | $r^2$ | Initial slope | Light saturation ($I_K$) (W m$^{-2}$) |
|-----|--------------------------|-------------|----------------|--------------------------------------|
| CR | Hyperbolic + Temperature | 0.85 ± 0.15 | $5.72 \times 10^{-5} \pm 4.08 \times 10^{-5}$ | 113.39 ± 62.09 |
| IR1 | Linear | 0.69 ± 0.10 | $5.17 \times 10^{-6} \pm 7.53 \times 10^{-7}$* | – |
| IR2 | Hyperbolic + Temperature | 0.60 ± 0.25 | $5.70 \times 10^{-5} \pm 4.87 \times 10^{-5}$ | 260.30 ± 211.89 |
| IR3 | Hyperbolic + Temperature | 0.82 ± 0.10 | $6.25 \times 10^{-5} \pm 4.74 \times 10^{-5}$ | 245.78 ± 65.83* |
*Significant difference ($P < 0.05$) in comparison with CR site.
high enough to cause potential toxic effects on stream invertebrates and to impair litter decomposition rates (Baldy et al., 2002; Maltby et al., 2002). On the other hand, the concentration of pharmaceutical compounds such as diclofenac was similar to levels commonly found downstream of WWTP effluent discharges, which may approach 100 ng L$^{-1}$ (Vieno & Sillanpää, 2014). The lowest concentrations of diclofenac producing toxic effects seem to range between 10 and 1000 ng L$^{-1}$, depending on the species, exposure duration and endpoints used (Vieno & Sillanpää, 2014). As the concentrations observed in our study near the WWTP effluent discharge (50 ng L$^{-1}$) are within this range, we could expect some toxic effects. Furthermore, toxic effects have been reported in Mediterranean rivers at concentrations just four times higher (220 ng L$^{-1}$ for diclofenac on average) than those measured in this study, resulting in changes in algal and macroinvertebrate communities (Muñoz et al., 2009; Ginebreda et al., 2010). Finally, similar effects on NPQ from pharmaceuticals have been reported in Mediterranean basins (Ponsatí et al., In revision), with diclofenac values ranging from 1 to 61 ng L$^{-1}$.
The concentration of both assimilable and toxic contaminants decreased downstream of the WWTP effluent discharge. The decrease in ammonium concentration was a consequence of attenuation, not simple dilution, as shown by reduced loads. Ammonium is a highly reactive nutrient that is readily nitrified or taken up by the biota (Martí et al., 2004), and thus often shows downstream attenuation (von Schiller et al., 2008). In contrast, attenuation of phosphate and organic matter (both dissolved and suspended) was less intense. The rate at which different nutrients are retained seems to be highly variable and depends, among other factors, on which nutrient is limiting in each system (Newbold et al., 1982). For instance, Elósegui et al. (1995) showed the loads of phosphate and ammonium to decrease at similar rates below a point input of raw sewage, whereas Merseburger et al. (2005) reported a greater decrease in ammonium than in phosphate concentration downstream of a WWTP effluent. Pharmaceutical compounds showed contrasting trends: attenuation was observed for diclofenac, ibuprofen, sulfadiazine and verapamil, but not for carbamazepine, diazepam, sulfamethoxazole, sulfapyridine, venlafaxine and sulfamethazine. The observed attenuations in terms of load reduction were similar to those reported at the same site (Acuña et al., 2015) and in other systems (Writer et al., 2012). Mean relative attenuation was 61 ± 10% for ibuprofen and 12 ± 26% for diclofenac (Corominas et al., In revision).
The differences in biofilm variables between the study reaches suggested that the WWTP effluent was acting more as a subsidy than as a stressor. In general, toxicants and other stressors reduce biofilm biomass and photosynthetic efficiency (Tlili & Montuelle, 2011; Corcoll et al., 2015). Nevertheless, patterns are often complicated by nonlinear responses such as hormesis (Calabrese, 2005), reduced sensitivity to toxicants under enhanced nutrient concentrations (Guasch et al., 2004), adaptation of communities to past toxicity (Pesce et al., 2011) or interactions between light history and sensitivity to toxicity (Bonnineau et al., 2012). In our case, Chl-a concentrations were largely unaffected by the WWTP effluent and showed instead a progressive downstream increase, most likely caused by the higher light availability as a consequence of reduced shading (Roberts, Sabater & Beardall, 2004). BOM, on the other hand, showed a fivefold increase after the WWTP effluent input, followed by a reduction downstream to values three times higher than the control in IR3. Photosynthetic efficiency and enzyme activities also showed little effect of the WWTP. A clear exception was NPQ, which was 54% higher at IR1 than at CR. NPQ has been reported to increase as a response to toxicity in order to protect the photosynthetic apparatus from excess light that cannot be used for photosynthesis (Juneau et al., 2001; Geoffroy et al., 2003). Similarly, LAP activity increased below the discharge of the WWTP effluent and decreased further downstream, closely matching the pollution pattern, probably as a result of the higher abundance of organic nitrogen along this gradient (Proia et al., 2013). Overall, WWTP effluents seem to have promoted the biological activity of the biofilm rather than reducing it.
At the ecosystem level, respiration was also subsidised, following a pattern similar to that of organic matter. Although the low number of river segments analysed limits the statistical power of correlation analyses, ER was mostly related to BOM, indicating the likely coupling between the two variables along the river, as has been described elsewhere (e.g. Young & Huryn, 1999; Acuña et al., 2004). ER has been directly related to anthropogenic inputs of nutrients and organic matter (Yates et al., 2013; Silva-Junior et al., 2014), thereby overriding the negative effects of toxic contaminants such as pharmaceuticals (e.g. Rosi-Marshall et al., 2013). GPP was also affected by the WWTP effluent, but showed a steady increase further downstream, which suggests that light was the primary driver of this variable in the studied river. Although GPP has often been linked to nutrient status (e.g. Gücker et al., 2006), this relationship only holds when irradiance is not limiting (Artigas et al., 2013). Nevertheless, just below the WWTP effluent (IR1), GPP was depressed with respect to the values expected according to the available irradiance, as shown by the slope and shape of the P-I curves, therefore suggesting a stress effect. As a result of the relative suppression of GPP and the enhancement of ER, there was also a strong decrease in NEM below the WWTP effluent, which recovered downstream because of the reduction of the relative suppression of GPP by toxic pollutants, the increase in light availability and the decrease of ER along the river segment. Overall, stress effects were observed only for autotrophic processes, at both the ecosystem and biofilm scales, but only one of the measured biofilm metrics (NPQ) actually reflected them. This lack of coherence among biofilm metrics of autotrophic processes might be caused by acquired tolerances of the autotrophic community, as reported by Corcoll et al. (2014) in reaches IR1 and IR2. In regard to heterotrophic processes, subsidy effects were observed at both the biofilm and ecosystem scales.
In conclusion, we found ample evidence of WWTP effluents acting as a subsidy, but more limited evidence of them acting as a stressor. Measurements at the biofilm and at the ecosystem level are complementary and mainly differ in their response to subsidy and stress. Most biofilm variables suggested the WWTP effluents acted as a subsidy, whereas at the ecosystem level ER was subsidised, but GPP showed some stress effects as it became partially decoupled from the available light. The complementary response detected at the biofilm and the ecosystem scales stresses the need to study both in order to fully understand the impact of WWTP effluents on river ecosystems.
**Acknowledgments**
This research was supported by the Spanish Ministry of Economy and Competitiveness and FEDER funds through the SCARCE Consolider-Ingenio CSD2009-00065 and ABSTRACT CGL2012-35848 projects, and by the European Communities 7th Framework Programme under Grant agreement no. 603629-ENV-2013-6.2.1-Globaqua. We especially thank the people who assisted in the field and in the laboratory. A thorough review of the manuscript by Prof. Roger I. Jones and two anonymous reviewers is also deeply appreciated. We also acknowledge financial support in the form of predoctoral grants from the University of the Basque Country (I. Aristi) and the Basque Government (M. Arroita), as well as a postdoctoral grant ‘Juan de la Cierva’ (jci-2010-06397) (D. von Schiller) and a Marie Curie European Reintegration Grant (PERG07-GA-2010-259219) (V. Acuña). This work was partly supported by the Basque Government (Consolidated Research Group: Stream Ecology 7-CA-18/10).
**References**
Acuña V., Giorgi A., Muñoz I., Uehlinger U. & Sabater S. (2004) Flow extremes and benthic organic matter shape the metabolism of a headwater Mediterranean stream. Freshwater Biology, 49, 960–971.
Acuña V., von Schiller D., García-Galan M.J., Rodriguez-Mozaz S., Corominas L., Petrovic M. et al. (2015) Occurrence and in-stream attenuation of wastewater-derived pharmaceuticals in Iberian rivers. Science of the Total Environment, 503–504, 133–141.
Alexander A.C., Luis A.T., Culp J.M., Baird D.J. & Cessna A.J. (2013) Can nutrients mask community responses to insecticide mixtures? Ecotoxicology, 22, 1085–1100.
Aristi I., Arroita M., Larrañaga A., Ponsatí L., Sabater S., von Schiller D. et al. (2014) Flow regulation by dams affects ecosystem metabolism in Mediterranean rivers. Freshwater Biology, 59, 1816–1829.
Artigas J., García-Berthou E., Bauer D.E., Castro M.I., Cochero J., Colautti D.C. et al. (2013) Global pressures, specific responses: effects of nutrient enrichment in streams from different biomes. Environmental Research Letters, 8, 1–13.
Baldy V., Chauvet E., Charcosset J.Y. & Gessner M.O. (2002) Microbial dynamics associated with leaves decomposing in the mainstream and floodplain pond of a large river. Aquatic Microbial Ecology, 28, 25–36.
Bernhardt E.S. & Palmer M.A. (2007) Restoring streams in an urbanized world. *Freshwater Biology*, **52**, 738–751.
Bernot M.J., Sobota D.J., Hall R.O., Mulholland P.J., Dodds W.K., Webster J.R. *et al.* (2010) Inter-regional comparison of land-use effects on stream metabolism. *Freshwater Biology*, **55**, 1874–1890.
Blenkinsopp S.A. & Lock M.A. (1990) The measurement of electron transport system activity in river biofilms. *Water Research*, **24**, 441–445.
Bonnineau C., Gallardo Sague I., Urrea G. & Guasch H. (2012) Light history modulates antioxidant and photosynthetic responses of biofilms to both natural (light) and chemical (herbicides) stressors. *Ecotoxicology*, **21**, 1208–1224.
Brack W., Klamer H.J.C., López de Alda M. & Barceló D. (2007) Effect-directed analysis of key toxicants in European river basins. A review. *Environmental Science and Pollution Research*, **14**, 30–38.
Bundschuh M., Hahn T., Gessner M.O. & Schulz R. (2009) Antibiotics as a chemical stressor affecting an aquatic decomposer-detritivore system. *Environmental Toxicology and Chemistry/SETAC*, **28**, 197–203.
Cabrini R., Canobbio S., Sartori L., Fornaroli R. & Mezzanotte V. (2013) Leaf packs in impaired streams: the influence of leaf type and environmental gradients on breakdown rate and invertebrate assemblage composition. *Water, Air, & Soil Pollution*, **224**, 1967–1979.
Calabrese E.J. (2005) Paradigm lost, paradigm found: the re-emergence of hormesis as a fundamental dose response model in the toxicological sciences. *Environmental Pollution*, **138**, 378–411.
Cardinale B.J., Bier R. & Kwan C. (2012) Effects of TiO2 nanoparticles on the growth and metabolism of three species of freshwater algae. *Journal of Nanoparticle Research*, **14**, 913.
de Castro-Catala N., Muñoz I., Armendariz L., Campos B., Barcelo D., Lopez-Doval J. *et al.* (2014) Invertebrate community responses to emerging water pollutants in Iberian river basins. *The Science of the Total Environment*, **503–504**, 142–150.
Clapcott J., Young R., Goodwin E., Leathwick J. & Kelly D. (2011) *Relationships between Multiple Land-Use Pressures and Individual and Combined Indicators of Stream Ecological Integrity*. NZ Department of Conservation, DOC Research and Development Series 365. Wellington, New Zealand.
Clements W.H., Cadmus P. & Brinkman S.F. (2013) Responses of aquatic insects to Cu and Zn in stream microcosms: understanding differences between single species tests and field responses. *Environmental Science & Technology*, **47**, 7506–7513.
Cleuvers M. (2003) Aquatic ecotoxicity of pharmaceuticals including the assessment of combination effects. *Toxicology Letters*, **142**, 185–194.
Corcoll N., Acuna V., Barcelo D., Casellas M., Guasch H., Huerta B. *et al.* (2014) Pollution-induced community tolerance to non-steroidal anti-inflammatory drugs (NSAIDs) in fluvial biofilm communities affected by WWTP effluents. *Chemosphere*, **112**, 185–193.
Corcoll N., Bonet B., Leira M. & Guasch H. (2011) Chl-a fluorescence parameters as biomarkers of metal toxicity in fluvial biofilms: an experimental study. *Hydrobiologia*, **673**, 119–136.
Corcoll N., Casellas M., Huerta B., Guasch H., Acuña V., Rodríguez-Mozaz S. *et al.* (2015) Effects of flow intermittency and pharmaceutical exposure on the structure and metabolism of stream biofilms. *Science of the Total Environment*, **503–504**, 159–170.
Elósegui A., Arana X., Basaguren A. & Pozo J. (1995) Self-purification processes in a medium-sized stream. *Environmental Management*, **19**, 931–939.
Genty B., Briantais J.M. & Baker N.R. (1989) The relationship between the quantum yield of photosynthetic electron transport and quenching of chlorophyll fluorescence. *Biochimica et Biophysica Acta (BBA)-General Subjects*, **990**, 87–92.
Geoffroy L., Dewez D., Vernet G. & Popovic R. (2003) Oxyfluorfen toxic effect on *S. obliquus* evaluated by different photosynthetic and enzymatic biomarkers. *Archives of Environmental Contamination and Toxicology*, **45**, 445–452.
Ginebreda A., Muñoz I., López de Alda M., Brix R., López-Doval J. & Barceló D. (2010) Environmental risk assessment of pharmaceuticals in rivers: relationships between hazard indexes and aquatic macroinvertebrate diversity indexes in the Llobregat River (NE Spain). *Environment International*, **36**, 153–162.
Gore J.A. & Hamilton S.W. (1996) Comparison of flow-related habitat evaluations downstream of low-head weirs on small and large fluvial ecosystems. *Regulated Rivers: Research & Management*, **12**, 459–469.
Grant S.B., Saphores J.D., Feldman D.L., Hamilton A.J., Fletcher T.D., Cook P.L.M. *et al.* (2012) Taking the “waste” out of “wastewater” for human water security and ecosystem sustainability. *Science*, **337**, 681–686.
Gros M., Petrović M. & Barceló D. (2007) Wastewater treatment plants as a pathway for aquatic contamination by pharmaceuticals in the Ebro river basin (northeast Spain). *Environmental Toxicology and Chemistry/SETAC*, **26**, 1553–1562.
Guasch H., Navarro E., Serra A. & Sabater S. (2004) Phosphate limitation influences the sensitivity to copper in periphytic algae. *Freshwater Biology*, **49**, 463–473.
Gücker B., Brauns M. & Pusch M.T. (2006) Effects of wastewater treatment plant discharge on ecosystem structure and function of lowland streams. *Journal of the North American Benthological Society*, **25**, 313–329.
Hart D.D. & Robinson C.T. (1990) Resource limitation in a stream community: phosphorus enrichment effects on periphyton and grazers. *Ecology*, **71**, 1494–1502.
Henley W.J. (1993) Measurement and interpretation of photosynthetic light-response curves in algae in the context
of photoinhibition and diel changes. *Journal of Phycology*, 29, 729–739.
Hernando M.D., Mezcua M., Fernández-Alba A.R. & Barceló D. (2006) Environmental risk assessment of pharmaceutical residues in wastewater effluents, surface waters and sediments. *Talanta*, 69, 334–342.
Hubbard E.F., Kilpatrick F.A., Martens L.A. & Wilson J.F. Jr (1982) Measurement of time of travel and dispersion in streams by dye tracing. US Geological Survey Techniques of Water-Resources Investigations, book 3 (Applications of Hydraulics), chap. A9.
Izagirre O., Agirre U., Bermejo M., Pozo J. & Elosegi A. (2008) Environmental controls of whole-stream metabolism identified from continuous monitoring of Basque streams. *Journal of the North American Benthological Society*, 27, 252–268.
Jeffrey S.W. & Humphrey G.F. (1975) New spectrophotometric equations for determining chlorophylls a, b, c1 and c2 in higher plants, algae and natural phytoplankton. *Biochemie und Physiologie der Pflanzen*, 167, 191–194.
Jin H.S., White D.S., Ramsey J.B. & Kipphut G.W. (2012) Mixed tracer injection method to measure reaeration coefficients in small streams. *Water, Air, & Soil Pollution*, 223, 5297–5306.
Juneau P., Dewez D., Matsui S., Kim S.G. & Popovic R. (2001) Evaluation of different algal species sensitivity to mercury and metolachlor by PAM-fluorometry. *Chemosphere*, 45, 589–598.
Kolpin D.W., Skopec M., Meyer M.T., Furlong E.T. & Zaugg S.D. (2004) Urban contribution of pharmaceuticals and other organic wastewater contaminants to streams during differing flow conditions. *Science of the Total Environment*, 328, 119–130.
Kuster M., López de Alda M.J., Hernando M.D., Petrovic M., Martín-Alonso J. & Barceló D. (2008) Analysis and occurrence of pharmaceuticals, estrogens, progestogens and polar pesticides in sewage treatment plant effluents, river water and drinking water in the Llobregat river basin (Barcelona, Spain). *Journal of Hydrology*, 358, 112–123.
Maltby L., Clayton S.A., Wood R.M. & McLoughlin N. (2002) Evolution of the *Gammarus pulex* in situ feeding assay as a biomonitor of water quality: robustness, responsiveness, and relevance. *Environmental Toxicology and Chemistry*, 21, 361–368.
Martí E., Aumatell J., Godé L., Poch M. & Sabater F. (2004) Nutrient retention efficiency in streams receiving inputs from wastewater treatment plants. *Journal of Environmental Quality*, 33, 285–293.
Merseburger G., Martí E. & Sabater F. (2005) Net changes in nutrient concentrations below a point source input in two streams draining catchments with contrasting land uses. *Science of the Total Environment*, 347, 217–229.
Merseburger G., Martí E., Sabater F. & Ortiz J.D. (2009) *Effects of Agricultural Runoff Versus Point Sources on the Biogeochemical Processes of Receiving Stream Ecosystems*. Agricultural Runoff, Coastal Engineering and Flooding.
Moreirinha C., Duarte S., Pascoal C. & Cassio F. (2011) Effects of cadmium and phenanthrene mixtures on aquatic fungi and microbially mediated leaf litter decomposition. *Archives of Environmental Contamination and Toxicology*, 61, 211–219.
Muñoz I., López-Doval J.C., Ricart M., Villagrasa M., Brix R., Geiszinger A. et al. (2009) Bridging levels of pharmaceuticals in river water with biological community structure in the Llobregat River basin (northeast Spain). *Environmental Toxicology and Chemistry/SETAC*, 28, 2706–2714.
Newbold J.D., O’Neill R.V., Elwood J.W. & Van Winkle W. (1982) Nutrient spiralling in streams: implications for nutrient limitation and invertebrate activity. *The American Naturalist*, 120, 628–652.
Odum E.P., Finn J.T. & Franz E.H. (1979) Perturbation theory and the subsidy-stress gradient. *BioScience*, 29, 349–352.
Odum H.T. (1956) Primary productivity in flowing waters. *Limnology and Oceanography*, 1, 102–117.
Pesce S., Morin S., Lissalde S., Montuelle B. & Mazzella N. (2011) Combining polar organic chemical integrative samplers (POCIS) with toxicity testing to evaluate pesticide mixture effects on natural phototrophic biofilms. *Environmental Pollution*, 159, 735–741.
Petrovic M., Sole M., Lopez de Alda M.J. & Barceló D. (2002) Endocrine disruptors in sewage treatment plants, receiving waters, and sediments: integration of chemical analysis and biological effects on feral carp. *Environmental Toxicology*, 21, 2146–2156.
Ponsatí L., Acuña V., Aristi I., Arroita M., García-Berthou E., von Schiller D. et al. (2014) Biofilm responses to flow regulation by dams in Mediterranean rivers. *River Research and Applications*, doi:10.1002/rra.2807.
Proia L., Osorio V., Soley S., Köck-Schulmeyer M., Pérez S., Barceló D. et al. (2013) Effects of pesticides and pharmaceuticals on biofilms in a highly impacted river. *Environmental Pollution*, 178, 220–228.
Reichert P., Uehlinger U. & Acuña V. (2009) Estimating stream metabolism from oxygen concentrations: effect of spatial heterogeneity. *Journal of Geophysical Research*, 114, G03016.
Ribot M., Martí E., von Schiller D., Sabater F., Daims H. & Battin T.J. (2012) Nitrogen processing and the role of epilithic biofilms downstream of a wastewater treatment plant. *Freshwater Science*, 31, 1057–1069.
Roberts S., Sabater S. & Beardall J. (2004) Benthic microbial colonization in streams of differing riparian cover and light availability. *Journal of Phycology*, 40, 1004–1012.
Rodriguez-Mozaz S., Ricart M., Köck-Schulmeyer M., Guasch H., Bonnineau C., Proia L. et al. (2015) Pharmaceuticals and pesticides in reclaimed water: efficiency
assessment of a microfiltration–reverse osmosis (MF–RO) pilot plant. *Journal of Hazardous Materials*, **282**, 165–173.
Romani A.M. & Sabater S. (1999) Epilithic ectoenzyme activity in a nutrient-rich Mediterranean river. *Aquatic Sciences*, **61**, 1–11.
Rosi-Marshall E.J., Kincaid D.W., Bechtold H.A., Royer T.V., Rojas M. & Kelly J.J. (2013) Pharmaceuticals suppress algal growth and microbial respiration and alter bacterial communities in stream biofilms. *Ecological Applications*, **23**, 583–593.
vonSchiller D., Martí E., Riera J.L., Ribot M., Marks J.C. & Sabater F. (2008) Influence of land use on stream ecosystem function in a Mediterranean catchment. *Freshwater Biology*, **53**, 2600–2612.
Segner H., Schmitt-Jansen M. & Sabater S. (2014) Assessing the impact of multiple stressors on aquatic biota: the receptor’s side matters. *Environmental Science and Technology*, **48**, 7690–7696.
Serrano A. (2007) Plan Nacional de Calidad de las Aguas 2007-2015. *Ambienta*, **69**, 6–13.
Silva-Junior E.F., Moulton T.P., Boéchat I.G. & Gücker B. (2014) Leaf decomposition and ecosystem metabolism as functional indicators of land use impacts on tropical streams. *Ecological Indicators*, **36**, 195–204.
Smith V.H. (2003) Eutrophication of freshwater and coastal marine ecosystems a global problem. *Environmental Science and Pollution Research*, **10**, 126–139.
Stelzer R.S., Heffernan J. & Likens G.E. (2003) The influence of dissolved nutrients and particulate organic matter quality on microbial respiration and biomass in a forest stream. *Freshwater Biology*, **48**, 1925–1937.
Sutton M.A., Oenema O., Erisman J.W., Leip A., van Grinsven H. & Winiwarter W. (2011) Too much of a good thing. *Nature*, **472**, 159–161.
Ternes T.A. (1998) Occurrence of drugs in German sewage treatment plants and rivers. *Water Research*, **32**, 3245–3260.
Tlili A. & Montuelle B. (2011) Microbial pollution-induced community tolerance. In: *Tolerance to Environmental Contaminants* (Eds C. Amiard-Trinquet, P.S. Rainbow & M. Roméo), (Taylor and Francis group), New York (2011), pp. 85–108. CRC Press.
United Nations Population Division (2006) *World Urbanization Prospects: The 2005 Revision*. United Nations, New York.
Vieno N. & Sillanpää M. (2014) Fate of diclofenac in municipal wastewater treatment plants – a review. *Environment International*, **69**, 28–39.
Wilson B.A., Smith V.H., de Noyelles F. & Larive C.K. (2003) Effects of three pharmaceutical and personal care products on natural freshwater algal assemblages. *Environmental Science & Technology*, **37**, 1713–1719.
Woodcock T.S. & Huryn A.D. (2005) Leaf litter processing and invertebrate assemblages along a pollution gradient in a Maine (USA) headwater stream. *Environmental Pollution*, **134**, 363–375.
Woodward G., Gessner M.O., Giller P.S., Gulis V., Hladyz S., Lecerf A. et al. (2012) Continental-scale effects of nutrient pollution on stream ecosystem functioning. *Science*, **336**, 1438–1440.
Writer J.H., Ryan J.N., Keefe S.H. & Barber L.B. (2012) Fate of 4-nonylphenol and 17β-estradiol in the Redwood River of Minnesota. *Environmental Science & Technology*, **46**, 860–868.
Yates A.G., Brua R.B., Culp J.M. & Chambers P.A. (2013) Multi-scaled drivers of rural prairie stream metabolism along human activity gradients. *Freshwater Biology*, **58**, 675–689.
Young R.G. & Huryn A.D. (1999) Effects of land use on stream metabolism and organic matter turnover. *Ecological Applications*, **9**, 1359–1376.
*(Manuscript accepted 10 March 2015)*
|
Chapter 8
Research in Auditory Training
Peter J. Blamey and Joseph I. Alcántara
University of Melbourne
Abstract
The Nature of Auditory Training
- Definitions of Auditory Training
- Aims of Auditory Training
- The Need for Auditory Training
- Potential Recipients of Auditory Training
- Skills and Knowledge Required for Auditory Speech Perception
- Analogies with Other Types of Training
- Sports Training
- Arithmetic Teaching
- Musical Training
A Taxonomy of Current Auditory Training Practices
- Analytic Training
- Synthetic Training
- Pragmatic Training
- Eclectic Training
Issues in Auditory Training Research
- Effectiveness of Training
- Assessment
- Generalization
- Retention
- Comparison of Training Methods
- Cost-Effectiveness
- Modality of Training
Methodology for Auditory Training Research
- Mathematical Model
- Illustration of the Mathematical Model
- Controls
- Evaluation
- Group Designs
- Single-Subject Designs
Factors Influencing Effectiveness
- Hearing Levels
- Device Used
- Auditory Experience
- Environment
- Personality
Three Studies of Analytic and Synthetic Training
Summary
Speech perception and communication can improve as a result of experience, and auditory training is one way of providing experiences that may be beneficial. One of the most important factors influencing the effectiveness of auditory training is the amount of experience the client already has. Other factors include the severity of the hearing loss, the sensory device used, the environment, personal qualities of the client and clinician, the type of training, and the type of evaluation used. Despite a long history of clinical practice, the effects of these factors have been investigated in few controlled studies. Even in special cases where training has an obvious role, such as adults using cochlear implants, there has been little objective comparison of alternative training methods. One reason for this is the difficulty of carrying out definitive experiments that measure changes in performance over time in the presence of many confounding variables. These variables may also help to explain the apparently contradictory results that can be found in the literature on auditory training and in the diverse points of view expressed by practicing clinicians. Issues and methods appropriate for research in auditory training among adult clients are discussed with reference to the needs of modern clinical practice.
**THE NATURE OF AUDITORY TRAINING**
The purpose of this chapter is not to provide an auditory training manual, but to introduce the basic ideas behind training techniques and indicate how the effectiveness of auditory training might be improved by research. In keeping with the general aims of this monograph, the discussion relates mainly to the auditory training of adults with adventitious hearing loss. Occasional reference is made to tactile training to illustrate research methods because the authors' experiences of training studies have been gained primarily with a tactile speech processor used by children and adults.
**Definitions of Auditory Training**
Before the existence of wearable electronic hearing aids, the two terms "aural rehabilitation" and "auditory training" were almost synonymous because the additional procedures now included in audiological rehabilitation were available only in a rudimentary form, with the exception of lipreading training. The first electronic hearing aids were called "auditory trainers" because they were too bulky to be carried around as an aid to everyday communication. Hearing aids
are no longer used only for training, so it is no longer appropriate for the measurement of hearing and the fitting of hearing aids to be considered part of auditory training. In a modern model of audiological rehabilitation (e.g., see Alpiner & McCarthy, 1987, p. 10), auditory training is a relatively small component of the global rehabilitation process. Within this modern framework, it is possible to question the role of auditory training: Is it effective? Is it necessary? But before these questions can be tackled, we must determine what remains within the scope of auditory training. Various definitions of auditory training have appeared in the literature:
A development and/or improvement of the ability to discriminate various properties of speech and non-speech signals, such as loudness, pitch, and rhythm. (Goldstein, 1939)
Teaching the child or adult who is hard-of-hearing to take full advantage of sound cues. (Carhart, 1960, p. 373)
A systematic procedure designed to increase the amount of information that a person’s hearing contributes to his total perception. (Sanders, 1971, p. 205)
Creation of special auditory communication conditions in which teachers and audiologists help hearing-impaired children acquire many of the auditory speech perception abilities that normally hearing children acquire naturally without their intervention. (Erber, 1982)
Goldstein chose a narrow definition emphasizing discrimination without requiring sounds to be used in a meaningful way. Carhart emphasized the use of teaching to increase the information received. Sanders’ definition is similar to Carhart’s, but broader because of the use of the word “procedure” rather than “teaching.” The fitting of a hearing aid is a systematic procedure that would fall within Sanders’ definition but not Carhart’s. Erber’s definition covers most of audiological rehabilitation. The definition used in this chapter is: *Auditory training is the use of instruction, drill, or practice designed to increase the amount of information that hearing contributes to a person’s total perception*. This definition is sufficiently precise to allow the questions raised above to be posed in an unambiguous fashion, and not so restrictive that it excludes procedures of current interest. It should be understood that the effectiveness of auditory training is not independent of other factors in the client’s environment. For example, the creation of special communication conditions, as mentioned by Erber, may be necessary to achieve the maximum benefit from auditory training. The environment and characteristics of the client will be considered as important factors influencing training, but not as parts of the training.
**Aims of Auditory Training**
Increasing the information that hearing contributes to perception is a very general aim. Auditory training programs usually have more specific aims, chosen after considering the specific requirements of the client. A person who has never heard before will need to develop more basic hearing skills than an adult
with a moderate hearing loss of short duration. These basic skills might include discrimination of pairs of vowels or consonants, or identification of items from a small list of words. A person with a hearing-impairment who wishes to play a musical instrument might benefit from exercises in the discrimination and recognition of short sequences of notes with different rhythm and melody patterns. A person fitted with a new hearing aid may benefit from instruction and practice in recognizing sounds through the aid. This practice might take the form of a conversation with the audiologist in which different sounds are produced and discussed. Alternatively, a more structured approach may be used in which sounds are introduced systematically or the new hearing aid user is trained in the recognition of sentences. Instruction and practice in the use of context to derive meaning may be appropriate for more experienced hearing aid users. Each example above falls within our definition of auditory training with the general aim of increasing the information obtained from hearing, but the specific aims are different.
The selection of appropriate short-term goals is an important step in achieving an effective auditory training program. Several methods for matching the specific goals of the training to the needs of the client have been suggested. For example, Alpiner and McCarthy (1987) reviewed a large number of questionnaire-based assessment scales, Garstecki (1981) advocated an initial evaluation to identify "baseline" conditions under which the client achieved a high level of speech recognition, and Lubinsky (1986) proposed the use of an information-processing model of speech perception to determine the stages of processing that limit the client's use of hearing.
**The Need for Auditory Training**
It has often been claimed that humans are predisposed to learn without formal training (Lenneberg, 1967) and it is important to acknowledge the role of everyday experience in learning and maintaining skills. Whether people have normal or impaired hearing, their auditory skills are the product of their past experience and depend on the usual demands that they place on their hearing. It is "untrained" learning like this that enables normally-hearing children to acquire auditory skills initially. Even so, it must also be acknowledged that some experiences and environments encourage learning more effectively than others. The special conditions provided by the mother-child relationship in infancy (Brown, 1977) might be regarded as a naturally occurring auditory training program and many programs for hearing-impaired children have been modelled on this relationship (Ling, 1984).
The use of auditory training with adults does not deny the influence of untrained learning. Instead, it is the function of training to increase the learning rate and/or to raise the final level of performance above the level achievable with untrained learning. The stable level of auditory information processing that can be achieved through untrained learning depends on several factors. Those
factors will be discussed below. Some of them may be altered by auditory training to allow further learning to occur.
**Potential Recipients of Auditory Training**
Potential clients fall into three categories: people who wish to learn additional auditory skills, such as a new language or a musical instrument; people whose hearing condition has changed recently; and people whose auditory skills are stable but inadequate to meet their communication needs. For the first group, training is required if the client’s usual experience does not include sufficient exposure to the new auditory signals for untrained learning to take place. For the second group, changes might be negative such as a rapid loss of hearing, or positive such as the fitting of an improved hearing aid. Some clients, such as cochlear implant recipients, will have a clear need for training in order to become familiar with an auditory signal that is quite different from what they have been accustomed to. At the Royal Victorian Eye and Ear Hospital Cochlear Implant Clinic, all postlinguistically deafened adult implant users undergo a 10-week postoperative rehabilitation course that includes auditory training (Brown, Dowell, Martin, & Mecklenburg, 1990). Depending on the client’s level of performance, the materials and aims of training vary from recognition of phonemes and closed sets of words, to recognition of open-set sentences over the telephone. In other situations, such as clinics dealing with mild hearing losses, auditory training may be required only for a minority of clients, or may be limited to pragmatic training.
Potential clients in the third group are most likely to be those whose hearing condition has been stable for several years, and whose hearing impairment is severe. The majority of this group will probably be elderly people. Crandell, Henoch, and Dunkerson (1991) have recently reviewed the literature on speech perception and aging. They point out that the physiological origins of speech recognition difficulties in this group are uncertain, and elderly listeners with similar audiometric configurations may demonstrate widely varying degrees of speech recognition performance in adverse listening conditions. These variations are presumed to be the result of multiple factors, including the loss of pure-tone sensitivity, deficits in suprathreshold auditory processing, and/or cognitive processing. Crandell et al. (1991) suggest that new cost-effective clinical procedures are required to identify the mechanisms responsible for speech-recognition difficulties so that appropriate amplification systems and rehabilitation programs can be provided. Whether auditory training is an appropriate rehabilitation strategy for this group is unresolved (Ross, 1987). In his review, Ross was careful not to overlook the social aspects of participation in a training program. These aspects include opportunities to meet other people with similar problems, to provide mutual support, and to build self-confidence. The interaction of psychosocial factors and communication effectiveness may be particularly important for elderly people with hearing impairments.
**Skills and Knowledge Required for Auditory Speech Perception**
It has become common to classify auditory skills into a hierarchy of levels according to the stimuli presented and the response required (e.g., Carhart, 1960; Erber, 1982; Ling, 1991). These levels include detection of sound, discrimination between sounds, identification of sounds from a closed set, recognition of sounds in a wider context (open-set), and comprehension of the meaning of speech. This hierarchy has been used to develop auditory training programs assuming that the client should progress systematically from lower to higher levels. Obviously, the lower-level skills are necessary for auditory speech communication and this is the justification for their inclusion in a training program. However, these lower-level skills are not sufficient for the understanding of speech. For example, it may be possible to learn to detect, discriminate, and recognize environmental sounds without learning skills that will help to understand speech. Some training programs have been criticized for their emphasis on learning the gross properties of environmental sounds, particularly if a high degree of performance is required before starting on speech sounds (Rodel, 1985).

In addition to the necessary auditory skills, the understanding of speech requires knowledge of the language and of the world. The phonological, syntactic, and semantic structure of the language must be familiar to the listener in order to understand the message. The listener does not need to have studied grammar; in fact, such theoretical knowledge may not be useful in speech perception at all. The listener does need to recognize which sound combinations form valid words, what sequences of words form valid utterances or sentences, and what the individual words mean. It is a defensible point of view that this knowledge is so interwoven with the processing of auditory signals that they must be learned together. Just as the study of grammar may not be helpful, the learning of individual phonemes or other non-meaningful speech components may not help speech perception much. Auditory and cognitive processing may be so interdependent that they cannot be separated without reducing the effectiveness of the training. This is one of the central issues of auditory training research.
In considering auditory training for adults with acquired hearing loss, it may be assumed that the clients have already learned speech perception skills during childhood. The interdependent auditory and cognitive processes have been learned, but the auditory input has been changed by the hearing loss and its subsequent treatment. In some cases, the cognitive processes may also be affected by disuse or degeneration over long periods of time, or by central damage in some etiologies. Even though the cognitive processing is usually intact it may be necessary for this processing to be engaged for the training of new auditory skills to be effective.
**Analogies With Other Types of Training**
In most fields of human endeavor it is acknowledged that learning occurs and that practice helps to improve performance. A large body of research into learning and teaching exists and it is appropriate to consider whether the conclusions might be applicable to auditory training. Bode and Oyer (1970) summarized the suggestions of Wolfle regarding the application of learning theory to auditory training:
First, the distribution of practice should be suitable for the task to be learned. Second, active participation by the learner is superior to passive receptivity. Third, practice material should be varied so that the learner can adapt to realistic variation and so that his motivation during drill is improved. Fourth, accurate performance records need to be maintained in order to evaluate progress and effects of training. Fifth, the most useful single contribution of learning theory is the provision for immediate knowledge given to learners regarding their performance. (p. 840)
One may assume that these guidelines are applied in most auditory training programs for clinical and research purposes. It is also instructive to consider specific examples and their degree of similarity (or otherwise) to auditory training.
*Sports training.* To become proficient at most sports, it is necessary to practice. Without practice, performance becomes poorer. These statements are self-evident when applied to a sport like tennis, but do they apply to speech perception? One difference between sport and speech perception is in the opportunity for practice. Practice of speech perception is possible at most times and in most places. A conversational partner is required, but the tennis court and other facilities are not. From this point of view, speech perception is more like walking than tennis. Everyday living conditions include many opportunities for practicing speech perception and walking, so most people perform very well without specific training. What happens if these everyday practice opportunities are not available? A complete lack of auditory speech perception practice is rare, requiring total deafness, a profound hearing loss that is not aided, or social isolation. Post-operative scores for cochlear implant patients suggest a slow degradation in performance over the time before implantation. Mean scores for recognition of words in open-set sentences 3 months postoperatively decrease by about 0.5% for each year of profound deafness before implantation (Blamey et al., 1992). The mechanism for this degradation may be a physical degeneration of nerve cells in the damaged cochlea. Degradation of performance in a sport or physical activity through lack of practice appears to occur much more rapidly, and the mechanism is more likely to involve muscles than nerves. At this point, our analogy starts to break down. It is possible to maintain muscle tone and build strength by means of exercise without actually playing tennis. It is not clear whether the performance of auditory nerves in speech perception can be improved without actually perceiving speech. The reader should not infer from this that sports training is solely concerned with physical fitness. Recent research emphasizes that there is a cognitive component involved in all complex movements (Fitts & Posner, 1967), and overall performance on a task can be improved by independent training of this cognitive component (Minas, 1980). The cognitive components include knowledge of the sequence of actions required (Minas,
1980) and the detailed motions of all relevant parts of the body (Newell & Walter, 1981). Auditory training may be similar to the cognitive component of sports training, especially in the light of the strong link between speech production and perception that has been postulated in the motor theory of speech perception (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967; Liberman & Mattingly, 1985). Production and perception have also been shown to be interdependent in training studies such as the one reported by Novelli-Olmstead and Ling (1984). In summary, there are points of similarity between sports training and auditory training, but the differences require one to be cautious in generalizing from one to the other.
*Arithmetic teaching.* Unlike sports, numeracy and mental arithmetic are not physical activities requiring strength or muscle tone. Like sports, they are not everyday activities and so they require explicit training. Learning arithmetic usually proceeds by stages. One learns some numbers. One learns to count and then to add, subtract, multiply, and divide small numbers. Finally, one learns rules that allow generalization to arbitrary sets of numbers. In analogy with speech perception: numbers are nouns, operators are verbs, algebra corresponds to grammar, and equations correspond to meaningful statements. As in the learning of speech and language, the first symbols and statements are learned by repetition and general rules come later. These stages of learning correspond loosely to the preoperational, concrete operational, and formal operational stages of Piaget’s theory of intellectual development (Bell, 1978). Addition and multiplication tables are over-learned through drill and repetition. Once over-learned, this knowledge degrades slowly compared to the degradation of physical skills. Thus, the effect of practice on arithmetic skills is a closer analogy to auditory training than sports training. There is no logical necessity to over-learn arithmetic tables because they can be deduced from a small number of axioms. The advantage in over-learning is in speed of calculation and automaticity of the response (Fitts & Posner, 1967, pp. 123-124). Similarly, a message could be derived from a string of phonemes using phonological, grammatical, and semantic rules. However, it is likely to be much more efficient to over-learn sequences of sounds corresponding to common constructions. Over-learning is the result of frequent repetition. Indirect evidence for this type of processing comes from the fact that frequently occurring words are recognized faster and more accurately than infrequent words in speech perception (Howes, 1957). For an acquired hearing loss, over-learned associations between acoustic inputs and their meanings will be disturbed. For mild hearing losses, the disruption may not be sufficient to destroy the over-learned associations. For severe hearing losses, associations may be lost altogether if the phonemes are inaudible, or if sounds with different meanings become indistinguishable. In such cases, it is the function of auditory training and untrained learning to replace the previously over-learned associations with new ones that more closely reflect the listener’s altered hearing.
Theories of learning and instruction have been applied extensively to the teaching of mathematics. Bell (1978) refers to the works of Jean Piaget, J.P. Guilford, Robert Gagné, Jerome Bruner, David Ausubel, B.F. Skinner, and others, and discusses their relevance to specific tasks, such as teaching mathematics in secondary schools. Most of Bell’s discussion could be applied to the teaching of language also; however, auditory training is not language training. The difference lies in the very important sensory component involved in auditory speech perception, which is so basic that it is usually assumed to be fully developed before formal mathematical or language teaching takes place. In short, there is a body of theory and experience that can be derived from the literature on the training of cognitive skills, but the researcher must be aware of significant differences between auditory speech perception and other cognitive skills.
*Musical training.* Musical training actually falls within our definition and illustrates that normally-hearing people can learn from auditory training. After musical training, people may identify musical intervals and chords, complex rhythms, and other characteristics that may have been meaningless to them as untrained listeners. Trained musicians also perform better than untrained listeners on some psychoacoustic tasks (Beckett & Haggard, 1973). Similarly, untrained listeners often improve their performance on psychoacoustic tasks over hours of practice (Cuddy, 1968). These examples indicate that discrimination and identification of sounds can be improved by training, and some habilitationists have suggested that these improvements will lead to improvements in speech perception. Examples of auditory training methods that incorporate the use of music include the Verbotonal System (Guberina, 1972) and an application of the Kodaly music program to train the perception and production of prosodic aspects of speech (Dawson, 1989).
**A TAXONOMY OF CURRENT AUDITORY TRAINING PRACTICES**
Auditory training methods vary widely because trainers have their own ideas on the effectiveness and importance of training procedures, materials, and styles. It is possible, however, to group training methods into broad categories. This is a necessary step before research can attempt to establish the relative merits of existing training methods.
**Analytic Training**
A training strategy is said to be “analytic” if it involves breaking speech into smaller components which are trained separately. These components may be syllables, phonemes, or segments of speech that share particular segmental or suprasegmental features. For example, individual sessions might concentrate on the differences between vowels with long and short duration, consonants that are voiced or voiceless, or short sentences with different stress patterns. Usually, the training is carried out using tasks requiring identification of a closed set of
items. Two of the attractions of analytic training are the relatively small number of speech features to be trained, and the simplicity of designing and evaluating closed-set identification tasks. The method is based on the premise that comprehension of speech depends on the identification of the component features and phonemes in the message. It is also assumed that ability to recognize speech features and phonemes in isolation carries over into connected discourse. Analytic training tends to concentrate on acoustic cues rather than the meaning of speech, and it progresses from basic cues to more subtle or complex distinctions. Sometimes this is described as a “bottom-up” approach.
**Synthetic Training**
“Synthetic” training focuses attention on more global aspects of the speech such as the meaning, syntax, and context of the message. Training materials typically include meaningful sentences, phrases, or words and the emphasis is on understanding. Synthetic training is related to “top-down” models of speech perception which postulate that the listener synthesizes possible messages from the context and syntax. The listener then chooses between possible messages by comparing them with the incoming acoustic signal. It is unnecessary to analyse the incoming acoustic signal into small components because larger chunks can be compared with the synthesized words, phrases, or sentences. Synthetic training may include sentence perception (Durity, 1982), word-learning procedures (Brooks & Frost, 1983), question and answer procedures (Erber, 1984), and the speech tracking technique (De Filippo & Scott, 1978). The speech tracking technique may include some analytic training components when non-contextual cues are used to elicit a verbatim response from the client.
**Pragmatic Training**
In “pragmatic” training, the listener is instructed about how to obtain information necessary for communication by changing the conditions under which the interaction takes place. The aim is to obtain the maximum benefit from existing hearing skills, instead of improving the skills themselves. Factors that a listener with a hearing loss can control to influence the effectiveness of communication include: (a) the level of the signal – it may be controlled by adjusting the hearing aid gain, by adjusting the distance between speaker and listener, or by asking the speaker to talk at a different level; (b) the signal-to-noise ratio – it may be increased by moving closer to the speaker, by standing in the most advantageous position relative to the noise source and the speaker, or by moving to a quieter location; and (c) the context and the complexity of the message – they can be controlled by asking questions and using appropriate repair strategies. Practice in controlling the context and complexity of conversations would obviously be a useful adjunct to synthetic auditory training programs which emphasize the use of contextual and structural information for perception. QUEST?AR (Erber, 1984) and speech tracking (De Filippo & Scott, 1978) are synthetic training techniques that have built-in opportunities for the client to practice some pragmatic skills. Awareness and control of loudness and signal-to-noise ratio will also be relevant to an analytic program since the available acoustic cues depend on these factors.
**Eclectic Training**
In practice, auditory training programs tend to combine elements of all three strategies described above. One justification for this “eclectic” approach is that each strategy trains a different aspect of speech perception. A second is that, in the absence of clear information about which type of training is best suited to a particular situation, it may be best to incorporate all of them into an auditory training program. Although an eclectic program is defensible on both of these grounds, research into the effectiveness of different training approaches for particular classes of clients is required to find the most efficient combination of strategies.
**ISSUES IN AUDITORY TRAINING RESEARCH**
Unfortunately, much previous research related to auditory training does not make it possible to evaluate its effectiveness directly. For example, all studies of tactile speech processors require training, but rarely do they compare trained and untrained users or different training strategies. This is understandable because their objective has usually been to improve the design of tactile processors; however, the opportunity to assess training separately from the combined device-plus-training effect has often been missed. Consequently, it is very difficult to evaluate the relative effectiveness of devices because the amount and type of training given to the subjects is usually different (Blamey & Cowan, 1992). Research into hearing aids and cochlear implants involves similar considerations concerning training effects. In these cases, the training effects are usually less obvious if the subjects are postlinguistically deafened adults who achieve a reasonable level of speech recognition with relatively little experience. Even so, substantial learning does take place over time (Dowell, Mecklenburg, & Clark, 1986), and this can be affected by training. It would be feasible and advantageous to design and complete studies that would address some of the issues discussed below.
**Effectiveness of Training**
The three main issues involved in establishing the effectiveness of training are: (a) how to evaluate an improvement, (b) whether the improvement reflects a significant change in the listener’s perception of the real world, and (c) how long the effects of training are retained.
*Assessment.* An auditory training program should start with a comprehensive assessment of the auditory skills of the client for two reasons. Firstly, the initial level of performance must be established accurately for comparison with performance during and after training. The initial and final evaluations must include
materials that are representative of realistic situations so that reliable inferences about the client's performance outside the test situation can be made. Secondly, the assessment must identify specific areas of weakness in the client's performance so that the training can concentrate on these areas to achieve the maximum improvement. Usually, this will require testing of closed-set materials or structured open-set materials such as monosyllabic words (Boothroyd, 1968), or the SPIN sentences (Kalikow, Stevens, & Elliot, 1977). During training, progress is monitored by shorter assessments that need to be related closely to the training materials, and may form part of the practice provided to the client. These measures ensure that the training tasks remain appropriate and help to maintain motivation. Ongoing assessments also provide valuable information if the rate of progress is not uniform. If learning occurs rapidly at first and then slows down, it may be more cost-effective to halt training before a plateau in performance is reached. Learning sometimes progresses in a series of jumps rather than smoothly. In this case, the jumps may indicate that several processes are being learned together and specific training of these individual processes might produce the jumps in performance more quickly. For example, Alcántara, Whitford, Blamey, Cowan, & Clark (1990) reported step-like changes in recognition of speech features among children who received specific training with an electrotactile speech processing device combined with hearing aids.
*Generalization.* It is essential to demonstrate that auditory training will improve communication in real-life situations as well as under artificial test conditions. This involves a wide variety of evaluation materials, speakers different from the person(s) who carried out the training, and testing under conditions likely to be encountered outside the clinic or laboratory. Often, improvements in auditory skills are demonstrated in controlled conditions such as those usually encountered during training sessions. Only rarely has it been demonstrated that these changes may be accompanied by improvements under more realistic conditions (e.g., Walden, Erdman, Montgomery, Schwartz, & Prosek, 1981). There is little information about the factors that determine whether skills learned under one set of conditions will carry over to another set of conditions. Probably, this will depend on the similarity of the tasks and the establishment of a link between them, although the human brain is very good at recognizing patterns or rules and applying them to new situations. Much research remains to be carried out into the factors that promote generalization of skills. Knowledge of these factors will be important in choosing effective training conditions and strategies. Closely related to generalization is the question of how to select appropriate training programs, materials, and goals to meet the everyday needs of individual clients. Some possible approaches were discussed in the section on aims of auditory training, but these procedures would all benefit from further research in order to enhance their diagnostic and prognostic powers. Self-report scales and questionnaires might also be administered pre- and post-training to indicate possible generalization effects of training.
*Retention.* When new skills are learned under special training conditions,
there is no guarantee that they will be retained. If new skills are used regularly in everyday conversation, it is likely that they will be maintained or improved, but if training is discontinued and generalization does not occur, the skills are likely to be lost. Degradation of speech perception skills in adults with postlinguistic hearing loss occurs relatively slowly (Blamey et al., 1992), but these skills have been over-learned and practiced for many years. Presumably, some of the same skills continue to be practiced after the onset of deafness through the mechanisms of speechreading, speech production, reading, and so on. Skills learned through training may not be as well established and may be forgotten more rapidly. Few studies have investigated whether the effects of auditory training persist beyond the end of the training program. Rubinstein and Boothroyd (1987) found that gains made as a result of auditory training were not lost in the month immediately following the end of training.
**Comparison of Training Methods**
Comparisons of training schemes should take into account the learning rate, as well as the overall improvements in performance. If two training methods achieve the same result but one takes longer, the slower method must be judged less efficient. The most common method of assessing training schemes is to train with each scheme for a fixed amount of time and measure the improvement. This is not completely straightforward, however. For example, the improvement measured will depend on the client’s initial level of skill, and the rate of improvement may slow down during training. The improvement will depend on the skill that is evaluated as well as the training method. For example, an analytic method may be more effective than a synthetic method for training recognition of consonants in nonsense syllables. On the other hand, the synthetic method may produce a larger increase in sentence recognition. Thus, comparisons of training methods must be made on the basis of consistent evaluation materials. Some training methods do not lend themselves to this kind of comparison conveniently. For example, the word-learning procedure used by Brooks and Frost (1983) involved training a subset of words until a criterion level was achieved before moving on to another subset of words. A method like this can produce steps in performance, with faster increases for new materials and slower increases as the criterion is approached, particularly if the criteria are difficult to reach. In research using step-wise training methods, it may be advisable to incorporate many steps in order to average out the uneven learning rate.
**Cost-Effectiveness**
In clinical contexts, training costs are important as well as effectiveness. Costs can be reduced by using group training, trainers with a minimum level of qualification, automated or self-training methods, or longer training sessions less frequently. These variations may influence the effectiveness of the training as well as its cost. Usually, the cost is spread uniformly, but the learning rate
declines over time. Thus, cost-effectiveness decreases as a function of time. Inevitably there comes a cut-off time beyond which training is no longer worthwhile. More effective training leads to higher levels of auditory performance in the same amount of time. Determination of the cut-off time will depend on factors that are not directly related to the training. Such factors include the amount the client or the clinic is willing to spend and the social costs of not providing training. Cost-effectiveness also depends on the retention of skills. It should not be assumed that skills trained beyond the level maintained during everyday conversation will persist indefinitely.
**Modality of Training**
Some proponents of auditory training insist that the auditory signal should be used without lipreading. Others feel that the visual signal does not detract from the training effect of the auditory signal so both should be used together, particularly if the client normally relies on lipreading as well as audition. This question has given rise to widely differing philosophies in the education of deaf children, ranging from auditory/verbal approaches that rely entirely on audition, to the use of specially formulated visual supplements as in cued speech (Cornett, 1967). Many approaches have helped children overcome the problems of hearing-impairment, but no clear indication of the best combination of modalities for training has emerged. This question is also relevant to the auditory training of adults with postlinguistic hearing loss. In the case of adults who are capable of some communication without lipreading, as in telephone use, some training without lipreading seems advisable.
**METHODOLOGY FOR AUDITORY TRAINING RESEARCH**
The history of auditory training indicates a useful role in at least some cases of adult rehabilitation, but there have been few direct evaluations of its effectiveness. One reason for the lack of objective research in all types of training is the inherent difficulty in obtaining reliable data. Another reason is the large number of factors that influence effectiveness. Methods of overcoming these difficulties in auditory training research are discussed below.
**Mathematical Model**
The amount of training, the amount of learning, the degree of generalization, and the degree of retention of skills are inter-related. A mathematical formulation is helpful in crystallizing these relationships and testing hypotheses. Equation 1 is an example of a linear differential equation having many of the qualitative properties discussed above:
$$\frac{d}{dt}S_i = \left(\Sigma_j\, g_{ij}P_j\right) - fS_i \quad (1)$$
where $\frac{d}{dt}S_i$ is the rate of change (derivative with respect to time) of an auditory skill, $S_i$, and different subscripts denote different auditory skills such as the recognition of words in sentences or identification of voicing in initial consonants.
$P_j$ is the amount of practice per unit time on an auditory task denoted by the subscript $j$, such as 6 hr of general conversation per day, or 1 hr of vowel identification training per week.
$g_{ij}$ describes the learning effect of practice $P_j$ on auditory skill $S_i$.
$f$ describes the rate at which skills decrease in the absence of practice (forgetting rate).
$\Sigma_j$ denotes a sum over all types of practice (all values of $j$).
The equation states that the rate at which the skill increases is equal to the sum of the effectiveness of each type of practice minus the rate at which the skill is lost without practice. Several aspects of this model should be noted: Firstly, the model can describe explicitly the effects of untrained learning by including the client’s everyday experience as one of the types of practice. This would be necessary in the case of a young child, for example. However, in the case of an adult with stable untrained performance, the effect of everyday experience will be the same as the everyday forgetting rate, so both these terms could be omitted. Secondly, the $g_{ij}$ coefficients provide a measure of generalization by specifying the effect of any given type of practice ($P_j$) on any given type of skill ($S_i$). For example, if generalization from practice $P_j$ to skill $S_i$ does not occur, this means that $g_{ij}$ is 0. Thirdly, it is assumed that the probability that a unit of skill will be lost is constant over time so that the forgetting rate, $fS_i$, is proportional to the level of that skill. Often, $fS_i$ is quite small (e.g., 0.5% per year in the study of Blamey et al., 1992) and may be ignored in short-term experiments.
Assuming that $g_{ij}$, $P_j$, and $f$ are all independent of time, integration of Equation 1 gives the skill $S_i$ as a function of time, $t$:
$$S_i = \left(1 - e^{-f(t-T)}\right) \left(\Sigma_j\, g_{ij} P_j\right)/f \quad (2)$$
where $T$ is a constant representing the time at which $S_i$ was 0.
This equation represents a curve that rises with decreasing slope until an asymptote is reached at $S_i = (\Sigma_j g_{ij} P_j)/f$. Equations 1 and 2 are formulations of a “linear” model that has been used extensively in mathematical learning theory following the work of Hull (1943) and others. The situations studied included discrimination learning, free verbal recall, rote serial learning, imitation learning in children, learning of miniature logic systems by children, aspects of second language acquisition, optimum teaching procedures, and many others. The most detailed studies were carried out over short time spans and usually replaced the time variable with the number of experimental trials of the skill to be learned. Atkinson, Bower, and Crothers (1965) discuss the history and theoretical basis of learning models as well as a number of examples, including
experimental data.
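The integration step leading to Equation 2 is not shown above, but it is easy to sketch. The outline below assumes, as stated, that $g_{ij}$, $P_j$, and $f$ are constant in time, and writes $G_i = \Sigma_j g_{ij}P_j$ purely for brevity:

```latex
% Sketch of the integration from Equation 1 to Equation 2,
% assuming g_ij, P_j, and f are constant in time (G_i = sum_j g_ij P_j).
\begin{align*}
\frac{dS_i}{dt} &= G_i - f S_i
  && \text{Equation 1 in abbreviated form} \\
\frac{d}{dt}\!\left(e^{ft} S_i\right) &= e^{ft} G_i
  && \text{multiply through by the integrating factor } e^{ft} \\
e^{ft} S_i &= \frac{G_i}{f}\left(e^{ft} - e^{fT}\right)
  && \text{integrate from } T \text{ to } t \text{ with } S_i(T) = 0 \\
S_i &= \left(1 - e^{-f(t-T)}\right)\frac{G_i}{f}
  && \text{Equation 2, with asymptote } G_i/f \text{ as } t \to \infty
\end{align*}
```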
The model provides an empirical description of observations in clinical situations and training studies. No theoretical justification will be put forward here, other than that the model is sufficiently flexible to fit many situations. Probably the most serious difficulty in applying the model is in defining the "level of skill." To a first approximation, "level of skill" may be a score on an appropriate speech perception test, but this leads to a problem once the score reaches 100%. The model imposes no upper limit on the level of skill and so will not provide good predictions for tests where performance is close to 100%. A related difficulty is that the level of skill and the score on a speech test need not be linearly related. For example, an increase from 10% to 20% may not represent the same amount of learning as an increase from 20% to 30%. To minimize such problems, the speech tests used to quantify the levels of skill should be chosen to vary over a range that does not lie close to the limits of 0% and 100%. Alternative approaches might be to use a non-linear transformation between the test score and the skill level, to use a more open-ended measure such as speech tracking rate, or to keep scores within a small range by varying an additional parameter such as signal-to-noise ratio.
**Illustration of the Mathematical Model**
To provide examples, we have applied the model to the clinical data displayed in Figure 1. In each panel of the figure, speech tracking rate, in words per minute (wpm), is displayed as a function of the number of training sessions. Client A was a postlinguistically deafened adult cochlear implant recipient who had used the implant for over 3 years before the data collection began. She was trained for 20 sessions spread fairly irregularly over 49 months. In the first two sessions, the measured tracking rate using the implant without lipreading, $S_1$, was about 18 wpm. We will assume that the client's performance was stable before training was introduced so that
$$\frac{d}{dt} S_1 = g_{10} P_0 - f S_1 = 0 \quad (3)$$
or $S_1 = (g_{10} P_0)/f = 18$ wpm
where $P_0$ represents the amount of practice per unit time arising from everyday experience.
$g_{10}$ represents the effectiveness of everyday experience on the speech tracking task using the implant without lipreading.
After training is introduced,
$$\frac{d}{dt} S_1 = g_{10} P_0 + g_{11} P_1 - f S_1 \quad (4)$$
where $P_1$ is the amount of training per unit time with the implant.
$g_{11}$ represents the effectiveness of this training on the speech tracking task.
The solid line in Figure 1 represents the integral of Equation 4 that best fits the data. This was determined by adjusting the values of the constants $f$, $T$, and $(\Sigma_j g_{1j} P_j)/f$ in Equation 2 to minimize the sum of squared differences between the data and the fitted values. The values of these constants give us the asymptotic skill level of 48 wpm, and the forgetting rate of 0.11 per month or 0.26 per session. Using Equation 3 and the value of the asymptote, we can also calculate $g_{11} = 8.1$ wpm/session.
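For readers who wish to reproduce this kind of fit, the sketch below estimates the three constants by least squares. The session times and tracking rates are hypothetical placeholders shaped like the description of client A (starting near 18 wpm and rising toward 48 wpm), not the published measurements, and the use of `scipy` is our choice, not part of the original analysis.

```python
# Sketch: least-squares fit of the Equation 2 learning curve to speech
# tracking data. The data points below are illustrative placeholders
# patterned on the description of client A, not published measurements.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(t, asymptote, f, T):
    """Equation 2: S(t) = asymptote * (1 - exp(-f * (t - T)))."""
    return asymptote * (1.0 - np.exp(-f * (t - T)))

t_months = np.array([0, 2, 5, 9, 14, 20, 27, 35, 42, 49], dtype=float)
rate_wpm = np.array([18, 21, 27, 32, 37, 41, 44, 46, 47, 48], dtype=float)

# Fit the asymptote, forgetting rate f (per month), and time offset T.
(asym, f, T), _ = curve_fit(learning_curve, t_months, rate_wpm,
                            p0=[50.0, 0.1, -5.0])

# Back out the training effectiveness g11 as in the text: the everyday
# practice term g10*P0 equals 18*f (from the pre-training equilibrium),
# and P1 = 0.41 training sessions per month.
P1 = 0.41
g11 = (asym - 18.0) * f / P1
print(f"asymptote = {asym:.0f} wpm, f = {f:.2f} per month, "
      f"g11 = {g11:.1f} wpm per session")
```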
The data for a client with a congenital profound hearing impairment who was evaluated using lipreading-alone for 53 sessions are also shown in Figure 1. The same individual used an electrotactile speech processor (Cowan, Alcántara, Blamey, & Clark, 1988), a hearing aid, and lipreading together for an additional 36 sessions. These data were not collected concurrently, and session 0 in the combined condition corresponds to session 48 in the lipreading-alone condition.
**Figure 1.** Experimental data and fitted learning curves derived from speech tracking training sessions for two profoundly hearing-impaired adult clients.
Table 1 summarizes the parameters derived from the mathematical model for each case. The analysis gives us far more insight into the performance of these clients than could have been obtained by considering only the raw data. For example, the effectiveness coefficients indicate that the training was much more effective with the implant than in either of the other two conditions. This possibly reflects the amount of information available in the different training modalities, but it could also be caused by individual differences between the two clients. Although training was most effective with the implant, performance at the point of asymptote was best for client B in the combined condition. This is due to a higher initial performance and to the higher number of training sessions per month. Client B’s initial performance in the combined condition reflects the level maintained by everyday experience and includes some contribution from the earlier training with lipreading-alone. These effects could have been separated if the data collection had included an evaluation of the tracking rate in the combined condition at the beginning of the training with lipreading-alone. The forgetting rates in the three conditions are also of interest. They are all of the same order of magnitude, but much larger than the value inferred from the results of Blamey et al. (1992), which was only 0.004 per month. This suggests that the information or skills learned during speech tracking are forgotten much faster than the skills and information used in the perception of open-set sentences. One possible explanation of this may be that the largest improvements in speech tracking come from familiarity with the context (the story being read) and/or the individual speaker. This detailed information is unlikely to be retained for as long as the general skills required for speech perception which are maintained by everyday use. Thus, we are led to question whether speech tracking had any influence on these more general skills. Unfortunately, the present data do not make it possible to answer this question because an evaluation with test materials other than those specifically trained was not performed. Danz and Binnie (1983) addressed the question in a study with normally-hearing subjects. They found no effect of auditory-visual tracking training on perception of sentences, although there was a small improvement in consonant recognition. Obviously, one cannot draw general conclusions based on data obtained from only two clients. However, modelling may be fruitful in providing a common framework for the interpretation of results from more extensive training studies.
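Since Table 1 (below) reports each forgetting rate both per session and per month, a quick consistency check is possible: with $P$ sessions per month, the two rows should satisfy $f_{\text{month}} \approx f_{\text{session}} \times P$. A minimal sketch:

```python
# Consistency check on the two forgetting-rate rows of Table 1: with P
# sessions per month, f per month should equal f per session times P.
cases = {
    "Client A, implant":    (0.41, 0.26),  # (P, f per session)
    "Client B, lipreading": (2.94, 0.09),
    "Client B, combined":   (1.29, 0.13),
}
for name, (P, f_session) in cases.items():
    print(f"{name}: f = {f_session * P:.2f} per month")
# Prints 0.11, 0.26, and 0.17; Table 1 lists 0.11, 0.25, and 0.17,
# with the small discrepancy due to rounding of the tabulated values.
```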
**Controls**
At first glance, it appears that a training study should involve just an initial evaluation, a training phase, and a final evaluation to see whether any improvement occurred. This approach is not really adequate to demonstrate that the training caused the improvement. For example, naive subjects often perform better on the second exposure to the test conditions and procedures. Training studies are particularly prone to misinterpretation of this kind because the subjects are in a state of change. They may have been recently deafened, or recently fitted with a new hearing aid, and their speech perception skills may be expected to change in response to everyday experience as well as to training.
Table 1
Mathematical Model Parameters Derived
From the Speech Tracking Training Data Presented in Figure 1
| Client: | A | B | B |
|---------|---|---|---|
| Condition | Implant | Lipreading | Lipreading + Hear. aid + Tactile |
| No. of sessions | 20 | 53 | 36 |
| Duration (months) | 49 | 18 | 28 |
| P, sessions per month | 0.41 | 2.94 | 1.29 |
| Initial level (wpm) | 18 | 9.3 | 39 |
| Asymptote (wpm) | 48 | 26 | 65 |
| f, Forgetting rate (per session) | 0.26 | 0.09 | 0.13 |
| f, Forgetting rate (per month) | 0.11 | 0.25 | 0.17 |
| g, Effectiveness (wpm per session) | 8.1 | 1.5 | 3.5 |
For these reasons, it is prudent to include a “no-training” control group to establish a baseline performance with which the training condition can be compared. If two or more training strategies are being compared and the difference between the strategies is the only variable of interest, a no-training control may be unnecessary. Several studies of this type can be found in the literature. For example, Walden et al. (1981) compared performance for two groups of subjects, one of which received extra consonant training. Rubinstein and Boothroyd (1987) compared the performance of two groups of subjects who received different training in an experimental design that included two no-training phases as well as the training phase. Single-subject designs should also include controls such as a no-training or other-training treatment condition in order to establish a baseline condition. Single-subject designs are more complex than group designs in this respect because the results may depend on the order in which the training conditions are applied.
**Evaluation**
Because training studies are concerned with the rate of change of skills, they must involve several evaluations separated by intervals of time during which the training or no-training treatments are applied. The initial and final evaluations need to produce highly reliable estimates of performance because the effectiveness of training will be measured by a difference in scores. This requirement is especially important for short-term studies in which the improvement due to training may be small, and for studies where different training strategies are compared with one another. To illustrate the nature of this problem, consider
a group experimental design in which the performance of group A who receive training, is compared to the performance of group B who do not receive training. Assume a test that yields a distribution of individual scores with a standard deviation of 10%. Table 2 shows the standard deviations of the distributions of group scores and differences. Note that the distribution of differences between two scores has double the variance of the distribution for a single score. To demonstrate that learning for group A is greater than zero with 95% confidence level, the change in performance must be at least two standard deviations away from zero. In other words, post-training performance must be at least 14% higher than pre-training performance. Similarly, the amount of learning for group A must be 20% higher than for group B to demonstrate a significant training effect. The requirement for such a large training effect can be overcome to some extent by using larger numbers of items in the evaluation in order to reduce the test score variability, by using a larger number of subjects to reduce the variability of group mean improvements, and by training for a period of time that is sufficiently long to produce a large improvement in test scores. All of these conditions involve increased time for the researcher and the clients, making research into training an expensive exercise.
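The standard deviations in this example follow directly from the usual rules for variances of means and differences; a minimal sketch, assuming independent scores and the group size of 4 used in Table 2:

```python
# Sketch of the variance arithmetic behind Table 2: standard deviations
# of group means, learning effects (post minus pre), and training
# effects (group A minus group B), assuming independent scores.
import math

sd_individual = 10.0            # SD of an individual score, in %
n = 4                           # group size assumed in Table 2

sd_group_mean = sd_individual / math.sqrt(n)   # 5%
sd_learning = math.sqrt(2) * sd_group_mean     # ~7% (difference of two means)
sd_training = math.sqrt(2) * sd_learning       # 10% (difference of differences)

# A change must be about 2 SDs from zero for 95% confidence.
print(f"learning effect: SD {sd_learning:.1f}%, "
      f"detectable change ~{2 * sd_learning:.0f}%")
print(f"training effect: SD {sd_training:.1f}%, "
      f"detectable difference ~{2 * sd_training:.0f}%")
```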
**Group Designs**
Group designs are useful for several reasons. Firstly, a control group allows for the possibility that subjects’ evaluation scores may change in the absence of training. Secondly, other variables may be controlled. For instance, the evaluation materials may have been equated for difficulty in normative studies, but it is prudent to balance the order of presentation across subjects within groups to control for any differences between them.
**Table 2**
Standard Deviations of Distributions of Scores in the Analysis of a Group Design to Evaluate a Training Effect
| Quality Evaluated | Standard Deviation |
|--------------------------------------------------------|--------------------|
| Individual subject score | 10% |
| Initial group mean score | 5% |
| Final group mean score | 5% |
| Group learning effect (final – initial group score) | 7% |
| Training effect (group A – group B learning effect) | 10% |
*Note.* A group size of 4 subjects is assumed. It is assumed that subjects are perfectly matched so that subject variations do not increase the group standard deviations above the level arising from test-retest variation of the evaluation used. In practice, individual differences will increase the standard deviations of group mean scores.
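As a concrete illustration of the arithmetic behind Table 2, the short sketch below propagates the assumed 10% individual-score standard deviation through group means and differences, assuming independent scores and the table's group size of 4.

```python
# Error propagation behind Table 2: SDs of group means and differences.
import math

sd_individual = 10.0                      # single-subject score SD (%)
n = 4                                     # group size assumed in Table 2
sd_group = sd_individual / math.sqrt(n)   # initial or final group mean: 5%
sd_learning = math.sqrt(2) * sd_group     # final minus initial: ~7%
sd_training = math.sqrt(2) * sd_learning  # group A minus group B: ~10%
print(sd_group, round(sd_learning, 1), round(sd_training, 1))
# At the ~2 SD (95%) criterion, group A must improve by about 14% and must
# out-improve group B by about 20%, as noted in the text.
```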
Similarly, groups may be matched to minimize the effects of extraneous variables such as age, degree of hearing loss, or hearing aid used. It is often desirable to evaluate such variables explicitly, rather than controlling for them in a balanced group design. In this case, the groups may be deliberately selected with differences in these factors so that any interactions between the selected variable and the training condition can be observed. Some factors such as hearing loss can only be evaluated in group experiments because they cannot be controlled in a single-subject design. Group designs also reduce the possibility that an unrepresentative result will be obtained from a small number of subjects. Larger numbers of subjects help to reduce the variability of mean learning rates and improvements calculated from the data. It should be noted that an equal reduction in variability can sometimes be attained by increasing the number of items in the evaluation test and by repeating the evaluation several times. The latter options are preferable in many situations because the time spent on training is not increased, and training time is usually much greater than the evaluation time.
**Single-Subject Designs**
A single-subject design does not require matched groups of control and experimental subjects. On the other hand, the single subject must be evaluated in a multiple-phase experiment if improvements are to be attributed to training unambiguously. For instance, an improvement observed in a 1 month period may be due to training, or to other events such as a resolution of personal problems, reduction of a conductive hearing loss, or everyday auditory experience. To eliminate these possibilities, a non-training phase is desirable, as well as the evaluation of variables likely to be affected by uncontrolled factors (such as hearing thresholds). A non-training phase is a time when evaluations are carried out before, after, or in between training phases in order to establish whether performance is stable in the absence of training. A demonstration that auditory speech perception improved during training but not at other times, and that untrained skills did not change simultaneously is much more convincing evidence of the specific effects of training than an uncontrolled study. Order effects can be important. The improvement in the first phase is often greater than in following phases because motivation is high and there is more to learn. In later phases, the client may already have reached a higher level of competence, and learning further skills may be more difficult. Consequently, several interleaved training and non-training phases may be preferred. An alternative that also adds to the generality of results is to combine single-subject and group methodologies with the order of phases balanced across subjects. This design was used by Alcántara, Cowan, Blamey, and Clark (1990).
**Factors Influencing Effectiveness**
The effectiveness of auditory training depends on extrinsic factors as well as those directly controlled by the clinician. These factors may produce a wide
variation of results within groups of clients, and make it more difficult to obtain reliable measures of learning rates. These factors need to be considered in designing research studies. Also, they must be controlled, as much as possible, by selecting homogeneous groups.
*Hearing levels.* Severe hearing losses restrict the cues available and impose an upper limit on the level of auditory skills that may be learned. In terms of the suggested mathematical model, the $g_{ij}$ (gain) coefficients will tend to be smaller for greater hearing losses. Coefficients corresponding to skills $S_i$ that require the greatest auditory sensitivity may be zero, indicating that no amount of training would improve these skills. Other skills that do not require such high levels of auditory sensitivity may be improved. Minor hearing losses may also render training ineffective because a high level of auditory skill is likely to be attained through everyday conversational experience. Training to achieve a level of auditory skill above that maintained by the client’s everyday conversational experience will have only a temporary effect, since the skills will gradually decrease to the stable level after training is removed. One example of this could arise from intensive training of a client by a single speaker. If the client learned to use particular characteristics and mannerisms of the speaker that were not generalized, one would expect these skills to be lost relatively quickly after training. Second language acquisition is another good example where skills decrease if unused for long periods of time.
*Device used.* The total effect of a hearing loss on communication depends on the compensating benefits achievable with a hearing aid or other sensory-prosthetic device, as well as on the negative effect of the hearing loss. One would therefore expect that the effectiveness of training would be greater for more effective sensory-prosthetic devices. Thus, the benefits of a well-fitted sensory-prosthetic device may include faster learning of auditory skills as well as a higher stable level of performance with the device. The first electronic hearing aids were not wearable and were used exclusively in training sessions because learning was faster with the help of the aid. Modern hearing aids and cochlear implants are wearable, however, and offer stable levels of auditory performance that are higher than the level of performance that could be achieved in the unaided condition, as well as faster learning of the required skills. In evaluations of assistive devices, the learning aspect is often dismissed on the tacit assumptions that no new auditory skills will be learned with the new aid, or that the relative levels of performance with alternative aids will not be affected significantly by learning. These assumptions are justifiable in comparisons of similar devices such as hearing aids with small differences in the gain characteristics across frequency. New generations of assistive devices that are quite different from conventional hearing aids are being developed. They include speech-processing hearing aids, cochlear implants, and tactile processors (Levitt, 1986). For these devices, the previous assumptions may no longer be justifiable.
To clarify this statement, consider the hypothetical case of an adult with a severe hearing loss who uses a hearing aid, and later decides to get a cochlear
implant. We will apply the mathematical model hypothetically to explore the possible outcomes for the client (see Figure 2). Let $S_h$ be the level of skill with the hearing aid and $S_c$ the level of skill with the implant. Immediately prior to the implantation, $S_h = g_{hh} P_h / f$, where $P_h$ is the practice obtained with the hearing aid under everyday conditions and $g_{hh}$ is the effectiveness coefficient for everyday hearing aid practice (second subscript) on perception using the hearing aid (first subscript). The example assumes that the hearing aid user has had stable hearing over several years and has reached an asymptotic level of performance with the aid. Similarly, $S_c = g_{ch} P_h / f$ when the implant is first used. In this case, the relevant effectiveness coefficient, $g_{ch}$, refers to the effect of everyday hearing aid experience on perception with the cochlear implant (note that $g_{ch}$ is non-zero because we presume that the client’s pre-operative experience with the hearing aid contributes to postoperative performance with the implant). $S_c$ is shown preoperatively by the dashed line in Figure 2 because it cannot actually be evaluated until after the implant operation. If the implant provides quite different information than the hearing aid, $g_{ch}$ is likely to be smaller than $g_{hh}$ and the implant user will perform worse with the implant than with the hearing aid to begin with (as shown in Figure 2). On the other hand, if $g_{ch}$ is greater than $g_{hh}$ there will be an immediate improvement in skills when the hearing aid is replaced by the implant. This is possible if the implant provides information in a similar form to that provided by the hearing aid, but with greater precision. Several years later the stable level of performance with the implant will be $S_c = g_{cc} P_c / f$, where $g_{cc}$ is the effectiveness coefficient for everyday practice on perception with the implant. This is shown by the asymptotic performance on the right of Figure 2. We will assume that the implant is used for the same amount of time and under the same circumstances as the hearing aid was previously used, so that $P_c = P_h$. The benefit of the implant depends on the comparison of $g_{cc}$ with $g_{hh}$. If the implant provides access to information not available through the hearing aid, $g_{cc}$ may exceed $g_{hh}$ and provide a significant benefit even though $g_{ch}$ may have been less than $g_{hh}$ and the implantation resulted in an initial drop in performance. As this example illustrates, the effect of learning can be very important in arriving at the best decision regarding the benefits of an assistive device. If training is provided in the initial period after the operation, the long-term level of performance with the implant may be reached more quickly since $(d/dt) S_c = g_{cc} P_c + g_{ct} P_t - f S_c$, where $g_{ct}$ is the effectiveness coefficient of the training and $P_t$ is the practice provided during training. To be beneficial, $g_{ct}$ must exceed $g_{cc}$ by a large factor because $P_t$, the time spent on training, is usually much smaller than $P_c$, the time spent using the implant in everyday conditions. Figure 2 displays the auditory skills of this hypothetical case as a function of time. The difference between the upper and lower curves after the implantation indicates the cumulative effects of the training. 
Note that once training ceases, the difference between the two curves is gradually reduced as they both tend to the same asymptote, determined by the client’s everyday experience with the implant.
Figure 2. Theoretical learning curves for a hearing aid user who decides to have a cochlear implant. Post-operatively, two learning curves indicate predicted performance with and without a short period of training. See the text for a full explanation.
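The following minimal sketch generates curves of this kind from the model; every parameter value is a hypothetical illustration chosen only to reproduce the qualitative shape of Figure 2, not data from any study.

```python
# Sketch of the learning model: dS/dt = g*P (practice) - f*S (forgetting).
# All coefficients below are hypothetical illustrations.
f = 0.15                          # forgetting rate per month
g_hh, g_ch, g_cc = 0.9, 0.4, 1.2  # effectiveness coefficients (hypothetical)
g_ct = 6.0                        # training effectiveness (hypothetical)
P_day = 10.0                      # everyday practice per month (arbitrary units)
P_train = 0.5                     # extra practice per month during training

S_h = g_hh * P_day / f            # asymptotic skill with the hearing aid
S_c0 = g_ch * P_day / f           # initial skill with the implant (< S_h here)

def simulate(S0, months, training_months):
    """Euler steps of dS/dt = g_cc*P_day + g_ct*P_t - f*S, one month each."""
    S, curve = S0, []
    for m in range(months):
        P_t = P_train if m < training_months else 0.0
        S += g_cc * P_day + g_ct * P_t - f * S
        curve.append(S)
    return curve

trained = simulate(S_c0, 36, training_months=6)
untrained = simulate(S_c0, 36, training_months=0)
# Both curves approach the same asymptote g_cc*P_day/f once training ends.
print(round(S_h, 1), round(trained[-1], 1), round(untrained[-1], 1))
```

With these illustrative values the implant user starts below the hearing aid asymptote, the trained curve rises faster, and both simulated curves converge on $g_{cc}P_c/f$, mirroring the convergence described above.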
The post-operative performance described by the upper learning curve in Figure 2 is analogous to that of a sub-group of five cochlear implant users who received an eclectic auditory training program shortly after implantation (Lansing & Davis, 1988). Their performance improved on several auditory perception tasks during the rehabilitation period, and performance was maintained at the improved level for a further 9 months without training. In the same study, a control group of eight implant users showed evidence of untrained learning for vowel and consonant perception (as would be suggested by the lower learning curve in Figure 2). The control group received training 9 months after implantation and then improved to about the same level of performance as the experimental group. The authors do not give pre-operative hearing aid scores on the same tests, so it is not possible to determine whether there was an initial drop in performance as postulated in Figure 2. Nor is it possible to estimate the $g_{hh}$ and $g_{ch}$ coefficients for these subjects.
*Auditory experience.* As illustrated by the hypothetical example above, the difference between trained and untrained performance is transitory and the long-term stable level of an auditory skill is determined by the amount of practice under everyday conditions. Thus, training becomes less worthwhile for a client who already has a lot of auditory experience and is approaching the projected stable level of performance. A person who is well below the stable level of performance has more to gain, and will learn faster than someone who is already close to the stable level. Thus, training tends to be more useful at times when conditions have changed through a sudden loss of hearing, or the fitting of a new device because these changes alter both the immediate performance level
and the projected stable level. A notable exception to these statements may be the case where the training itself has an effect on the everyday experience of the client. For example, the training may introduce the client to a new group of acquaintances or increase confidence and motivation so that the amount of everyday interaction is significantly increased even after training ends. Pragmatic training can also increase the effectiveness as well as the amount of everyday auditory experience.
*Environment.* The environment determines the hearing conditions under which the client must operate in everyday life, and training should aim to increase performance under these conditions. Given that one may wish to produce the maximum improvement under particular conditions, what implications does this have for the selection of a training method? This question is equivalent to the problem of finding the type of practice $P_j$ for a given auditory skill $S_i$ such that $g_{ij}$ is maximized. One might expect that training should be given under the conditions in which communication will often occur, for example, training in noise may be most appropriate for people who work in noisy environments. There are reasons why this may not be the best strategy, however. The first reason is that learning often progresses from simple to more complex stages. In analogy with arithmetic teaching, the initial stages of learning may occur more quickly under simplified and/or formalized conditions. One might also wish to use artificially difficult conditions to concentrate attention on particular aspects of the task, or to challenge the client to reach a higher level of performance. It is possible that a variety of conditions including both artificially simplified and artificially difficult tasks may be more effective than matching real conditions. This is in contrast to the requirements of evaluation which should be administered under conditions that resemble real-life situations, subject to the usual requirements of reliability and sensitivity of the test.
*Personality.* Individual differences between clients can be influenced by factors more subtle than those discussed so far. Research into the effects of client and clinician personalities on the effectiveness of training is complicated by a lack of quantitative measures and the multivariate descriptions required to characterize human attitudes and styles of interaction. Motivation, willingness to learn, willingness to accept criticism and act on it, adaptability, and intelligence are all characteristics that may influence the rate of learning and the eventual effect of training on everyday speech perception skills. For example, Bode and Oyer (1970) suggested that age and intelligence had significant effects on the learning rates within their groups of adult subjects. Many personality characteristics are difficult to quantify but their effects can be significant, nevertheless. Some of these are attitudes that can be changed or controlled rather than fixed characteristics, and the training strategy should be designed to encourage positive attitudes (Binnie, 1977). For example, it is necessary to provide opportunities for success early in training to increase enthusiasm and confidence. Without early success, many clients will give up trying. Too much success can also have negative effects, leading to overconfidence or boredom. We learn from our mistakes,
but too many or too few mistakes can reduce the learning rate. Criterion-based training strategies where progress is contingent on reaching short-term goals, go some way towards achieving a compromise. It may still be necessary, however, to recognize the differences between individuals by choosing criteria appropriately to maintain motivation and avoid stagnation. These considerations are largely dependent on the skill of the trainers and their sensitivity to the attitudes and progress of the clients. Training schemes should therefore be designed to allow choice of materials, rates of progress, and interaction styles that are well suited to the personal traits of the client.
THREE STUDIES OF ANALYTIC AND SYNTHETIC TRAINING
In this final section, we will briefly discuss three studies concerned with the effects of analytic and synthetic training on speech perception (Alcántara, Cowan, Blamey, & Clark, 1990; Rubinstein & Boothroyd, 1987; Walden et al., 1981). These three studies are of interest because they illustrate the diversity of effects that can arise from training studies under different experimental conditions.
In an auditory training study (Rubinstein & Boothroyd, 1987), 10 subjects received 8 hr of synthetic training on sentence perception and 10 subjects received 4 hr of synthetic training and 4 hr of analytic consonant recognition training. The CUNY Nonsense Syllable Test and the Revised Speech Perception in Noise Test (RSPIN) were used for evaluation. A significant post-training improvement was found for the high predictability items of the RSPIN test, but there was no significant improvement on low predictability items or on the nonsense syllables. The study incorporated no-training phases before and after the training phase, and demonstrated that the improvement in perception of high predictability sentences occurred during the training phase. There was no significant difference between the groups trained with the different methods.
In an earlier study (Walden et al., 1981), a standard 50-hr aural rehabilitation program was provided to three groups of hearing-impaired adults. Ten adults received an extra 7 hr of auditory consonant recognition training, 10 adults received an extra 7 hr of visual consonant training, and the third group of 15 adults received no extra training. The subjects who did not receive the extra analytic training did not improve on consonant recognition. All three groups improved on audio-visual sentence recognition, but the two groups receiving extra training improved significantly more than the other (control) group.
The third study (Alcántara, Cowan, Blamey, & Clark, 1990) was carried out with seven normally-hearing subjects in two groups using a multi-channel electrotactile speech processor and lipreading. One group received synthetic (S) training for 35 hr followed by 35 hr of combined analytic and synthetic (AS) training. The other group was trained in the two conditions in the opposite order. Evaluations were carried out before training, at the mid-point, and at the end of training.
Table 3
Mean Improvements for Seven Subjects Using the Analytic-Plus-Synthetic (AS) and Synthetic (S) Training Strategies in the Tactile-Plus-Lipreading (TL), Tactile (T), and Lipreading (L) Conditions
| Test | Condition | AS | S |
|--------------------|-----------|------|------|
| Vowels | TL | 17* | 26* |
| | T | 52** | 7 |
| | L | 13 | 20* |
| Consonants | TL | 40** | 17 |
| | T | 30** | 6 |
| | L | 24** | 9 |
| CNC Words | TL | 6 | 11** |
| | L | 3 | 4* |
| CID Sentences | TL | 15** | 17** |
| | L | 6 | 14** |
| Speech Tracking | TL | 16* | 30** |
| | L | 9* | 20** |
*Note.* Vowel, consonant, CNC Word, and CID Sentence score improvements are shown in percent correct. Improvements in speech tracking are in words per minute.
* indicates an improvement significantly greater than zero with $p < .05$; ** indicates the significance level is $p < .01$, using the Wilcoxon Matched-Pairs Signed-Rank Test.
The replicated within-subject design allowed each subject to act as their own control, gave each subject access to both training types, and balanced for order effects across groups. Table 3 shows the improvements observed averaged over the seven subjects. Both strategies produced significant improvements with a greater number of statistically significant improvements on the vowel and consonant tests arising during AS training, and a greater number of significant improvements on CNC Words, CID Sentences, and speech tracking during the S training strategy. Only vowels and consonants in the T condition showed a significant difference between the two training strategies ($p < .05$, Wilcoxon Signed Rank Test).
These three studies illustrate several points that have been made in this chapter. Firstly, improvements in some of the speech tests were found for both training strategies, but the patterns of improvements were different. This shows the necessity for effectiveness coefficients that depend on both the type of practice and the skill that is being evaluated. Secondly, the difficulty in demonstrating significant differences between training strategies is shown despite a considerable investment of time in training and the achievement of significant improvements on most tests. Thirdly, the three studies appear contradictory, but these differences may be reconciled if the details of the subjects are considered. Rubinstein
and Boothroyd (1987) found no significant effect of analytic training on nonsense syllable recognition, but both of the other studies did. This may have been a result of Rubinstein and Boothroyd's subjects being hearing aid users with relatively high mean aided speech recognition scores. The subjects in the study of Walden et al. (1981) were inexperienced hearing aid users with recently diagnosed hearing losses, and the subjects in the study of Alcántara, Cowan, Blamey, and Clark (1990) had not used the tactile processor before. Walden et al. found that extra analytic training improved performance on audio-visual sentence recognition, but neither of the other studies found a significant difference between synthetic training and a combination of analytic and synthetic training for sentence recognition. The critical difference among the studies may be that the analytic training was additional training in the case of Walden et al., whereas the analytic training replaced part of the synthetic training in the other two studies. These explanations of the different results can only be conjecture, and there is clearly a need for further research to determine more precisely the circumstances under which analytic training is most useful. All three studies suggested that synthetic training had a positive effect on the perception of sentences.
**SUMMARY**
Speech perception and communication are complex processes, affected by a multitude of factors. One factor is experience, and it is widely recognized that speech perception and communication can improve over time as a result of learning, like many other human activities. Auditory training provides experience that may be beneficial in increasing the information obtained by a person from their hearing. The mathematical model described above can be used to form and test hypotheses about the effects of training and other auditory experience on spoken communication skills. Other factors include the severity of the hearing loss, the hearing aid used, the environment, personal qualities of the client and clinician, the type of training, and the type of evaluation used. Despite a long history of clinical practice, the effects of these factors have been investigated in very few controlled studies. In special cases where the previous experience is severely limited, such as deaf children or adults using cochlear implants and tactile devices, training has an obvious role, but there has been little objective comparison of alternative training methods. One of the main reasons is the difficulty of carrying out definitive experiments that measure changes in performance over time in the presence of many confounding variables. The role of these variables may explain the apparently contradictory results that have been reported in the literature on auditory training for hearing aid users and the diverse points of view expressed by practicing clinicians. These controversies may be addressed rationally through careful research, or they may be resolved by default through social and economic pressures. If you were a person with impaired hearing, which would you prefer?
REFERENCES
Alcántara, J.I., Cowan, R.S.C., Blamey, P.J., & Clark, G.M. (1990). A comparison of two training strategies for speech recognition with an electrotactile speech processor. *Journal of Speech and Hearing Research, 33*, 195-204.
Alcántara, J.I., Whitford, L.A., Blamey, P.J., Cowan, R.S.C., & Clark, G.M. (1990). Speech feature recognition by profoundly hearing impaired children using a multiple-channel electrotactile speech processor and aided residual hearing. *Journal of the Acoustical Society of America, 88*, 1260-1273.
Alpiner, J.G., & McCarthy, P.A. (Eds.). (1987). *Rehabilitative audiology: Children and adults*. Baltimore: Williams and Wilkins.
Atkinson, R.C., Bower, G.H., & Crothers, E.J. (1965). *An introduction to mathematical learning theory*. New York: John Wiley and Sons.
Beckett, P., & Haggard, M.P. (1973). The psychoacoustical specification of "tone deafness." *Speech Perception, 2*, 17-22 (Dept of Psychology, Queen's University of Belfast).
Bell, F.H. (1978). *Teaching and learning mathematics (in secondary schools)* (Chapter 3, pp. 97-164). Dubuque, IA: Wm. C. Brown.
Binnie, C. (1977). Attitude changes following speechreading training. *Scandinavian Audiology, 6*, 13-19.
Blamey, P.J., & Cowan, R.S.C. (1992). The potential benefit and cost-effectiveness of tactile devices in comparison with cochlear implants. In I.R. Summers (Ed.), *Tactile aids for the hearing impaired* (pp. 187-217). London: Whurr Publishers.
Blamey, P.J., Pyman, B.C., Gordon, M., Clark, G.M., Brown, A.M., Dowell, R.C., & Hollow, R.D. (1992). Factors predicting postoperative sentence scores in postlinguistically deaf adult cochlear implant patients. *Annals of Otology, Rhinology, and Laryngology, 101*, 342-348.
Bode, D.L., & Oyer, H.J. (1970). Auditory training and speech discrimination. *Journal of Speech and Hearing Research, 13*, 839-855.
Boothroyd, A. (1968). Developments in speech audiometry. *Sound, 2*, 3-10.
Brooks, P.L., & Frost, B.J. (1983). Evaluation of a tactile vocoder for word recognition. *Journal of the Acoustical Society of America, 74*, 34-39.
Brown, A.M., Dowell, R.C., Martin, L.F., & Mecklenburg, D.J. (1990). Training of communication skills in implanted deaf adults. In G.M. Clark, Y.C. Tong, & J.F. Patrick (Eds.), *Cochlear prostheses* (pp. 181-192). Edinburgh: Churchill Livingstone.
Brown, R. (1977). Introduction. In C.E. Snow & C.A. Ferguson (Eds.), *Talking to children. Language input and acquisition*. Cambridge: Cambridge University Press.
Carhart, R. (1960). Auditory training. In H. Davis (Ed.), *Hearing and deafness*. New York: Holt, Rinehart & Winston.
Cornett, R.O. (1967). Cued speech. *American Annals of the Deaf, 112*, 3-13.
Cowan, R.S.C., Alcántara, J.I., Blamey, P.J., & Clark, G.M. (1988). Preliminary evaluation of a multiple channel electrotactile speech processor. *Journal of the Acoustical Society of America, 83*, 2328-2338.
Crandell, C.C., Henoch, M.A., & Dunkerson, K.A. (1991). A review of speech perception and aging: Some implications for aural rehabilitation. *Journal of the Academy of Rehabilitative Audiology, 24*, 113-120.
Cuddy, L.L. (1968). Practice effects in the absolute judgment of pitch. *Journal of the Acoustical Society of America, 43*, 1069-1076.
Danz, A.D., & Binnie, C.A. (1983). Quantification of the effects of training the auditory-visual reception of connected speech. *Ear and Hearing, 4*, 146-151.
Dawson, P. (1989). *A training method to improve suprasegmental production and perception by profoundly deaf children*. Unpublished thesis, School of English and Linguistics, Macquarie University, Sydney, New South Wales, Australia.
De Filippo, C.L., & Scott, B.L. (1978). A method for training and evaluating the reception of ongoing speech. *Journal of the Acoustical Society of America, 63*, 1186-1192.
Dowell, R.C., Mecklenburg, D.J., & Clark, G.M. (1986). Speech recognition for 40 patients receiving multichannel cochlear implants. *Archives of Otolaryngology Head and Neck Surgery, 112*, 1054-1059.
Durity, R.P. (1982). Auditory training for severely hearing-impaired adults. In D.G. Sims, G.G. Walter, & R.L. Whitehead (Eds.), *Deafness and communication. Assessment and training* (pp. 296-311). Baltimore: Williams and Wilkins.
Erber, N.P. (1982). *Auditory training*. Washington, DC: A.G. Bell Association.
Erber, N.P. (1984). *QUEST?AR: QUESTions for Aural Rehabilitation*. Abbotsford, Victoria, Australia: Helographics.
Fitts, P.M., & Posner, M.I. (1967). *Human performance*. Belmont: Brooks/Cole Publishing Company.
Garstecki, D.C. (1981). Auditory-visual training paradigm for hearing-impaired adults. *Journal of the Academy of Rehabilitative Audiology, 14*, 223-238.
Goldstein, M.A. (1939). *The acoustic method for the training of the deaf and hard-of-hearing child*. St. Louis: Laryngoscope Press.
Guberina, P. (1972). *Case studies in the use of restricted bands of frequencies in auditory rehabilitation of the deaf* (Project OVR-Yugo-2-63). Zagreb, Yugoslavia: University of Zagreb.
Howes, D. (1957). On the relation between the intelligibility and frequency of occurrence of English words. *Journal of the Acoustical Society of America, 29*, 296-305.
Hull, C.L. (1943). *Principles of behavior: An introduction to behavior theory*. New York: Appleton-Century-Crofts.
Kalikow, D.N., Stevens, K.N., & Elliott, L.L. (1977). Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. *Journal of the Acoustical Society of America, 61*, 1337-1351.
Lansing, C.R., & Davis, J.M. (1988). Early versus delayed speech perception training for adult cochlear implant users: Initial results. *Journal of the Academy of Rehabilitative Audiology, 21*, 29-41.
Lenneberg, E.H. (1967). *The biological foundations of language*. New York: John Wiley & Sons.
Levitt, H. (1986). Hearing impairment and sensory aids: A tutorial review. *Journal of Rehabilitation Research and Development, 23*(1), xiii-xviii.
Liberman, A.M., Cooper, F.S., Shankweiler, D.P., & Studdert-Kennedy, M. (1967). Perception of the speech code. *Psychological Review, 74*, 431-461.
Liberman, A.M., & Mattingly, I.G. (1985). The motor theory of speech perception revised. *Cognition, 21*, 1-36.
Ling, D. (Ed.). (1984). *Early intervention for hearing-impaired children: Oral options*. San Diego: College-Hill Press.
Ling, D. (1991). *Foundations of spoken language for hearing-impaired children*. Washington, DC: A.G. Bell Association.
Lubinsky, J. (1986). Choosing aural rehabilitative directions: Suggestions from a model of information processing. *Journal of the Academy of Rehabilitative Audiology, 19*, 27-41.
Minas, S.C. (1980). Acquisition of a motor skill following guided mental practice. *Journal of Human Movement Studies, 6*, 127-141.
Newell, K.M., & Walter, C.B. (1981). Kinematic and kinetic parameters as information feedback in motor skill acquisition. *Journal of Human Movement Studies, 7*, 235-254.
Novelli-Olmstead, T., & Ling, D. (1984). Speech production and speech discrimination by hearing-impaired children. *The Volta Review, 86*, 72-80.
Rodel, M.J. (1985). Children with hearing impairment. In J. Katz (Ed.), *Handbook of clinical audiology* (3rd ed.) (pp. 1004-1016). Baltimore: Williams and Wilkins.
Ross, M. (1987). Aural rehabilitation revisited. *Journal of the Academy of Rehabilitative Audiology, 20*, 13-23.
Rubinstein, A., & Boothroyd, A. (1987). Effect of two approaches to auditory training on speech recognition by hearing-impaired adults. *Journal of Speech and Hearing Research, 30*, 153-160.
Sanders, D.A. (1971). *Aural rehabilitation*. Englewood Cliffs, NJ: Prentice-Hall.
Walden, B.E., Erdman, S.A., Montgomery, A.A., Schwartz, D.M., & Prosek, R.A. (1981). Some effects of training on speech recognition by hearing-impaired adults. *Journal of Speech and Hearing Research, 24*, 207-216.
The influence of sunspaces on the heating demand in living spaces – comparison of calculation methods according to ISO 13790
Magdalena Grudzińska
Department of Construction, Faculty of Civil Engineering and Architecture, Lublin University of Technology, Nadbystrzycka Street 40, 20-618 Lublin, Poland; e-mail: email@example.com; ORCID: 0000-0001-9271-8797
Funding: This work was supported by the Ministry of Science under grant number FN-9/2021.
Abstract: The calculation method presented in ISO 13790 was developed during the research project PASSYS. It aimed to work out a way of estimating energy demand while taking into account different passive solar systems. The standard includes two calculation methods for sunspaces – a full and a simplified one. They differ in their basic assumptions and in the treatment of solar gains in the sunspace and the conditioned rooms. There are some doubts about the interpretation of the equations presented in the standard, especially when it comes to modelling the distribution of solar radiation within the sunspace. The paper presents a discussion of the basic hypotheses applied in the full and simplified methods, together with the author’s suggested modifications to the ISO 13790 calculation methods. The modified methods satisfactorily predicted the functioning of the exemplary sunspaces with a smaller area of glazed partitions and higher radiation absorptivity of the casing, that is, spaces similar in terms of solar radiation utilisation to traditional living spaces. Phenomena typical for sunspaces with a high degree of glazing, such as the retransmission of reflected radiation, were not sufficiently taken into account by the calculation methods of the standard.
Keywords: passive sunspace systems, heating demand, ISO 13790, dynamic simulations
1. Introduction
The current version of the ISO 13790 [1] standard was accepted by the European Committee for Standardization in 2008, and its Polish version was approved one year later as the PN-EN ISO 13790 standard “Energy performance of buildings. Calculation of energy consumption for heating and cooling”. Although the regulation on the technical conditions to be met by buildings and their location [2] does not refer to the Polish version of the standard, the current methodology for preparing energy performance certificates for buildings [3] is based on it. The calculation method was developed during the PASSYS research
project [4], which aimed to work out a way of estimating the energy demand taking into account the influence of passive systems. The ISO 13790 norm was replaced by ISO 52016-1 [5] in 2017, which also presents calculation procedures for sunspaces. However, that method requires information about the direct and diffuse components of solar radiation, which may restrict its use in Poland because the climatic data included in publicly available Typical Meteorological Years contain only the total solar radiation incident on planes with different slopes.
Quasi-stationary methods of the standard [1] are based on the hypothesis of constant heat flow in building partitions. Calculations are performed by averaging climatic parameters for quite long periods (e.g. one month or the entire heating season). Phenomena related to the dynamic behaviour of the building, such as the accumulation and release of heat, are taken into account indirectly through the introduction of a heat gain utilization factor.
Appendix E of the standard [1] contains two calculation methods for non-conditioned sunspaces – a full and a simplified one, differing in fundamental concepts and in the way of taking into account solar gains in the sunspace and the adjacent heated rooms. The equations included in the standard are formulated in a quite generalised way, and the interpretation of the calculation methods raises certain doubts, especially in the area of modelling the distribution of solar radiation in the sunspace. These problems have been discussed many times in the subject literature [6], [7]; however, a comprehensive solution has yet to be found.
This article presents the author’s suggestions for modifying both methods and correcting the discrepancies in the calculation algorithms. Results obtained with the modified full and simplified quasi-steady-state methods were compared with the results of more accurate dynamic simulations with an hourly step, which made it possible to determine the recommended scope of use of each method.
2. ISO 13790 methods
Heating demand in a monthly, quasi-stationary method is set out as in Eq. 1:
\[ Q_{H,nd} = Q_{H,ht} - \eta_{H,gn} Q_{H,gn} \]
(1)
where: \( Q_{H,nd} \) – energy demand for heating during the time step [MJ], \( Q_{H,ht} \) – total heat transfer, including heat losses through building partitions, and heat losses for heating ventilation air [MJ], \( Q_{H,gn} \) – total heat gains, including internal gains and gains from solar radiation [MJ], \( \eta_{H,gn} \) – gain utilisation factor, calculated as in Eqs 2 and 3:
\[ \eta_{H,gn} = \frac{1 - \gamma_H^{a_H}}{1 - \gamma_H^{a_H+1}} \quad \text{for } \gamma_H > 0 \text{ and } \gamma_H \neq 1 \]
(2)
\[ \eta_{H,gn} = \frac{a_H}{a_H + 1} \quad \text{for } \gamma_H = 1 \]
(3)
\( \gamma_H \) – ratio of heat gains to heat losses (Eq. 4):
\[ \gamma_H = \frac{Q_{H,gn}}{Q_{H,ht}} \]
(4)
\( a_H \) – dimensionless numerical parameter (Eq. 5),
\[ a_H = a_{H,0} + \frac{\tau}{\tau_{H,0}} \]
(5)
\(a_{H,0}\) – reference numerical parameter set out at the national level; for the monthly method \(a_{H,0} = 1\) [–], \(\tau_{H,0}\) – reference time constant set out at the national level; for the monthly method \(\tau_{H,0} = 15\) hours, \(\tau\) – time constant of the building zone characterising its internal thermal inertia [hours] (Eq. 6),
\[
\tau = \frac{C_m}{3600 \, (H_v + H_w)}
\]
(6)
\(C_m\) – internal heat capacity of the zone [J/K], \(H_v\) – total coefficient of heat loss through partitions [W/K], \(H_w\) – total coefficient of heat loss through ventilation [W/K].
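For illustration, Eqs 1–6 can be evaluated directly; the sketch below is a minimal implementation with illustrative inputs, not values taken from any particular building.

```python
# Monthly quasi-stationary heating demand, Eqs 1-6 (illustrative inputs).
def heating_demand(Q_ht, Q_gn, C_m, H_v, H_w, a_H0=1.0, tau_H0=15.0):
    """Return Q_H,nd [MJ] for one month."""
    tau = C_m / (3600.0 * (H_v + H_w))        # time constant [h], Eq 6
    a_H = a_H0 + tau / tau_H0                 # numerical parameter, Eq 5
    gamma = Q_gn / Q_ht                       # gain/loss ratio, Eq 4
    if abs(gamma - 1.0) < 1e-9:
        eta = a_H / (a_H + 1.0)               # Eq 3
    else:
        eta = (1.0 - gamma**a_H) / (1.0 - gamma**(a_H + 1.0))  # Eq 2
    return Q_ht - eta * Q_gn                  # Eq 1

# Illustration: C_m = 30 MJ/K, H_v = 60 W/K, H_w = 15 W/K
print(round(heating_demand(Q_ht=2500.0, Q_gn=900.0,
                           C_m=30.0e6, H_v=60.0, H_w=15.0), 1))
```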
2.1. Heat gains from the sunspace – full method
The method presented in the standard can be used only for the evaluation of non-conditioned sunspaces, i.e. those that are neither heated nor cooled. Permanent openings allowing airflow through the partition wall between the living space and the sunspace are excluded; if such openings exist, the sunspace should be regarded as a part of the conditioned space.
Heat losses through the partition wall between the conditioned space and the sunspace are determined with the inclusion of the temperature reduction factor \(b_u < 1\), which reflects the fact that heat is transmitted to an environment of higher temperature than the external conditions. The temperature reduction factor is determined as follows (Eqs 7 and 8):
\[
b_u = \frac{\theta_{int,H} - \theta_s}{\theta_{int,H} - \theta_e} = \frac{H_{se}}{H_{is} + H_{se}}
\]
(7)
\[
\theta_s = \frac{\theta_{int,H} H_{is} + \theta_e H_{se}}{H_{is} + H_{se}}
\]
(8)
where: \(\theta_{int,H}\) – set-point temperature for heating in the living space [°C], \(\theta_e\) – average external temperature in the given calculation step [°C], \(\theta_s\) – average internal temperature of the sunspace in the given calculation step [°C], \(H_{is}\) – coefficient of heat transfer through the partition between the living space and sunspace [W/K], \(H_{se}\) – coefficient of heat transfer through sunspace casing to the outside [W/K].
In the calculation of the coefficient \(b_u\) (Eq. 7), only the heat transfer through the partition and the casing of the sunspace is taken into account; the influence of solar gains on the sunspace temperature \(\theta_s\) is not. This is compensated for by including the indirect gains from the sunspace \(Q_{si}\) in the energy balance of the living spaces.
Heat gains in the building’s heated zone obtained through a sunspace \(Q_{ss}\) [MJ] are treated as a sum of direct \(Q_{sd}\) and indirect \(Q_{si}\) gains (Eq. 9):
\[
Q_{ss} = Q_{sd} + Q_{si}
\]
(9)
Direct gains reach the conditioned zone through the partition wall between the sunspace and the living space. These gains are derived from multiple transmissions (first through the sunspace glazing, and then through windows or doors in the partition wall) or from radiation absorbed on the partition’s surface. Direct gains are determined by ISO 13790 as (Eq. 10):
\[
Q_{sd} = F_{sh,e} \left(1 - F_{F,e}\right) g_s \left(\left(1 - F_{F,w}\right) g_w A_w + \alpha_p A_p \frac{H_{p,tot}}{H_{p,e}}\right) I_p t
\]
(10)
where: $F_{sh,e}$ – reduction factor taking into account shading of the sunspace by external obstacles (buildings, trees, hills, elements of the same building) [–] (Eq. 11):
$$F_{sh,e} = F_{hor} F_{ov} F_{fin} \quad (11)$$
$F_{hor}$ – reduction factor for shading from the horizon [–], $F_{ov}$ – reduction factor for shading from overhangs [–], $F_{fin}$ – reduction factor for shading from side fins [–], $F_{F,e}$ – frame area fraction of the sunspace outer glazing [–], $F_{F,w}$ – frame area fraction of the window in the partition wall [–], $g_s$ – total solar energy transmittance of the sunspace glazing [–], $g_w$ – total solar energy transmittance of the window glazing in the partition wall [–], $A_w$ – window area of the partition wall [m$^2$], $A_p$ – opaque area of the partition wall [m$^2$], $\alpha_p$ – absorptivity of the partition wall [–], $H_{p,tot}$ – heat transfer coefficient from the internal environment through the opaque part of the partition wall and the sunspace to the external environment [W/K], $H_{p,e}$ – heat transfer coefficient from the absorbing (external) surface of the partition wall through the sunspace to the external environment [W/K], $I_p$ – solar irradiance of the partition wall in a given calculation step [W/m$^2$], $t$ – duration of the calculation step [Ms].
Indirect gains are released to the air in sunspace volume through convection, coming from the energy absorbed on the surface of the casing. They are treated as gains derived from non-conditioned space with the temperature reduction factor $(1 - b_u)$. They are calculated by summing up the gains from each opaque absorption area in the volume of the sunspace and subtracting the gains transmitted by conduction directly through the partition wall, included in $Q_{sd}$ (Eq. 12):
$$Q_{si} = (1 - b_u) F_{sh,e} (1 - F_{F,e}) g_s \sum_j (I_j \alpha_j A_j) - F_{sh,e} (1 - F_{F,e}) g_s \alpha_p A_p \frac{H_{p,tot}}{H_{p,e}} I_p t \quad (12)$$
where (the remaining nomenclature as above): $b_u$ – temperature reduction factor in the given month [–], $I_j$ – solar irradiance on the “j” opaque interior surface of the sunspace in a given time step [W/m$^2$], $\alpha_j$ – absorptivity of the “j” opaque interior surface of the sunspace [–], $A_j$ – area of the “j” opaque interior surface of the sunspace [m$^2$].
2.2. Heat gains from the sunspace – simplified method
On the national level, the use of the simplified method is allowed, subject to the following modifications:
- in the living space, solar gains from the sunspace are ignored – direct gains “supplied” by opaque and glazed parts of the partition wall or indirect gains from the sunspace casing are not included in the heat balance,
- these gains are included as a substitute, through the use of the temperature reduction factor $b_u^*$ while calculating heat transmission from the heated space to the sunspace; it is assumed then that temperature in the sunspace $\theta_s^*$ is the result of not only inflow and outflow of heat through the casing (as in the full method), but also of solar gains in its volume (Eqs 13 and 14):
$$b_u^* = \frac{\theta_{int,H} - \theta_s^*}{\theta_{int,H} - \theta_e} \neq b_u = \frac{H_{se}}{H_{se} + H_{is}} \quad (13)$$
$$\theta_s^* = \frac{\Phi_u + \theta_{int,H} H_{is} + \theta_e H_{se}}{H_{is} + H_{se}} \quad (14)$$
where: $\Phi_u$ – average solar gains in sunspace volume in the calculation step [W].
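The difference between the two reduction factors can be seen numerically; the sketch below evaluates Eqs 7–8 and 13–14 side by side for purely illustrative coefficients.

```python
# b_u (full method, Eqs 7-8) versus b_u* (simplified method, Eqs 13-14).
# All numerical inputs are illustrative assumptions.
theta_int_H, theta_e = 20.0, 2.0   # set-point and mean external temp [degC]
H_is, H_se = 25.0, 70.0            # heat transfer coefficients [W/K]
Phi_u = 300.0                      # mean solar gains in the sunspace [W]

b_u = H_se / (H_is + H_se)                                                # Eq 7
theta_s = (theta_int_H*H_is + theta_e*H_se) / (H_is + H_se)               # Eq 8
theta_s_star = (Phi_u + theta_int_H*H_is + theta_e*H_se) / (H_is + H_se)  # Eq 14
b_u_star = (theta_int_H - theta_s_star) / (theta_int_H - theta_e)         # Eq 13
print(round(b_u, 3), round(theta_s, 1), round(theta_s_star, 1), round(b_u_star, 3))
# Solar gains raise the sunspace temperature, so b_u* < b_u and the
# transmission losses from the living space are correspondingly smaller.
```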
2.3. Proposed modifications of both methods
The abovementioned methods require some commentary because the equations provided in the standard are not entirely consistent. Firstly, in Eqs 10 and 12 the multiplier $F_{sh,e}(1-F_{F,e})g_s$ relates to the exterior casing of the sunspace. It should therefore be applied to the intensity of the radiation incident on the external casing, not to the intensity of the radiation reaching the partition wall $I_p$ or the interior part of the casing $I_j$. Secondly, the subtraction of the term representing direct gains through the casing in Eq. 12 means that these gains are not taken into account in the calculations at all (it cancels with the analogous term in Eq. 10). The subtracted term should also be multiplied by $(1-b_u)$, which can be physically interpreted as diminishing the indirect gains from the partition wall by the part conducted directly to the interior of the living space. Such notation appeared in the draft of the standard made available by CEN in 2007 for comments before the final version was published [8]. Thirdly, the multiplier $t$, the duration of the calculation step, occurs in Eqs 10 and 12 with the component $I_p$ but is omitted with the component $I_j$. Moreover, the standard does not specify how the radiation intensity inside the sunspace should be determined.
Taking the above into account, it is proposed that the gains $Q_{sd}$ and $Q_{si}$ be calculated as follows (Eqs 15–19):
$$Q_{sd} = \left( \left(1 - F_{F,w}\right) g_w A_w + \alpha_p A_p \frac{H_{p,tot}}{H_{p,e}} \right) I_p t$$ \hspace{1cm} (15)
where (remaining nomenclature is as above):
$$I_p = f_p \frac{1}{A_w + A_p} \sum_k F_{sh,ek} A_{sol,k} I_{sol,k}$$ \hspace{1cm} (16)
$f_p$ – the fraction of the solar radiation transmitted into the sunspace that is incident on the partition wall [–], $k$ – index of the collecting (glazed) surfaces of the external casing of the sunspace facing a given direction, $F_{sh,ek}$ – shading reduction factor of the collecting surface “$k$” of the sunspace, connected with external obstacles [–], $A_{sol,k}$ – effective area of the sunspace’s collecting surface “$k$” [m$^2$] (Eq. 17), $I_{sol,k}$ – intensity of solar radiation on surface “$k$” of the sunspace’s exterior casing [W/m$^2$],
$$A_{sol,k} = \left(1 - F_{F,sk}\right) g_{sk} A_{sk}$$ \hspace{1cm} (17)
$F_{F,sk}$ – frame area fraction of the sunspace external glazing in the surface “$k$” [–], $g_{sk}$ – total solar energy transmittance of the sunspace glazing on plane “$k$” [–], $A_{sk}$ – the area of external glazing of the sunspace in surface “$k$” [m$^2$].
$$Q_{si} = (1-b_u) \left( \sum_j \left(I_j \alpha_j A_j\right) - \alpha_p A_p \frac{H_{p,tot}}{H_{p,e}} I_p \right) t$$ \hspace{1cm} (18)
where (the remaining nomenclature is as above):
$$I_j = f_j \frac{1}{A_j} \sum_k F_{sh,ek} A_{sol,k} I_{sol,k}$$ \hspace{1cm} (19)
$f_j$ – the fraction of the solar radiation transmitted into the sunspace that is incident on surface “$j$” of its internal walls [–].
Suggestions for the calculation of the coefficients $f_p$ and $f_j$ are presented in Section 4.
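To make the proposed algorithm concrete, the sketch below evaluates Eqs 15–19 for a sunspace reduced to one glazed plane and two opaque interior surfaces (the partition wall and the floor). All numerical inputs, including the distribution coefficients $f_p$ and $f_j$, are illustrative assumptions.

```python
# Modified direct (Eq 15) and indirect (Eq 18) gains; illustrative inputs.
# H_ratio stands for H_p,tot / H_p,e.
def monthly_gains(I_sol, A_sol, F_sh, f_p, f_j, A_w, A_p, A_floor,
                  g_w, F_Fw, alpha_p, alpha_floor, H_ratio, b_u, t_Ms):
    # Radiation transmitted into the sunspace [W], summed over glazed planes k
    Phi_in = sum(F * A * I for F, A, I in zip(F_sh, A_sol, I_sol))
    I_p = f_p * Phi_in / (A_w + A_p)      # Eq 16: irradiance of partition wall
    I_j = f_j * Phi_in / A_floor          # Eq 19: irradiance of the floor
    # Eq 15: direct gains through the window and the opaque partition wall [MJ]
    Q_sd = ((1.0 - F_Fw) * g_w * A_w + alpha_p * A_p * H_ratio) * I_p * t_Ms
    # Eq 18: indirect gains (all opaque surfaces, minus the conducted part) [MJ]
    absorbed = I_j * alpha_floor * A_floor + I_p * alpha_p * A_p
    Q_si = (1.0 - b_u) * (absorbed - alpha_p * A_p * H_ratio * I_p) * t_Ms
    return Q_sd, Q_si

# One south-facing glazed plane: A_sol = 6 m2, monthly mean I = 80 W/m2
print(monthly_gains(I_sol=[80.0], A_sol=[6.0], F_sh=[0.9], f_p=0.5, f_j=0.3,
                    A_w=2.0, A_p=6.0, A_floor=5.0, g_w=0.63, F_Fw=0.3,
                    alpha_p=0.5, alpha_floor=0.5, H_ratio=0.1, b_u=0.7,
                    t_Ms=2.6))
```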
3. Dynamic simulations
More complex simulation methods are used to carry out computer calculations. The calculation step applied here is much shorter than in quasi-stationary methods – it can be, for example, one hour or several minutes. This makes it possible to treat heat exchange processes that depend on temperature changes and exposure to solar radiation as discrete dynamic processes [9], [10]. Dynamic simulations can also be used to validate less accurate procedures (such as quasi-stationary methods) [11], [12].
Requirements concerning the use of commonly available simulation tools for modelling sunspace systems, formulated on the basis of various research works ([13] – [15], among others), were summarised in [16]. The main requirements that computer programmes should meet to correctly calculate the solar gains in rooms with a high degree of glazing are as follows:
- the capability of defining the actual geometry of the room as well as glazed elements, considering their dimensions, placement in partitions, and orientation in terms of the direction they are facing,
- detailed analysis of the solar radiation reaching the walls of the living space, taking into account the division into direct and diffuse components, together with sufficiently accurate modelling of the radiation incident on tilted surfaces (e.g. using models that account for the anisotropy of diffuse radiation),
- description of the radiation transmitted into the living spaces that considers the actual beam path through the glazing; distributing the direct radiation incident on individual internal partitions by means of weighted proportionality coefficients (taking into account the surface area and optical features of the partition, i.e. its ability to absorb and reflect radiation) or by the configuration factors used for modelling radiative heat exchange is not sufficient,
- the capability of taking into account radiative heat exchange with the sky.
In this work, calculations were performed with the BSim simulation program, which meets the above requirements [17]. The algorithms of the program are based on the control volume method, in which building construction elements and air zones are represented by nodal points with specified physical properties, such as density, conductivity, and heat capacity. For each of the air zones there is a balance equation that takes into account the heat flux flowing through the casing, the transmission of solar radiation by transparent elements, and the heat fluxes generated by installation systems and carried by ventilation, infiltration, or interzonal mixing of air. Processes that are continuous in time are modelled via division into time steps of finite duration, usually lasting up to 1 hour.
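For orientation, a schematic single-zone balance of the kind just described could be advanced in explicit one-hour steps as below; this is a generic sketch with illustrative values, not BSim's actual algorithm.

```python
# Generic explicit one-hour update of a zone air heat balance (illustrative).
def step_zone_temp(T_zone, T_ext, H_env, C_zone, Q_solar, Q_systems, Q_vent):
    """Advance the zone air temperature [degC] by one hour."""
    dQ = H_env * (T_ext - T_zone) + Q_solar + Q_systems + Q_vent  # net flux [W]
    return T_zone + dQ * 3600.0 / C_zone      # C_zone: heat capacity [J/K]

T = 20.0
for hour in range(24):
    T = step_zone_temp(T, T_ext=2.0, H_env=75.0, C_zone=5.0e6,
                       Q_solar=200.0, Q_systems=0.0, Q_vent=-100.0)
print(round(T, 1))  # cools toward the balance point of losses and gains
```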
The user’s own data (e.g. from measurements) can be input into the program as climatic data, as can data representing typical meteorological years prepared under the procedures binding in a given country. The essential input parameters include the air temperature, the intensity of the direct and diffuse solar radiation, and the relative air humidity. Data regarding the direction and speed of the wind may also be desired, especially if more detailed modelling of the natural exchange of air is planned. In this research, a Typical Meteorological Year for
Warsaw created under the procedures described in [18], available on the https://dane.gov.pl website, was used.
4. Comparison of the presented calculation methods
The energy demand obtained for an exemplary living space adjacent to a sunspace, calculated with the proposed modified algorithms of the full and simplified methods, is presented below. The results were compared with dynamic simulations of the same living space arrangements, carried out with assumptions as close as possible to those of the steady-state methods.
The living space has two exterior walls – the wall facing south is adjacent to the sunspace (a glazed balcony), and the full, eastern wall is exposed to the external air (Fig. 1). The insulating properties of the partitions are quite high, corresponding to constructions built after 2014 (Tab. 1). Apart from solar gains in the living space, internal gains of 3.0 W/m² (according to Appendix G of ISO 13790) were assumed. The air exchange rate in the room equals 0.5 h⁻¹, and the air is supplied from the outside to meet the standard’s requirement [1] of no infiltration between the sunspace and the conditioned space.

**Fig. 1.** A diagram of the living space and the sunspace in BSim program
Table 1. Thermal parameters of the partitions of the living space and the sunspace

| Type of space | $U$ – full part [W/(m²·K)] | $U$ – window joinery [W/(m²·K)] | $g$ – glazing [–] |
|---------------|----------------------------|---------------------------------|-------------------|
| Living space | 0.24 | 1.20 – 1.23 | 0.63 |
| Sunspace | 0.50 | 1.66 – 1.69 | 0.62 |
All the balcony walls are glazed (Fig. 1). Two types of glazing were analysed:
- on the entire height of the balcony – variant 1,
- above the height of 1.1 m, with a full casing below – variant 2.
Absorptivity of the interior surfaces of the sunspace was assumed to be equal to 0.2, 0.5, or 0.8. In dynamic simulations, radiation losses caused by the retransmission to the outside were taken into account, which is consistent with the physical characteristics of these phenomena.
In the full method of ISO 13790, it is assumed that the solar gains in the conditioned space are derived from the radiation absorbed on the surfaces of the full (opaque) partitions of the sunspace or transmitted through the glazing in the partition wall, which makes them dependent on the optical properties of the surfaces. This means that only the radiation dose reaching a given surface before the first reflection is used, and the remaining part of the radiation is lost. Phenomena related to multiple reflections within the sunspace are omitted, which causes the air temperature in the sunspace to be underestimated.
In the calculations following the simplified method, the retransmission of radiation to the outside of the sunspace was omitted. Such a hypothesis is adopted in the subject literature [6] as consistent with the general methodology of the standard; it compensates for the fact that the simplified method diminishes the effects of exposure to solar radiation by omitting the solar gains transmitted through the glazing of the partition wall of the living space.
In reality, the radiation on individual surfaces of the sunspace is not identical. Because of the sun’s movement in the sky, it can be expected that the intensity of the direct radiation will be the highest on the partition wall and the floor. Accurate analytical methods determine these values by tracing the path of the sun’s rays (“ray tracing”), as in [14], [19].
The distribution of diffuse solar radiation incident on the given surface can be determined in several ways:
- by assuming that radiation division is proportional to surface absorptivity and size; it is the simplest method described in the literature [6, 14, 15],
- using view factors, determining which part of radiation derived from one surface reaches the other surface, depending on their location and geometry; these factors are available in [20], for example.
In the example, the first method was used, deriving the factors $f_p$ and $f_j$ from the general formula (Eq. 21):
$$f = \frac{\alpha A}{\sum_n (1 - \rho_n) A_n} \quad (21)$$
where: $n$ – the number of opaque internal surfaces of the sunspace, $\alpha$ – absorptivity of a surface [–], $\rho$ – reflectivity of a surface [–], $A$ – surface area [m$^2$].
These factors were used to divide the total radiation, i.e. the sum of the direct and diffuse radiation. This is not quite physically correct; however, the simplification was accepted because the ISO 13790 methodology does not assume the division of radiation into individual components. This assumption understates the direct solar gains (through the partition wall) in the living space, and is therefore on the “safe” side.
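A minimal sketch of this distribution rule (Eq. 21) for illustrative surface areas and absorptivities, with $\alpha = 1 - \rho$ for the opaque surfaces:

```python
# Proportional distribution coefficients f (Eq. 21); illustrative surfaces.
surfaces = {                     # name: (area [m2], absorptivity [-])
    "partition wall": (8.0, 0.5),
    "floor": (5.0, 0.5),
}
total = sum(A * a for A, a in surfaces.values())   # denominator of Eq. 21
f = {name: A * a / total for name, (A, a) in surfaces.items()}
print(f)   # f_p (partition wall) and f_j (floor); the values sum to 1
```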
In the comparison of the chosen calculation methods below, the basic parameters characterising the functioning of the system are presented: the air temperature in the sunspace and the energy demand in the living space (Tabs 2 and 3).
Table 2. Air temperature [°C] in the sunspace during the heating season, SD – dynamic simulations, ISO p – full method, ISO u – simplified method
| | IX | X | XI | XII | I | II | III | IV | V | MAPE [%] |
|-------|------|------|------|------|------|------|------|------|------|------|
| **α = 0.2** | | | | | | | | | | |
| SD | 17.9 | 12.5 | 7.2 | 5.1 | 4.2 | 4.8 | 10.2 | 12.2 | 18.0 | |
| ISO p | 14.1 | 10.4 | 6.1 | 4.4 | 2.7 | 3.0 | 7.3 | 8.8 | 13.6 | 24.6 |
| ISO u | 37.1 | 24.6 | 13.3 | 10.2 | 12.6 | 15.0 | 26.8 | 34.8 | 46.7 | 145.5 |
| SD | 20.8 | 14.3 | 8.1 | 5.7 | 5.3 | 6.0 | 12.5 | 14.8 | 21.5 | |
| **α = 0.5** | | | | | | | | | | |
| SD | 19.2 | 13.6 | 8.4 | 6.3 | 5.6 | 6.2 | 11.6 | 13.7 | 19.3 | |
| ISO p | 14.2 | 10.4 | 6.2 | 4.5 | 2.8 | 3.1 | 7.4 | 8.9 | 13.7 | 34.0 |
| ISO u | 27.3 | 18.6 | 10.3 | 7.8 | 8.5 | 10.0 | 18.5 | 23.7 | 32.6 | 48.8 |
| SD | 21.1 | 14.8 | 9.0 | 6.7 | 6.3 | 7.0 | 13.2 | 15.4 | 21.6 | |
| **α = 0.8** | | | | | | | | | | |
| SD | 19.2 | 13.6 | 8.4 | 6.3 | 5.6 | 6.2 | 11.6 | 13.7 | 19.3 | |
| ISO p | 14.2 | 10.4 | 6.2 | 4.5 | 2.8 | 3.1 | 7.4 | 8.9 | 13.7 | 34.0 |
| ISO u | 27.3 | 18.6 | 10.3 | 7.8 | 8.5 | 10.0 | 18.5 | 23.7 | 32.6 | 48.8 |
| SD | 22.0 | 15.3 | 9.2 | 6.8 | 6.6 | 7.4 | 13.8 | 16.1 | 22.6 | |
Table 3. Heating demand in the living space [kWh] during the heating season, SD – dynamic simulations, ISO p – full method, ISO u – simplified method
| | IX | X | XI | XII | I | II | III | IV | V | Sum [kWh] | Change to SD* [%] |
|-------|------|------|------|------|------|------|------|------|------|-------|------|
| **α = 0.2** | | | | | | | | | | | |
| SD | 1.3 | 62.8 | 135.4 | 169.5 | 177.1 | 155.5 | 96.2 | 66.3 | 18.2 | 882.3 | |
| ISO p | 0.4 | 49.3 | 139.8 | 176.3 | 183.5 | 154.1 | 73.4 | 26.6 | 0.1 | 803.4 | |
| ISO u | 0.0 | 35.1 | 133.0 | 170.6 | 173.9 | 143.4 | 54.0 | 1.6 | 0.0 | 711.6 | |
| SD | 0.0 | 36.0 | 124.5 | 161.4 | 163.7 | 140.7 | 66.8 | 36.0 | 0.4 | 729.4 | |
| **α = 0.5** | | | | | | | | | | | |
| SD | 1.3 | 62.8 | 135.4 | 169.5 | 177.1 | 155.5 | 96.2 | 66.3 | 18.2 | 882.3 | |
| ISO p | 0.4 | 49.3 | 139.8 | 176.3 | 183.5 | 154.1 | 73.4 | 26.6 | 0.1 | 803.4 | |
| ISO u | 0.0 | 35.1 | 133.0 | 170.6 | 173.9 | 143.4 | 54.0 | 1.6 | 0.0 | 711.6 | |
| SD | 0.0 | 22.7 | 117.7 | 156.4 | 155.5 | 131.6 | 49.1 | 24.0 | 0.0 | 657.1 | |
| **α = 0.8** | | | | | | | | | | | |
| SD | 1.3 | 62.8 | 135.4 | 169.5 | 177.1 | 155.5 | 96.2 | 66.3 | 18.2 | 882.3 | |
| ISO p | 0.4 | 49.3 | 139.8 | 176.3 | 183.5 | 154.1 | 73.4 | 26.6 | 0.1 | 803.4 | |
| ISO u | 0.0 | 35.1 | 133.0 | 170.6 | 173.9 | 143.4 | 54.0 | 1.6 | 0.0 | 711.6 | |
| SD | 0.0 | 36.0 | 124.5 | 161.4 | 163.7 | 140.7 | 66.8 | 36.0 | 0.4 | 729.4 | |
* The change of heating demand in the entire heating season under the ISO method compared to dynamic simulation results.
Fig. 2. Air temperature in the sunspace during the heating season: a) variant 1, $\alpha = 0.5$, b) variant 2, $\alpha = 0.5$
Fig. 3. Heating demand in the living space during the heating season: a) variant 1, $\alpha = 0.5$, b) variant 2, $\alpha = 0.5$
In the ISO 13790 methods, in both the full and the simplified version, the absorptivity of the interior surfaces of the sunspace does not influence its interior temperature. This temperature depends only on the inflow and outflow of heat through transmission (full method), or is derived from the solar gains through the glazing and the thermal features of the construction (simplified method). As a result of these assumptions, the full method understates, and the simplified method quite significantly overstates, the interior temperature, which is especially noticeable in the spring and autumn months (Fig. 2). The results of the dynamic simulations show the interior temperature increasing with surface absorptivity; it takes values intermediate between the results of the full and simplified methods, which can thus be regarded as a kind of lower and upper limit of the actual inside temperature.
The courses of the average monthly temperature and of the heating demand in the subsequent months of the heating season were compared with the results of the dynamic simulations by determining the Mean Absolute Percentage Error, MAPE (Eq. 22):
$$MAPE = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{P_i - S_i}{S_i} \right| \qquad (22)$$
where: \( n \) – number of forecast values, \( P_i \) – forecast value (according to ISO 13790), \( S_i \) – accurate value (according to dynamic simulations).
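A minimal numerical check of Eq. (22) is sketched below, using the SD and ISO p series for α = 0.2 from Table 2; it reproduces the tabulated MAPE of 24.6%.

```python
# A minimal sketch of the MAPE of Eq. (22), applied to the monthly sunspace
# temperatures for alpha = 0.2 from Table 2 (full ISO method vs simulation).

def mape(forecast, accurate):
    """Mean Absolute Percentage Error [%] of forecasts vs accurate values."""
    n = len(forecast)
    return 100.0 / n * sum(abs((p - s) / s) for p, s in zip(forecast, accurate))

sd    = [17.9, 12.5, 7.2, 5.1, 4.2, 4.8, 10.2, 12.2, 18.0]   # dynamic simulation
iso_p = [14.1, 10.4, 6.1, 4.4, 2.7, 3.0, 7.3, 8.8, 13.6]     # full ISO method
print(f"MAPE = {mape(iso_p, sd):.1f} %")                     # 24.6 %, as in Table 2
```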
If MAPE > 15% (which was the case in all instances), the forecasts are regarded as inaccurate and should not be accepted in the analysis of the phenomenon [21]. This does not disqualify the methods of the standard, because they are only expected to produce a seasonal heating demand close to that of the more accurate calculations.
By assumption, the quasi-stationary ISO 13790 methods should be "on the safe side", overstating the seasonal heating demand compared with the hourly-step calculations, with the full method (which is more accurate) producing a lower heating demand than the simplified one. Such regularity is noticeable only in two calculation cases – variant 2 with \( \alpha = 0.5 \) and 0.8 (Fig. 3, Tab. 3). In these instances the differences between the full method and the simulations are 15.5% and 7.1%, and between the simplified method and the simulations 20.8% and 27.9%. This level of approximation can be regarded as satisfactory in engineering calculations.
The results obtained for the lowest surface absorptivity (in casing variant 2) raise some doubts as to whether both methods of the standard [1] map the physical processes correctly, even though their differences from the dynamic simulations are, in the worst case, close to 17%. If the absorptivity of the sunspace casing is low, the full ISO method gives the highest heating demand, which results from combining smaller indirect solar gains with an incomplete account of the buffer effect of the sunspace, caused by the lowered inside temperature and indirect gains. In this variant the simplified ISO method, which overstates the buffer effect of the sunspace, turned out to be closer to the dynamic simulations.
Modelling of solar spaces with a high degree of glazing (variant 1) according to the ISO 13790 standard must be regarded as unsatisfactory. The simplified method overstates the energy gains when large glazed areas are involved, as a result of omitting radiation retransmission; the significance of this phenomenon diminishes only for the highest surface absorptivities. In the full method, in turn, omitting the retransmission overstates the direct and indirect gains, lowering the heating demand, which is particularly noticeable as the absorptivity increases.
5. Summary
Summing up, the ISO 13790 methods (with the proposed modifications taken into account) made it possible to predict satisfactorily the functioning of the exemplary sunspace with a smaller area of glazed partitions and a higher radiation absorptivity inside the casing, that is, a space similar in terms of solar radiation utilisation to traditional living spaces. Phenomena typical of sunspaces with a high degree of glazing, such as the retransmission of reflected radiation, are not sufficiently taken into account in the calculation methods of the standard. This results in larger discrepancies for the sunspace glazed on all surfaces and for a high reflectivity inside the casing.
It is important to remember that the above analyses were carried out for a specific radiation distribution in the sunspace. A better estimate of surface irradiation could improve the accuracy of the calculations. However, a detailed analysis of the radiation path goes beyond the scope of the engineering calculations for which the ISO 13790 methods are intended.
Among the presented calculation methods, dynamic simulation is the tool that allows the largest number of factors determining the functioning of a sunspace to be taken into account, primarily:
• the spatial nature of solar radiation,
• optical properties of the glazing as a function of the angle of incidence,
• radiation retransmission due to reflections in the sunspace,
• varied surface absorptivity,
• ventilation of the sunspace and airflow between the sunspace and the conditioned room.
It is therefore the method with the greatest research potential, provided it is used deliberately and, where possible, validated under the actual operating conditions of the objects examined.
References
[1] ISO 13790:2008 “Energy performance of buildings. Calculation of energy consumption for heating and cooling”, International Organization for Standardization, Geneva, 2008.
[2] Announcement of the Minister of Infrastructure and Development of 17 July 2015 on the announcement of the uniform text of the ordinance of the Minister of Infrastructure on technical conditions to be met by buildings and their location, Journal of Laws of 2015, No. 0, item 1422.
[3] Regulation of the Minister of Infrastructure and Development of 27 February 2015 on the methodology for determining the energy performance of a building or part of a building and energy performance certificates, Journal of Laws of 2015, item 376.
[4] Bourdeau L., Buscarlet C.: “PASSYS, Final Report of the Simplified Design Tool Subgroup”, Commission of the European Communities, Directorate-General XII, Brussels 1989.
[5] ISO 52016-1:2017 “Energy performance of buildings. Energy needs for heating and cooling, internal temperatures and sensible and latent heat loads. Part 1: Calculation procedures”, International Organization for Standardization, Geneva, 2017.
[6] Leenknecht S., Saelens D.: “Comparison between simplified and dynamic calculation of highly glazed spaces”, in: Proceedings of the 1st Central European Symposium on Building Physics, Cracow – Lodz, September 2010, pp. 335-342.
[7] Passerini F., Albatici R., Frattari A.: “Quasi-steady state calculation method for energy contribution of sunspaces: a proposal for the European standard improvement”, in: Proceedings of Building Simulation Applications BSA 2013, 1st IBPSA Italy Conference, Bozen-Bolzano, Italy, pp. 141–150. Available: http://www.ibpsa.org/proceedings/BSA2013/15.pdf [Accessed: 18 Feb 2017]
[8] ISO/FDIS 13790:2006(E) “Energy performance of buildings. Calculation of energy use for space heating and cooling”, Draft for comments by CEN and ISO WG. Available: www.cres.gr/green-building/PDF/prend/set3/WL_14_TC-draft-ISO13790_2006-07-10.pdf [Accessed: 23 Feb 2017]
[9] Gawin D., Kossecka E. (ed.), *Typowy rok meteorologiczny do symulacji wymiany ciepła i masy w budynkach*. Lodz University of Technology, Łódź 2002.
[10] Clarke J.A., *Energy Simulation in Building Design*. Butterworth-Heinemann, Oxford 2001.
[11] Jokisalo J., Kurnitski J.: “Performance of EN ISO 13790 utilisation factor heat demand calculation method in a cold climate”, *Energy and Buildings*, 39 (2007), pp. 236-247. https://doi.org/10.1016/j.enbuild.2006.06.007
[12] Kokogiannakis G., Strachan P., Clarke J.: “Comparison of the simplified methods of the ISO 13790 Standard and detailed modelling programs in a regulatory context”, *Journal of Building Performance Simulation*, 1 (2008), pp. 209-219. https://doi.org/10.1080/19401490802509388
[13] Wall M.: “Climate and energy use in glazed spaces”, Report TABK-96/1009, Lund University, Department of Building Science, Lund 1996.
[14] Roux J.J., Teodosiu C., Covalet D., Chareille R.: “Validation of a glazed space simulation model using full-scale experimental data”, *Energy and Buildings*, 36 (2004), pp. 557-565. https://doi.org/10.1016/j.enbuild.2004.01.030
[15] Oliveti G., De Simone M., Ruffolo S.: “Evaluation of the absorption coefficient for solar radiation in sunspaces and windowed rooms”, *Solar energy*, 82 (2008), 212-219. https://doi.org/10.1016/j.solener.2007.07.009
[16] Hilliaho K., Lahdensivu J., Vinha J.: “Glazed space thermal simulation with IDA-ICE 4.61 software – suitability analysis with case study”, *Energy and Buildings*, 89 (2015), pp. 132-141. https://doi.org/10.1016/j.enbuild.2014.12.041
[17] Wittchen K.B., Johnsen K., Grau K., *BSim user’s guide*. Danish Building Research Institute, Horsholm 2004.
[18] Narowski P., „Dane klimatyczne do obliczeń energetycznych w budownictwie”, *Ciepłownictwo, Ogrzewnictwo, Wentylacja*, 11 (2006), pp. 22-27.
[19] Tiwari G.N., Gupta A., Gupta R.: “Evaluation of solar fraction on north partition wall for various shapes of solarium by Auto-Cad”, *Energy and Buildings*, 35 (2003), pp. 507-514. https://doi.org/10.1016/S0378-7788(02)00158-5
[20] Wiśniewski S., Wiśniewski T.S., *Wymiana ciepła*. Wydawnictwa Naukowo-Techniczne, Warszawa 1994.
[21] Rogalska M., *Wieloczynnikowe modele w prognozowaniu czasu procesów budowlanych*. Lublin University of Technology, Lublin 2016.
Formation of As₈ dimers in molecular solid-state arsenic
A.N. Rodionov a, Yu.F. Zhukovskii a, R.I. Kalendarev a,*, J.A. Eiduss b
a Institute of Solid State Physics, Kengaraga 8, Riga, LV-1063, Latvia
b University of Latvia, Rainis boulev. 19, Riga, LV-1098, Latvia
Received 26 August 1996; accepted 6 September 1996
Abstract
Molecular yellow arsenic (γ-As) consists of tetrahedral As₄ molecules that may be packed in various ways. All γ-As modifications, both disordered and crystalline, are metastable and undergo irreversible transitions (polymerization) under the action of heat and light, which cause a change in the nature of bonding in the molecules. Polymerization of γ-As leads to the formation of amorphous arsenic (α-As), possessing a continuous random network structure. DTA studies show that polymerization is an activated exothermic process. The value of its enthalpy agrees satisfactorily with an estimate of the excess energy of strained 'banana-shaped' bonds in an As₄ molecule.
Quantum chemical calculations, applying the semiempirical CNDO/BW approach, show that at the initial polymerization stage As₈ dimers are formed by cleavage of one bond in the tetrahedral As₄ molecule. Simulation of this process shows that formation of a stable As₈ cluster, possessing either D₂d or Oₕ symmetry, may take place if the dimerization reaction path possesses D₂d symmetry. In this case a pair of approaching molecules is positioned in a staggered 'face-to-face' configuration, which may be considered as a conformation with the six-membered chair-shaped ring dominating in the structure of polymerized α-As. The most favourable is found to be a molecular As₈ dimer with an eclipsed 'edge-to-edge' configuration (D₂ₕ symmetry). © 1997 Elsevier Science B.V.
Keywords: As₄ molecule; Yellow arsenic; Polymerization; DTA; CNDO/BW
1. Introduction
Owing to the s²p³ electron configuration of its valence shell, arsenic is capable of forming three σ-bonds in a near-orthogonal arrangement, which determines the possibility of forming tetrahedral As₄ molecules [1]. Condensation of As₄ vapour on substrates below 200 K produces arsenic as a molecular solid consisting of tetrahedral As₄ molecules that may exist in several crystalline and amorphous states. This is the yellow modification of arsenic (γ-As). It reveals photosensitivity over a wide spectral range. Under the effects of illumination (visible region), irradiation (UV, X-rays, particles) and heating, irreversible changes take place in the molecular structure of γ-As – bond switching between the As₄ molecules – with formation of a covalently bonded network of amorphous arsenic (α-As). These structural changes constitute the basis of the practically important image formation on γ-As layers [1]. However, unlike those on other allotropic forms of solid arsenic, existing data on γ-As are, so far, extremely poor [2,3].
The polymerization of γ-As under the above-mentioned effects is a multistage process. The starting stage of molecular cluster formation is of particular interest, namely the study of the reaction pathway of the interaction among tetrahedral As₄ molecules. It is natural to assume that the potential energy hypersurfaces of the reaction between a molecular pair will differ depending on the molecular packing in the different γ-As modifications. However, even in the disordered phase of γ-As, which exists up to 80 K, the As₄ molecules are oriented pairwise in a staggered 'face-to-face' configuration [3]. As a result of the reaction, cleavage and switching of bonds between the As₄ molecules are observed, with formation of intermediate molecular clusters. Molecules from the nearest surroundings may also be involved in this process, leading to the creation of the neighbouring, thermodynamically more stable phase in the bulk of γ-As [4].
Recently, we have studied experimentally and theoretically various molecular modifications of γ-As and their structural transformations [1,5–7]. In the present work both DTA measurements of pure yellow arsenic and CNDO/BW quantum chemical calculations of different stable As₈ configurations, as well as possible transition pathways between them, have been continued for further elucidation of the mechanism of γ-As polymerization.
2. Method details
2.1. DTA measurements
The existence of various allotropic forms of yellow arsenic suggested an investigation of the thermodynamics of the process. For this purpose a special differential microcalorimeter of the Calvet type was constructed [8] and adapted for investigating heat phenomena in thin deposited layers at low temperatures. Our DTA device had a maximum sensitivity of 6 μW with respect to heat emission within the 90–420 K range. All the measurements were carried out at a heating rate of 1 K min⁻¹. Yellow arsenic layers were deposited in a separate vacuum cryostat (~10⁻⁵ torr) in the dark. A more detailed description of our DTA experiments was given earlier [4–6].
The γ-As layers obtained showed good adhesion to the substrate surface and did not contain inclusions of the nonmolecular arsenic which had served as the vapour source. The samples were monitored by means of the DTA curve: the amount of heat released in exo- and endothermic processes must be proportional to the mass of the sample. Crack formation on γ-As films could sometimes be observed at temperatures exceeding 280 K, in the form of 'noise' on the declining part of the exothermic DTA curve. The films started to show cracks during the transition after more than half of the sample had polymerized.
Irradiation of the γ-As layers obtained was carried out inside the cryostat in nitrogen atmosphere. For this purpose an X-ray tube with a Cu anode was used, for the Kα radiation of which the layer thickness for half attenuation exceeds 30 μm. Such radiation interacts with the layer material uniformly throughout the whole bulk. Illumination of the γ-As samples by strongly absorbed light with ~2.5 eV photon energy was effected in the process of their preparation. As a source a xenon tube was used, the IR and red part of its emission spectrum being filtered off by means of a constant-flow 10% CuSO₄ solution. The total illumination dose exceeded by several orders of magnitude the value necessary for producing a photographic effect in γ-As [6].
2.2. CNDO/BW calculations
Details of our semiempirical study of equilibrium molecular clusters of arsenic have been described elsewhere [7]. All the calculations were performed for the singlet states of various As₈ cluster configurations in an sp basis of the valence electrons. The following scheme of quantum chemical modelling is implemented in our original version of the CNDO/BW code:
- preliminary calibration of the two-centre $\alpha_{XY}$ and $\beta_{XY}$ parameters for the corresponding X–Y bond within small test molecules, according to their equilibrium geometry and binding energy;
- optimization of both the possible structures of the system under study and the transition trajectories between them, using semiempirical calculations of the total energy for a successively varying geometry of the system according to the method of cyclic coordinate descent (see the sketch after this list);
- simulation of various types of intramolecular vibration for the equilibrium structures, using the synchronous scanning of a number of independent internal coordinates.
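A minimal sketch of the cyclic coordinate descent named above is given below; the quadratic toy energy merely stands in for a CNDO/BW total-energy evaluation, and the step size and sweep count are illustrative.

```python
# A minimal sketch of cyclic coordinate descent: one internal coordinate is
# varied at a time, keeping downhill moves, and the cycle is repeated.

def toy_energy(coords):
    """Placeholder for a total-energy evaluation at given internal coordinates."""
    x, y = coords
    return (x - 1.4) ** 2 + 2.0 * (y - 0.7) ** 2 + 0.5 * x * y

def cyclic_coordinate_descent(energy, coords, step=0.01, sweeps=100):
    coords = list(coords)
    for _ in range(sweeps):
        for i in range(len(coords)):         # cycle through the coordinates
            best, improved = energy(coords), True
            while improved:                  # crude line search along axis i
                improved = False
                for delta in (step, -step):
                    trial = coords[:]
                    trial[i] += delta
                    if energy(trial) < best:
                        coords, best, improved = trial, energy(trial), True
                        break
    return coords, energy(coords)

print(cyclic_coordinate_descent(toy_energy, [0.0, 0.0]))
```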
As the basic dimerization reaction pathway we used the $z$ axis passing through the outermost atoms of a pair of remote tetrahedral $\text{As}_4$ molecules positioned with respect to each other along the $z$ direction according to $D_{3d}$ symmetry. Five to seven pairs of internal coordinates were optimized independently (synchronously for both $\text{As}_4$ molecules) in the course of the approach of the two interacting molecules along this pathway. As a result of the pathway optimization, the binding energy curve $-E_{\text{bind}}(z)$ was calculated. In addition to the one-dimensional scanning along the $z$ coordinate, a similar procedure of two-dimensional scanning along a pair of coordinates was performed. Such a simulation allowed us to calculate various energy surfaces, for instance $-E_{\text{bind}}(z,\theta)$, which are more suitable for theoretical simulation of the reaction pathways.
3. Results and discussion
Our DTA measurements reveal two singularities for freshly prepared γ-As layers: an endothermic heat effect at 227 K (due to a reversible transition into the plastic phase, characterized by a disordered orientation of the $\text{As}_4$ molecules through rotation around the $z$ axis) and an irreversible exothermic effect with maximum heat release at 280 K (due to polymerization of the molecular arsenic). The applied X-ray dose produced structural changes in about 5% of the material, distributed uniformly throughout its bulk. Irradiation leads to a shift of the exothermic peak by $\sim 10$ K towards lower temperatures. The position of the endothermic peak remains practically unchanged, whilst the process itself proceeds against a background of additional heat release. In the case of sample illumination, the heat release is still more marked, as the polymerized amorphous fraction becomes comparable with the amount of the initial γ-As. The intensities of the exo- and endothermic peaks of γ-As allowed us to estimate its content as 30%. The peak of the polymerization exotherm also shifts towards lower temperatures. The relative stability of γ-As at low temperatures is determined by the existence of an energy barrier. According to the kinetic equation of Avrami [9] its height was found to be $0.48 \pm 0.07$ eV.
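For orientation, a minimal sketch of non-isothermal Avrami kinetics is given below. Only the barrier (0.48 eV) and the heating rate (1 K min⁻¹) come from the text; the prefactor and the Avrami exponent are illustrative assumptions, chosen so that half-conversion falls near the observed 280 K exotherm.

```python
import math

# A minimal sketch: polymerized fraction x(T) under linear heating, using a
# non-isothermal Avrami form x = 1 - exp(-K**N) with an Arrhenius rate.
# EA and BETA come from the text; K0 and N are illustrative assumptions.

KB = 8.617e-5        # Boltzmann constant [eV/K]
EA = 0.48            # activation barrier from the Avrami analysis [eV]
K0 = 5.0e5           # illustrative Arrhenius prefactor [1/s]
N = 3.0              # illustrative Avrami exponent
BETA = 1.0 / 60.0    # heating rate: 1 K/min expressed in K/s

T, dt, K = 200.0, 1.0, 0.0
while T < 320.0:
    K += K0 * math.exp(-EA / (KB * T)) * dt   # running integral of k(T(t)) dt
    T += BETA * dt
    x = 1.0 - math.exp(-K ** N)               # polymerized fraction
    if 0.49 < x < 0.51:
        print(f"half of the sample polymerized near T = {T:.0f} K")
        break
```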
We consider polymerization to be due to the relatively low stability of the tetrahedral $\text{As}_4$ molecules, owing to 'banana'-type strained bonds. This is a result of the $s^2p^3$ electron configuration, whose characteristic $p\sigma$-bond angles exceed $90^\circ$. The excess strain energy of the molecular tetrahedron is removed in the process of bond cleavage and switching between the interacting $\text{As}_4$ molecules. This results in a change in the type of structure elements: the tetrahedral $\text{As}_4$ molecules disappear, and instead trigonal pyramid-shaped $\text{AsAs}_3$ structural units appear, with different bond angles and lengths.
Such an interpretation is confirmed by our CNDO/BW calculations. Careful geometry optimization indicates the existence of the following equilibrium configurations of the $\text{As}_8$ cluster:
1. a pair of remote tetrahedral $\text{As}_4$ molecules (asymptotic plane on the energy hypersurface);
2. a staggered 'face-to-face' configuration possessing $D_{2d}$ symmetry (the corresponding local minimum on the hypersurface is higher by 183 kJ mol$^{-1}$ with respect to the $2\text{As}_4$ configuration);
3. a cubane (cube-like) configuration possessing $O_h$ symmetry (its local minimum is lower by only 14 kJ mol$^{-1}$ than the above-mentioned asymptotic plane of the hypersurface); the stability of the cubic configuration is probably an artefact of disregarding the repulsive effect of parallel bonds;
4. an eclipsed 'edge-to-edge' configuration possessing $D_{2h}$ symmetry (its minimum is the deepest among the configurations considered, lying 35 kJ mol$^{-1}$ lower than an asymptotic plane).
The $z$ axis is the basic pathway for the structural transitions $1 \rightarrow 2 \rightarrow 3$ within the framework of the $D_{2d}$-symmetry trajectory. Both cleavage and switching of a pair of nearest As–As bonds take place in the dimerization reaction $1 \rightarrow 2$. The transition pathway $2 \rightarrow 4$ is characterized by a more complicated trajectory. All the transitions are found to be low-activated processes.
References
[1] J. Eiduss, R. Kalendarev, A. Rodionov, A. Sazonov, and G. Chikvaidze, Phys. Stat. Sol. (b), 193 (1996) 3.
[2] G. Linck, Ber. Dtsch. Chem. Ges., 32 (1899) 888; 33 (1900) 2284.
[3] M.F. Daniel and A.J. Leadbetter, Phil. Mag. B, 44 (1981) 509.
[4] A. Rodionov, R. Kalendarev, A. Shendrik and Yu. Zakis, Phys. Stat. Sol. (a), 79 (1983) K151.
[5] J. Eiduss, G. Chikvaidze, R. Kalendarev, A. Rodionov and A. Sazonov, J. Mol. Struct., 348 (1995) 123.
[6] A. Rodionov, R. Kalendarev and J. Eiduss, J. Phys.: Condens. Matter 7 (1995) 5805.
[7] A. Rodionov, R. Kalendarev, J. Eiduss and Yu. Zhukovskii, J. Mol. Struct. 380 (1996) 257.
[8] A. Rodionov, R. Kalendarev, G. Chikvaidze and J. Eiduss, Nature, 281 (1979) 60.
[9] G.N. Greaves, S.R. Elliot and E.A. Davis, Adv. Phys., 28 (1979) 49.
CURRICULUM VITAE
David E. Pearson
Dean of the Campus
San Diego State University
Imperial Valley Campus
720 Heber Avenue
Calexico, CA 92231
| Office | (760) 768-5520 |
|--------|----------------|
| Fax | (760) 768-5568 |
| E-mail | firstname.lastname@example.org |
EDUCATION
Doctor of Philosophy, Sociology, Yale University; 1988.
Master of Arts, Sociology, Yale University; 1981.
Bachelor of Arts, University of Massachusetts at Amherst. Honors in Sociology, magna cum laude; 1979.
POSTDOCTORAL TRAINING
Fellow in International Security Studies, Mershon Center, The Ohio State University, Columbus, OH; 1988-1989.
ADMINISTRATIVE POSITIONS
Dean of the Campus, San Diego State University Imperial Valley Campus, Calexico, CA; 2010-Present. Responsible for the entire range of campus operations: academic programs and program development, faculty and staff, student services, library, physical plant and facilities, fundraising and community relations, and oversight of a $6.2 million operating budget. During my five years as Dean I have launched five major initiatives:
- **Imperial Valley University Partnership.** This collaboration between SDSU-IV, the Imperial County Office of Education, and Imperial Valley College, our local community college, represents a unique form of downward expansion that brought four-year university education to our region for the first time. Using a cohort structure and involving joint admission of students to the community college and SDSU-IV, the program was purposefully designed to increase enrollment, improve academic performance, student retention, and graduation rates while at the same time reducing costs. At the time of its establishment, the program was hailed by CSU Chancellor Charles Reed as a model for higher education in
California. Campus enrollment increased by some 20%. The program received the 2014 Example of Excelencia Honorable Mention at the Baccalaureate Level for advancing educational achievement for Hispanic students.
- **Center for Sustainable Energy**. Working with public and private partners, in less than four years we established an SDSU-recognized research center in the area of renewable energy, advancing the economic development of Imperial County, addressing the green energy goals of the State of California, and contributing to America’s energy independence. We currently have over $20 million in deployed assets, including the largest university-based solar field in California. We also have a Solar Learning Center, several proof-of-concept and R&D projects, plus a power plant simulator for academic and industry training. Our efforts thus far have brought in $4.2 million in grants and other revenues.
- **Project AMCO**. A partnership between SDSU-IV and the Mexico-based Advanced Methods Company has us serving as certifying agent for English language proficiency in Mexico, elsewhere in Latin America, and Europe. Project AMCO holds the promise of greater international recognition for our campus, plus enhanced opportunities for faculty research and student recruitment, internships, and employment. Most importantly, it will yield a vital revenue stream benefitting both the Imperial Valley and San Diego campuses. Employing a web-based platform for administering language evaluations across the globe, Project AMCO commenced operations in May 2014 with the testing of several thousand students in Mexico.
- **Borderlands Institute**. Established to promote scholarship and activities relevant to the Imperial, Mexicali, and Yuma Valleys, the Borderlands Institute realizes a long-standing dream of our faculty to advance our campus’s centrality in this fascinating bi-national community. In partnership with the Mexican Consulate, the Instituto de Cultura de Baja California, and several nearby Mexican universities, the Borderlands Institute now sponsors academic conferences and public lectures, hosts visiting scholars, coordinates exchange programs with Mexican institutions, and holds cultural events such as community forums, theatre, and concerts.
- **Small Business Development Center**. Our campus’s strategic plan calls for establishing industry partnerships that advance economic and social development in our region, and to develop and deploy new academic programs responsive to workforce needs. To those ends we recently secured a contract with the U.S. Small Business Administration to integrate the Imperial Valley Small Business Development Center into our campus operations. This partnership provides a mechanism for our students to secure internships, and ultimately jobs, helping ensure that the university is fully responsive to local employment needs and that it contributes to our region’s economic development.
As Dean, I am currently engaged in a number of other initiatives designed to increase revenues, expand enrollment, and enhance our campus’s range of programmatic offerings:
• Collaboration with officials from the County of Imperial and the Economic Development Administration to bring a new $6 million building to campus to house our renewable energy efforts.
• Partnering with the City of Calexico and the County of Imperial, to significantly expand our campus by bringing in a $35 million student housing and dining project. Integral to the plan is a workable methodology by which enrollment at the Imperial Valley Campus will increase by approximately fifty percent.
• Collaboration with relevant departments at the San Diego State University main campus to bring the Sustainability Studies and Bachelor of Social Work majors to my campus.
• Collaboration with the Universidad Autónoma de Baja California to bring renewable energy and other STEM classes, and eventually an engineering program, to the Imperial Valley Campus.
• Partnership between SDSU-IV and the Instituto Universitario de Investigación Ortega y Gasset in Mexico City to establish a joint masters-level program in the area of Educational Leadership.
Vice President for Partnership Affairs, the University of Texas at Brownsville/Texas Southmost College; 2006-2010. The role of Vice President included four major areas of responsibility:
• Head of the TSC District Office and point of contact with the Board of Trustees. Responsibilities included organizing Board meetings, Partnership Committee meetings, Board elections, and retreats and workshops; representing the institution at the Texas Association of Community Colleges; conducting Trustee elections; and active participation in government relations and fundraising.
• TSC Office of Finance. Responsibilities included managing a total annual budget of $50 million and an operating budget of $12 million; oversight of a portion of the institutional budget; and responsibility for tax rate setting, facilitating audits, issuing debt, and managing scholarships and endowments.
• UTB/TSC Office of Planning and Construction. Responsibilities included oversight of $170 million in new capital construction projects: a performing arts center, library, classroom building, biomedical research building, center for early childhood studies, and a recreation center. Also responsible for campus strategic planning, including student housing.
• TSC Office of Facilities Services. Responsibilities included oversight of the Physical Plant and its more than one hundred employees. Responsible for campus renovations, historical preservation, landscaping, grounds and building maintenance, and deferred maintenance. Also responsible for real estate purchases and the leasing and insuring of campus properties.
Administrative Fellow, UTB/TSC Leadership Program; 2006.
ACADEMIC POSITIONS
Professor of Sociology, Department of Sociology, San Diego State University, 2010-Present.
Faculty Advisory Council, University of Texas System, Austin, TX; 2005-2006 (FAC Executive Committee, 2006).
President, Academic Senate, UTB/TSC; 2004-2006.
Professor of Sociology, Behavioral Sciences Department, UTB/TSC; 2003-2010.
Founder and Director, Dual Language Certification Program, UTB/TSC; 2002-2004.
Associate Professor of Sociology, Behavioral Sciences Department, UTB/TSC; 1997-2003.
Assistant Professor of Sociology, Department of Anthropology and Sociology, Lafayette College, Easton, PA; 1989-1997.
Adjunct Professor, Southern Connecticut State University, Department of Sociology and Anthropology, New Haven, CT; 1988.
Instructor, Yale University, Department of Sociology, New Haven, CT; 1983-1984.
Teaching Assistant, Yale University, Department of Sociology; 1981-1983.
MILITARY SERVICE
United States Army, MOS 81B20, Illustrator and Construction Draftsman, ETS rank Sp/4, Honorable Discharge; 1971-1974.
SELECTED HONORS AND AWARDS
Featured in *San Diego State University 2013-14 Research Highlights*, “Contributing to America’s Energy Independence.”
President’s Top 25 Award, San Diego State University, May 2011.
Faculty Exceptional Merit, UTB/TSC; 2005.
Junior Faculty Competitive Leave, Lafayette College; 1993-1994.
Yale University Tuition Fellowship; 1979-1983.
University of Massachusetts at Amherst, graduation with University and Departmental Honors, *magna cum laude*; May 1979. Senior Honors Thesis: *Minimal Brain Dysfunction: Effects of Litter Composition on Body Weight, Activity, and Maze Performance in 6-Hydroxydopamine Treated and Normal Rat Pups.*
United States Army, Certificate of Commendation for Meritorious Service, Fort Devens, MA; 1973.
**BOOKS**
David E. Pearson, *Partnership Affairs: The Rise and Fall of a Community University*, (in progress).
David E. Pearson, *Flight of the Dove: KAL 007 and the Seduction of Conspiracy*, 353 pp. (out for review).
David E. Pearson, *The World Wide Military Command and Control System: Evolution and Effectiveness*, (Maxwell AFB, AL, Air University Press, 2000), 403 pp.
David E. Pearson, *KAL 007: The Cover-Up—Why the True Story Has Never Been Told*, (New York, Summit Books, 1987), 462 pp.
**DISSERTATION**
*The Betrayal of Truth and Trust by Government: Deception as Process and Practice*, (Charles Perrow, dissertation chair), 1988, Yale University, 567 pp.
**ORIGINAL ARTICLES**
David E. Pearson, 2013, “Future of the Imperial Valley Campus,” *PostScript*, 26:2, 4-5.
David E. Pearson, 1999, “Organization, Technology, and Ideology in Worldwide Command and Control,” *Defense Analysis*, 15:2, 197-214.
David E. Pearson, 1999, “Will of the People? What Life Would Be Like in Majoritarian America,” *Brownsville Herald*, (January 7), p. 4.
David E. Pearson, 1996, “Sociology and Biosociology,” *The American Sociologist*, 27:2 (Summer), 8-20.
David E. Pearson, 1996, “Laws to Suit Nation’s Majority Would Make Us All Minorities,” *The Morning Call*, (January 16), A7.
David E. Pearson, 1995, “Community and Sociology,” *Society*, 32:5 (July/August), 44-50.
David E. Pearson, 1993, “Post-Mass Culture,” *Society*, 30:5 (July/August), 17-22.
David E. Pearson and E. Martinez de la Fe, 1992, “KAL 007, El Ultimo Misterio de la Guerra Fría,” *El Mundo*, Madrid, (30 August), 12-13.
David E. Pearson, 1991, “Strategic Command and Control in the 1990s,” *Defense Analysis*, 7:4, 373-399.
James W. Hikins and David E. Pearson, 1990, “Review of Flights of Fancy, Flight of Doom: KAL 007 and Soviet-American Rhetoric,” *Journal of Communication*, 40:2 (Spring), 6-11.
David E. Pearson, 1989, “Organizational Problems in Worldwide Command and Control,” *Defense Analysis*, 5:3, 221-43.
David E. Pearson, 1989, “The Media and Government Deception,” *Propaganda Review*, 4 (Spring), 6-11.
David E. Pearson, 1987, “KAL 007: Questions That Won’t Go Away,” *The Nation*, (September 5), 181 and 196-200.
David E. Pearson, 1985, “The Fate of KE007: An Exchange,” *The New York Review of Books*, (September 26), 47-51.
David E. Pearson and John Keppel, 1985, “New Pieces in the Puzzle of Flight 007,” *The Nation*, (August 17/24), 104-10.
John Keppel and David E. Pearson, 1985, “A Misguided, Deadly Flight—KAL’s 007,” *Hartford Courant*, (September 2), C1-4.
David E. Pearson, 1984, “KAL 007: What the U.S. Knew and When We Knew It,” *The Nation*, (August 18/25), 105-24.
David E. Pearson, Lisa A. Raskin, Bennett A. Shaywitz, George M. Anderson, and Donald J. Cohen, 1984, “Radial Arm Maze Performance in Rats Following Neonatal Dopamine Depletion,” *Developmental Psychobiology*, 17:5, 505-17.
David E. Pearson, Martin H. Teicher, Bennett A. Shaywitz, Donald J. Cohen, J. Gerald Young, and George M. Anderson, 1980, “Environmental Influences on Body Weight and Behavior in Developing Rats After Neonatal 6-Hydroxydopamine,” *Science*, 209:4457 (August 8) 715-17.
Bennett A. Shaywitz and David E. Pearson, 1979, “Effects of Phenobarbital on Activity and Learning in 6-Hydroxydopamine Treated Rat Pups,” *Pharmacology, Biochemistry, and Behavior*, 9 (December), 173-79.
**COURSES TAUGHT**
**Undergraduate**

Alienation; American Communities; Class, Status, and Power; Complex Organizations; Conspiracy and the Cold War; Contemporary American Society; Cultural Anthropology; Introduction to Sociology; Mass Communications and Culture; Methods of Social Research; Modern America: Society, Culture, State; Organizations and Work; Organizing Nuclear Weapons; Qualitative Methods of Research; Quantitative Methods of Research; Religion in Society; Social Change; Social Problems; Social Theory: Classical; Social Theory: Contemporary; Sociology Senior Seminar; Sociology of Work; Special Topics; Statistics for the Social Sciences; Thesis; Urban Sociology

**Graduate**

Mass Communications and Culture; Pro-Seminar on Sociological Theory: Classical and Contemporary; Pro-Seminar on Sociological Theory: Sociobiology; Sociology of Education; Sociology of Liberalism and Conservatism; Thesis
**CONFERENCE PAPERS AND PRESENTATIONS**
Chair and Discussant, Session on Evolution and Culture, Southwestern Social Science Association, Las Vegas, NV, March 13, 2008.
“Sociology’s Biophobia,” International Society for Human Ethology, Montreal, Canada, August 9, 2002.
“Durkheim’s Dilemma: Solidarity and Conscience, Modernization and Media,” Theory Section, American Sociological Association, New York, NY; August 18, 1996.
“Sociology and Biosociology,” Section on Sociological Theory, Pennsylvania Sociological Society, Philadelphia, PA; October 21, 1995.
“Measures of Strategic Effectiveness,” Section on Command, Control, and Intelligence, International Security Studies Section, International Studies Association, Columbus, OH; November 9, 1990.
“Technology, Rationality, and the Arms Race: The Case of the Navy,” Section on the Sociology of Peace and War, American Sociological Association, Washington, DC; August 12, 1990.
“Strategic Command and Control in the 1990s,” Strategic Studies Section, International Studies Association, Washington, DC; April 13, 1990.
“Organizational Problems in Worldwide Command and Control,” Strategic Studies Section, International Studies Association, Washington, DC; November 3, 1988.
“Complexity, Coupling, and Nuclear War,” American Association for the Advancement of Science, New York, NY; May 25, 1984.
**SELECTED RECENT SERVICE AND ACTIVITIES**
Co-Principal Investigator, $1.7 million Jobs Innovation and Accelerator Challenge federal cluster grant (EDA, DOL, SBA); 2011-Present.
P-16 Council, Imperial County Office of Education; 2010-Present.
Academic Deans Council, SDSU; 2010-Present.
Executive Council, SDSU-IV; 2010-Present.
Imperial Valley University Partnership Advisory Board; 2010-Present.
SDSU-IV Strategic Plan *Building on Excellence* Committee, 2013.
Interagency Steering Committee; 2010-2014.
Diversity Committee, University Strategic Planning Initiative, SDSU; 2012.
Imperial Valley Arts Council; 2010-2012.
**PUBLIC LECTURES AND TALKS**
During my time as Dean of the Campus at SDSU-IV I have given a great many public talks, lectures, and university updates and status reports to numerous community and academic audiences. My most recent invited talk was “Leadership, Ethics, and Politics,” delivered on December 5, 2014, at the Instituto Universitario de Investigación Ortega y Gasset in Mexico City, Mexico. The bulk of the talk was given in Spanish. In previous years I gave many public talks and lectures, perhaps the most significant of which was as Keynote Speaker at the UTB/TSC Distinguished Lecture Series in October 2006. The talk was entitled “The Seduction of Conspiracy.”
**MEDIA APPEARANCES**
During my years as student, professor, and administrator I have made many dozens of appearances on both radio and television. Among my more notable appearances were:
- *Secrets of the Black Box*, a documentary about the Korean Air Lines Flight 007 case (Traveling Light Media), airing on The History Channel in December 2005.
- Moderator for the CBS televised U.S. Senate debate between candidates John Cornyn and Ron Kirk; October 24, 2002.
- *ABC Nightline*, May 1991.
- *NBC Today*, August 1984.
For seven years (1999-2006) I was host of *Society Under Fire*, a 30-minute radio talk show examining controversial social topics and community events, sponsored by UTB/TSC and broadcast on Public Radio 88 FM, including KMBH-FM 88.9 (Harlingen) and KHID-FM 88.1 (Edinburg). During my years as host, several hundred individual shows were produced.
**LANGUAGES**
Fluent in Spanish.
**REFERENCES**
Available upon request.
A video processing device and method receives data from a common data source, such as a frame buffer, and outputs first overlay information in a first color space from a first port and second overlay information in a second color space from a second port, to facilitate output of multiple overlay images in different color spaces from common memory through different ports. In one embodiment a bidirectional port is used to allow a set of common signal pads or a bus to function as a flexible bidirectional video data port. The bidirectional video data port includes, for example, a port containing a set of common signal pads; a first unidirectional output switch selectable to output, for example, graphics data in a first format over the port; a second unidirectional output switch, coupled to the common port, that is configurable to selectively output graphics and/or video data; and an input buffer, operatively coupled to a video capture engine and to the set of common signal pads, that receives input video data in a different format to facilitate operation as a flexible bidirectional video data port.
10 Claims, 6 Drawing Sheets
FIG. 1 (prior art, drawing sheet): block diagram of a video processing system showing a frame buffer, graphics and video memory readers, a capture engine and buffer, a select/serialize block, a unidirectional output switch, a set of common signal pads, a video encoder, an MPEG decompressor, a video decoder, an MPEG compressor, a TV output port (TVO), a DAC, and a flat panel.
FIG. 2 (drawing sheet, U.S. Patent 6,621,499 B1, Sheet 2 of 6): block diagram of the video processing device 200, showing the frame buffer 202, image readers 208 and 210, overlay image generators 204 and 206, ports 212 and 222, MPEG encoder 216, and RGB display 224.
FIG. 3 (drawing sheet): graphics data reader 302 and converter to Y,Cr,Cb 304, video data reader 306 and converter to RGB 308, multiplexors 312a–312d (with further multiplexors 320, 324, 328 and 332), the first keyer 204 feeding port 1, and the second keyer 206 feeding port 2.
FIG. 4 (drawing sheet): video capture engine 18 and buffer 16, frame buffer, graphics and video memory readers, overlay keyer, first and second unidirectional output switches, MPEG decompressor, video decoder, MPEG compressor, and DAC/TVO outputs; signal paths 402, 404, 405, 406 and 410 link the video encoder, the switches, the buffer and the capture engine, with ITU-R BT.656 formatting indicated on the port.
FIG. 5 (drawing sheet): video encoder, MPEG decompressor, video decoder, MPEG compressor, buffer, capture engine, palette RAM, graphics and video memory readers, RGB to Y,Cr,Cb converter, scaler, overlay keyer, Y,Cr,Cb to RGB converter, overlay keyer or alpha blender, DAC, and flat panel, together with the control signals and data paths between these components.
VIDEO PROCESSOR WITH MULTIPLE OVERLAY GENERATORS AND/OR FLEXIBLE BIDIRECTIONAL VIDEO DATA PORT
FIELD OF THE INVENTION
The invention relates generally to video processing devices and methods and more particularly to video processing devices and methods having a bidirectional port and/or multiple overlay image generators such as keyers or alpha blenders.
BACKGROUND OF THE INVENTION
Video processing devices, such as video graphics controllers and other video processing devices may be designed to facilitate the presentation of both graphic data and video data on a display device, such as a computer screen. For example, with multimedia applications, a computer user may be using a word processor while watching a movie. The video processor generates overlays so that the movie appears in a corner or window within the display screen simultaneously with text information from the word processor application. Complex format conversions, scaling, video decompression, and other processes may be necessary. In addition, video processing devices, such as graphics controller chips, may have multiple input/output ports to allow data to be transferred from or to various video encoders, digital decompression modules, digital-to-analog converters, flat panel displays, television output ports and many other peripheral blocks.
Some video processing circuits allow compatibility with older and newer video formats over a common bus or port. For example, FIG. 1 shows a block diagram of a conventional video graphics processor having a frame buffer 10 that stores both graphics data and video data. The frame buffer 10 may be one or more memory modules. In one direction, the frame buffer receives, through common port 12, video information 14 through a buffer 16 via a video capture engine 18, as known in the art. The video capture engine then stores the captured video in the frame buffer 10. In the other direction, through the common port 12, the graphics controller can output graphics information 20, obtained from graphics memory reader 22, to a data serializer 24 through a unidirectional output switch 26. The unidirectional output switch 26 may be, for example, a group of tri-state buffers controllable by control signal 28, by a host computer for example, to allow information either to flow out from the common port or to be received from the port through the buffer 16.
An image overlay generator 30 receives graphics information 20 and video information 32, obtained by video memory reader 31, and combines the data 33 for display, for example, on a television through a television out port 34, or may output the combined information 33 to a digital-to-analog converter 36, a flat panel display 38 or other suitable device, process or subprocess. A color space converter 40 converts, for example, video data that may be in Y,Cr,Cb format to the RGB format that can be accommodated by the image overlay generator 30. It is useful to reduce the number of color space converters, since the converters require integrated circuit space and absorb processing capabilities of the video graphics controller for each conversion.
Conventional graphics controllers may also include, for example, a palette RAM 42 that stores graphics data in a predefined format and, if desired, an unpacker that unpacks graphics data that has been stored in a predefined format in the frame buffer. The graphics controller outputs the palettized or unpacked information to a switch 46, which then allows the information to be sent to the serializer 24 or the image overlay generator 30. Graphics information is typically in an RGB color space format, whereas video data is typically in a (digital) Y,Cr,Cb color space. As such, a color space converter 52 may be used to convert RGB information from the video memory reader to Y,Cr,Cb information, which is passed through a switch 54 to a scaler 56. The scaler 56 may scale the video information to fit within a smaller or larger window within the display space, for example.
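As background, the conversion such color space converters perform can be sketched as below; full-range ITU-R BT.601 coefficients are used here for illustration, while a hardware converter may use studio-swing variants, clamping and fixed-point rounding.

```python
# A minimal sketch of RGB <-> Y,Cb,Cr conversion (full-range BT.601
# coefficients); hardware converters such as 40 and 52 may differ in range
# handling and rounding.

def rgb_to_ycbcr(r, g, b):
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

print(rgb_to_ycbcr(255, 0, 0))   # pure red: high Cr, low Cb (clamp to 0..255)
```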
A conventional video graphics controller may be connected through a common port to a video encoder 60, a video decompressor 62, such as an MPEG video decompressor, a video decoder 64 and a video compressor 66, such as an MPEG video compressor. The encoders and compressors are typically used to convert data flowing to and from the graphics controller into a suitable compressed or decompressed format for other devices, such as digital video discs (DVDs), other display devices and software applications. As shown by arrows 68a, 68b, 68c and 68d, the output from the video decoder may be passed directly to an MPEG compressor to be compressed for another subsystem or software application within a multimedia or video system. A control signal 28 is again used to control whether the decompressor or decoder is operational.
It becomes increasingly important to keep the size of graphics controllers and video processing devices small while still increasing the amount and variety of video and graphics processing capabilities. Conventional processors often add additional ports or pins to accommodate additional functionality. In addition, systems such as those shown in FIG. 1 typically do not provide the capability of outputting multiple overlays from a common data source. With the increasing number and types of different displays that may be coupled to a single graphics processing device, it would be desirable to allow multiple displays to show the same or, if desired, differing overlays from the same data source, such as frame buffer 10.
Consequently, a need exists for a video processing device and method that facilitates additional functionality over a common port and if desired, to provide additional overlay capability for multiple displays and/or peripheral modules to allow independent or dual processing of graphics and video overlay information.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a portion of the prior art video processing device.
FIG. 2 is a block diagram of a video processing device having multiple overlay image generators coupled to multiple ports in accordance with one embodiment of the invention.
FIG. 3 is a block diagram illustrating an alternative embodiment of a video processing device employing a plurality of overlay image generators in accordance with the invention.
FIG. 4 is a block diagram of one embodiment of a video processing device in accordance with the invention.
FIG. 5 is a block diagram of a video processing device in accordance with one embodiment of the invention.
FIG. 6 is a block diagram illustrating a video processing device of FIG. 5 showing in more detail a bidirectional flexible video port in accordance with the invention.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
Briefly, a video processing device and method receives data from a common data source, such as a frame buffer, and outputs first overlay information in a first color space from a first port and second overlay information in a second color space from a second port, to facilitate output of multiple overlay images in different color spaces from common memory through different ports. In one embodiment a bidirectional port is used to allow a set of common signal pads or a common bus to function as a flexible bidirectional video data port. The bidirectional video data port includes, for example, a port containing a set of common signal pads; a first unidirectional output switch selectable to output, for example, graphics data in a first format over the port; a second unidirectional output switch, coupled to the common port, that is configurable to selectively output graphics and/or video data over the same port; and an input buffer, operatively coupled to a video capture engine and to the common port, that receives input video data in a different format to facilitate operation as a flexible bidirectional video data port.
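The direction control implied by this arrangement can be modelled behaviourally as below; the class and mode names are illustrative, not the patent's, and the sketch only captures the rule that at most one source drives the shared pads at a time.

```python
# A minimal behavioural sketch of the flexible bidirectional port: two
# unidirectional output switches (tri-state drivers) and an input buffer share
# one set of signal pads; at most one driver is enabled at a time.

class BidirectionalPort:
    OUTPUT_GRAPHICS, OUTPUT_VIDEO, INPUT_CAPTURE = range(3)

    def __init__(self):
        self.mode = self.INPUT_CAPTURE      # both drivers tri-stated by default

    def set_mode(self, mode):
        self.mode = mode                    # e.g. driven by a host control signal

    def drive(self, graphics_word, video_word):
        """Value appearing on the common pads for the current mode."""
        if self.mode == self.OUTPUT_GRAPHICS:
            return graphics_word            # first output switch enabled
        if self.mode == self.OUTPUT_VIDEO:
            return video_word               # second output switch enabled
        return None                         # pads released; external device drives

    def capture(self, external_word, capture_buffer):
        if self.mode == self.INPUT_CAPTURE:
            capture_buffer.append(external_word)   # path to the capture engine

port = BidirectionalPort()
port.set_mode(BidirectionalPort.OUTPUT_VIDEO)
assert port.drive(0xA5, 0x3C) == 0x3C
```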
FIG. 2 shows a video processing device 200 having a frame buffer 202, a first overlay image generator 204, such as a keyer or alpha blender, and a second overlay image generator 206, such as a keyer or alpha blender. The video processing device 200 also includes a first image reader 208, such as a video data memory reader, and another image reader 210, such as a graphics data memory reader. The image readers 208 and 210 are operatively coupled between the frame buffer 202 and the first and second overlay image generators 204 and 206 to allow the image generators to receive data from the frame buffer. The first overlay image generator 204 may be, for example, a keyer designed to combine data in Y,Cr,Cb color space; it is coupled to receive data from the frame buffer 202 and to a first port 212, such as a bus or signal pads. The first overlay image generator 204 outputs overlay information 214 in a Y,Cr,Cb color space to the first port 212 for any suitable device, process or subprocess, such as an MPEG type encoder 216. The MPEG encoder may be any suitable MPEG-2 encoder as known in the art and outputs encoded video 218.
The second overlay image generator 206 outputs overlay information 220 in a different color space, such as RGB, to a different port 222 to facilitate output of overlay information from a common data source out another port of the video processing device. The other port 222 is used to output overlay information, such as in RGB format, to a display device 224.
Each of the first and second overlay image generators 204 and 206 receives data from both image readers 208 and 210. For example, image data such as graphics data 226 is received by both the first and second overlay image generators 204 and 206. Similarly, other image data to be overlaid with the graphics image data 226, such as video data 228, is received by both the first and second overlay image generators 204 and 206. Each of the image readers 208 and 210 obtains respective image data 230 and 232 from a common data source, such as the frame buffer 202. The frame buffer may include a plurality of memory circuits. The image data 230 may be, for example, video image data whereas the image data 232 may be graphics data. The image readers 208 and 210 and/or overlay generators 204 and 206 may perform any suitable color space conversion as necessary, so that the overlay image generators may suitably overlay graphics and video data or other combinations of information. The multiple overlay image generators output overlay images in different color spaces from common memory 202 through different ports 212 and 222. As such, the device may be used to output the same or different overlay information on multiple displays or to other processes or subprocesses within a video processing and/or display system. In addition, as previously indicated, each of the overlay image generators 204 and 206 may overlay differing image data so that differing overlays may be obtained from the same common source data (e.g., from buffer 202).
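As a purely behavioral illustration of what a keyer or alpha blender such as overlay image generator 204 or 206 does with the two image streams, the short Python sketch below mixes hypothetical video and graphics pixel arrays with an alpha value; the array contents and single-channel format are ours, not the patent's.

```python
# Per-pixel alpha blend: alpha = 1 shows graphics only, alpha = 0 video only.
import numpy as np

def alpha_blend(video, graphics, alpha):
    return alpha * graphics + (1.0 - alpha) * video

video = np.array([[10.0, 20.0], [30.0, 40.0]])        # hypothetical video pixels
graphics = np.array([[200.0, 200.0], [200.0, 200.0]])  # hypothetical graphics overlay
print(alpha_blend(video, graphics, alpha=0.25))
# [[57.5 65. ]
#  [72.5 80. ]]
```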
FIG. 3 shows another embodiment of a video processing device 300 having multiple overlay image generators 204 and 206 to facilitate multiple overlay image output in different color spaces from common memory through different ports. In this embodiment, the image readers 208 and 210 each include a color space converter. By way of example, image reader 208 may include a graphics data reader 302 which may output graphics data in an RGB color space, or other color space, to a color space converter 304 which converts RGB information to another color space, such as Y,Cr,Cb color space. Similarly, image reader 210 may include, for example, a video data reader 306 that obtains video data from the frame buffer 202 (FIG. 2) and outputs video data in Y,Cr,Cb format to a color space converter 308. The color space converter 308 converts the Y,Cr,Cb video data from video reader 306 to an RGB format or other suitable format. The video processing device 300 also includes a switching block 310 that may include, for example, a plurality of multiplexors 312a-312d. Multiplexors 312a and 312b are used to multiplex image information, such as graphics data and video data, to the first overlay image generator 204 for output to port 212. The multiplexors may be under control of a host computer or other controller. As shown, multiplexor 312a receives graphics data either in the form of converted Y,Cr,Cb data 314 or graphics data 316 in RGB format from the graphics data reader 302. Also, multiplexor 312c associated with the second overlay image generator 206 receives the same graphics data in multiple color space formats, such as graphics data 314 and graphics data 316.
Multiplexor 312b, also associated with the first overlay image generator 204, receives video data 320 from the video reader 306, such as in the color space format Y,Cr,Cb. The multiplexor 312b also receives, as input data, video data converted to a different color space, such as RGB data 322. As such, the multiplexors 312a and 312b may switch in any combination of color space graphics and image data to the overlay image generator 204 so that the overlay image generator 204 can output overlay images in different color spaces. Similarly, multiplexors 312c and 312d multiplex graphics data and video data in different color spaces to the second overlay image generator 206 so that the second overlay image generator 206 can output overlay images in different color spaces from common memory.
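The color space conversions performed by converters such as 304 and 308 are simple matrix operations. The sketch below shows one common BT.601-style full-range RGB to Y,Cb,Cr mapping as an assumption; the patent does not fix particular conversion coefficients, and hardware would additionally clip results to the 8-bit range.

```python
# One BT.601-style RGB -> Y,Cb,Cr mapping (full range, 8-bit, unclipped).
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + 0.564 * (b - y)   # 0.564 = 0.5 / (1 - 0.114)
    cr = 128 + 0.713 * (r - y)   # 0.713 = 0.5 / (1 - 0.299)
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))   # pure red -> approx (76.2, 85.0, 255.5)
```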
FIG. 4 shows another embodiment of a video processing device 400 that includes a flexible bidirectional video port 402. Although both overlay image generators 204 and 206 are shown, it will be recognized that only one of the overlay image generators need be used. As shown, the image reader 208, such as a graphics memory reader, reads graphics data from the frame buffer 202 for both overlay image generators 204 and 206. In this embodiment, the graphics memory reader reads graphics data for the first overlay image generator 204. The image reader 210, such as a video memory reader, is operatively coupled to the frame buffer to read video data from the frame buffer for at least one of the first and second overlay image generators. In this embodiment, the video memory reader generates video data for both overlay image generators.
The flexible bidirectional video data port 402 includes a first unidirectional output switch 404 that may be used, for example, to output palettized data for compatibility with VESA Feature Connector (VFC) modes, as known in the art. The unidirectional output switch 404 is selectable through a first control line 405 to output, for example, graphics data over the common port 406. The bidirectional video data port 402 also includes a second unidirectional output switch 408 which is operatively coupled to one of the image overlay generators, such as image overlay generator 204, and to the common port 406. The second unidirectional output switch 408 is configurable to selectively output either graphic data or video data, or both, from the overlay image generator over the common port 406. The format of the data may be, for example, ITU-656 formatted video. The bidirectional video data port 402 also includes an input buffer 16 coupled to the video capture engine 18 and to the common port 406 that receives input video data from the common port 406, such as, for example, video data in the form of ITU-656 formatted video. As such, the bidirectional video data port 402 facilitates a reduction of bus lines or signal pads by allowing the output of graphics data in one format and overlay data in a different format, and the input of video data, over the same port. As such, different data from a common frame buffer 202 may be output through the second unidirectional output switch 408 or the first unidirectional output switch 404. The second unidirectional output switch is selectable via a second control line 410 which may be controlled, for example, by a host computer to control the direction of data flow to and from the video processing device.
FIG. 5 shows in more detail the system of FIG. 4. As shown, the video processing device 500 also includes a color space converter 502 coupled between the graphics memory reader 22 and overlay image generator 204 to convert, for example, graphics data in one color space to graphics data in another color space, in the event, for example, that the overlay image generator 204 is only capable of accommodating one color space. In addition, a code adder 504 may multiplex additional signal lines for selection by the second unidirectional output switch 408 to allow the addition of, for example, horizontal synchronization, vertical synchronization, and other signals suitable to accommodate particular video signal formats, such as those specified in standard ITU-656.
FIG. 6 shows an example of the first and second unidirectional output switches 404 and 408 and the input buffer 16 coupled to the common port 406, which in this embodiment is a plurality of signal pads 600a–600n. The first unidirectional output switch 404 may include a plurality of tri-state buffers 602a–602n that are enabled by control line 405. Similarly, the second unidirectional output switch 408 may include a plurality of tri-state buffers 604a–604n that may be put in a tri-state mode or active mode by control line 410.
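As a behavioral sketch only (the patent describes hardware, not software), the Python model below captures the contract the control lines 405 and 410 must enforce on the shared pads: at most one tri-state driver is active at a time, and input capture happens with both drivers disabled. The class and method names are ours.

```python
# Behavioral model of the flexible bidirectional port 406 of FIG. 6.
class BidirectionalPort:
    def __init__(self):
        self.enable_404 = False  # control line 405: palettized/VFC output path
        self.enable_408 = False  # control line 410: overlay/ITU-656 output path
        self.pads = None         # shared signal pads 600a-600n

    def drive(self, data_404=None, data_408=None):
        # Only one tri-state driver may be active, or the bus would contend.
        assert not (self.enable_404 and self.enable_408), "bus contention"
        if self.enable_404:
            self.pads = data_404
        elif self.enable_408:
            self.pads = data_408
        # With both drivers tri-stated, the pads float for external drivers.

    def capture(self, external_data):
        # Input buffer path to the video capture engine: both drivers off.
        assert not (self.enable_404 or self.enable_408)
        self.pads = external_data
        return self.pads

port = BidirectionalPort()
port.enable_408 = True
port.drive(data_408=b"\x80\x10\x80\x10")  # e.g. ITU-656-style byte stream
print(port.pads)
```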
In operation, the video processing device receives data from a frame buffer and outputs first overlay information that is in a first color space from a first port and outputs second overlay information in a second color space from a second port to facilitate output of multiple overlay images in different color spaces from common memory through different ports (see, for example, FIG. 2). The method may also include reading graphics data from the frame buffer for use in generating the first and second overlay information and reading video data from the frame buffer for use in generating the first and second overlay information. Where conversion is necessary, the system converts graphic data in a first color space for one overlay image generator and converts video data in a second color space for a different overlay image generator. Data may be provided from a common port to at least one of a video encoder, a video decoder, a video compressor and a video decompressor.
As such, the system allows a bidirectional data port to receive decompressed digital video while efficiently passing decompressed video data out from the video processor through a common port, so that a host computer need not process video data, which would otherwise require a compression or decompression circuit and burden the host processor. As such, the video processing device may transfer decompressed or compressed video for compression or decompression by other applications. Among other differences, unlike prior art systems that typically only allowed the output of graphics data, the system allows the output of overlayed information, including video data, out a common port, and also offers the flexibility of having multiple image overlay generators to facilitate diverse overlay applications out differing ports.
It should be understood that the implementation of other variations and modifications of the invention in its various aspects will be apparent to those of ordinary skill in the art, and that the invention is not limited by the specific embodiments described. It is therefore contemplated to cover by the present invention, any and all modifications, variations, or equivalents that fall within the spirit and scope of the basic underlying principles disclosed and claimed herein.
What is claimed is:
1. A video processing device comprising:
- a first unidirectional output switch selectable to output at least graphic data over a common port;
- a second unidirectional output switch, operatively coupled to a common set of signal paths of the common port, configurable to selectively output at least one of graphic and video data;
- a first overlay image generator operatively coupled to the second unidirectional output switch wherein the overlay image generator receives graphics data from a graphics memory reader and video data from a video memory reader to facilitate output of overlayed graphics data and video data from the second unidirectional output switch;
- a second overlay image generator operatively coupled to receive data from the frame buffer and operatively coupled to output second overlay information from a port different from the common port; and
- an input buffer, operatively coupled to a video capture engine and to the common port, that receives input video data to facilitate operation of the common port as a flexible bi-directional video data port.
2. The video processing device of claim 1 further including an RGB to Y, Cr, Cb converter operatively coupled between the graphic memory reader and the first overlay image generator.
3. The video processing device of claim 2 wherein the second overlay image generator is operatively coupled to the graphics memory reader and the video memory reader and a peripheral port.
4. The video processing device of claim 1 wherein the first and second unidirectional output switches are comprised of tri-state buffers controllable by a host processing device.
5. A video processing device comprising:
- a frame buffer;
- a first overlay image generator, operatively coupled to receive data from the frame buffer and operatively
coupled to a first port, that outputs first overlay information in a first color space from the first port;
a second overlay image generator, operatively coupled to receive data from the frame buffer and operatively coupled to a second port, that outputs second overlay information in a second color space from the second port, to facilitate output of multiple overlay images in different color spaces from common memory through different ports;
a graphics memory reader operatively coupled to read graphics data from the frame buffer for at least one of the first and second overlay image generators;
a video memory reader operatively coupled to read video data from the frame buffer for at least one of the first and second overlay image generators; and
a common port comprising:
a first unidirectional output switch selectable to output at least graphic data over the first port;
a second unidirectional output switch, operatively coupled to the first overlay image generator and the first port, configurable to selectively output at least one of graphic and video data from the first overlay image generator over the first port; and
an input buffer, operatively coupled to a video capture engine and to the first port, that receives input video data from the first port.
6. The video processing device of claim 5 wherein the graphics memory reader is operatively coupled to read graphics data from the frame buffer for both the first and second overlay image generators, the video memory reader is operatively coupled to read video data from the frame buffer for both the first and second overlay image generators; and further including:
a first color space converter operatively coupled between the graphics memory reader and the first overlay image generator; and
a second color space converter operatively coupled between the video memory reader and the second overlay image generator.
7. The video processing device of claim 5 including at least one of a video encoder, a video decoder, a video compressor and a video decompressor operatively coupled to the first port.
8. A method for processing video comprising the steps of:
receiving data from a frame buffer;
outputting first overlay information in a first color space from a first port;
outputting second overlay information in a second color space from a second port, to facilitate output of multiple overlay images in different color spaces from a common memory through different ports;
reading graphics data from the frame buffer for use in generating the first and second overlay information;
reading video data from the frame buffer for use in generating the first and second overlay information; and
using a common port comprising:
a first unidirectional output switch selectable to output at least graphic data over the first port;
a second unidirectional output switch, operatively coupled to a first overlay image generator and the first port, configurable to selectively output at least one of graphic and video data from the first overlay image generator over the first port; and
an input buffer operatively coupled to a video capture engine and to the first port, that receives input video data from the first port.
9. The video processing method of claim 8 including: converting graphics data in the first color space for the first overlay image generator; and converting video data in the second color space for a second overlay image generator.
10. The video processing method of claim 8 including providing data from the first port to at least one of a video encoder, a video decoder, a video compressor and a video decompressor.
Definitive Detection of Orbital Angular Momentum States in Neutrons by Spin-polarized $^3$He
Terrence Jach$^{1,*}$ and John Vinson$^1$
$^1$Material Measurement Laboratory, National Institute of Standards and Technology,
100 Bureau Drive, Gaithersburg, MD 20899
Abstract
A standard method to detect thermal neutrons is the nuclear interaction $^3$He(n,p)$^3$H. The spin-dependence of this interaction is also the basis of a neutron spin-polarization filter using nuclear polarized $^3$He. We consider the corresponding interaction for neutrons placed in an intrinsic orbital angular momentum (OAM) state. We derive the relative polarization-dependent absorption cross-sections for neutrons in an $L = 1$ OAM state. The absorption of those neutrons results in compound states $J^\pi = 0^-, 1^-, \text{and } 2^-$. Varying the three available polarizations provides a test that an OAM neutron has been absorbed and probes which decay states are physically possible. Because the compound state has odd parity, we describe the energetically likely excited states of $^4$He after absorption. This provides a definitive method for detecting neutron OAM states and suggests that intrinsic OAM states offer the possibility to observe new physics, including anomalous cross-sections and new channels of radioactive decay.
Intrinsic Orbital Angular Momentum (OAM) states are quantum states in which the wave packet of a particle is given a helicity by retarding its phase progressively around its axis of travel. Observable intrinsic OAM states of photons and electrons have been convincingly created and demonstrated [1–4]. This has generated a great deal of interest in the possible creation and observation of OAM states of thermal neutrons. Several theoretical schemes and experimental methods have been reported [5–9]. While the experiments show effects compatible with neutron OAM states, they are unable to rule out non-OAM explanations of the observed data [10, 11]. The genuine creation of a neutron in an intrinsic quantum OAM state can only be demonstrated convincingly by a single-particle interaction that produces measurable quantum states.
One of the principal channels of detecting thermal neutrons is the reaction
\[ n + ^3\text{He} = p + ^3\text{H} + 764 \text{ keV}, \]
(1)
where the kinetic energies of the decay products are \( E_p = 573 \text{ keV} \) and \( E_{^3\text{H}} = 191 \text{ keV} \). The absorption cross-section is highly dependent on the angular momentum of the oriented nuclei. While the actual nuclear matrix elements (dependent only on \( J \)) are not easily determined, their dependence on angular momentum alignment, and therefore polarization, is readily separated out due to the Wigner-Eckart Theorem.
Cross-sections were derived for absorption into the singlet and triplet states by Rose [12], predicting strong dependence of the capture of neutrons by \( ^3\text{He} \) on their respective polarizations. The spin-dependence of oriented thermal neutrons in reaction with oriented \( ^3\text{He} \) was first measured by Passell and Schermer [13], who determined that the nuclear interaction was consistent with absorption occurring exclusively in the singlet state, \( J^\pi = 0^+ \).
In this paper we propose that the thermal capture of a spin-polarized neutron in an OAM state, absorbed by a spin-polarized \( ^3\text{He} \) nucleus, will provide definitive proof of the neutron OAM state. We derive the dependence of the absorption on the aligned angular momentum of a neutron OAM state (with \( L = 1 \)) and that of the \( ^3\text{He} \). We show that the resulting angular momentum dependent cross-sections, with \( J^\pi = 0^- \), \( 1^- \), and \( 2^- \), vary with the available polarizations in a manner that cannot be obtained in the case of ordinary neutrons. The odd parity and the energetics of the neutron absorption suggest which \( J \) states are likely to occur and what decay schemes will result.
We will discuss only the case for thermal neutrons. Thus the interaction of the neutron
with the $^3$He nucleus is purely s-wave in the scattering plane, although orbital angular momentum may be added perpendicular to that plane. An important characteristic of the derivations is to express the results in terms only of the spin-polarization $p$ of the neutrons, the polarization $P_L$ of the OAM states, and the spin-polarization $P_N$ of the $^3$He nuclei, assuming that these are the only parameters at our disposal in an experiment.
We note that in each case, the quantum state is determined at every step along the way. In other words, a neutron is placed in a specific spin state by a process, a specific amount of orbital angular momentum is added with a specific direction relative to the same axis by a device, and it interacts with $^3$He that has been placed into a specific oriented spin state relative to the same axis. The polarizations specify relative numbers of neutrons in specific (i.e. parallel and antiparallel) states. While it may be possible to create OAM states that are not parallel to the wavevector of the neutron [9], all polarizations here are regarded as helicities along the wavevector of the neutron.
Here we take the original calculation of Rose as a starting point to obtain the form of cross-sections that would be observed from spin-polarized neutrons [12]. Assume that we have a neutron with a spin wave function $\chi$ with angular momentum $S = 1/2$ and $m_S = \mu = \pm 1/2$. The $^3$He nucleus has a nuclear spin wave function $\psi$ with a spin $j_N$ where $m_N = \pm 1/2$. The compound nucleus in state $\Xi$ formed by the absorption of the neutron will have an angular momentum $j' = j_N \pm S$ with $m_{j'} = m_N + \mu$.
The cross-section depends on the distribution of spins,
$$\sigma = K(j') \sum_{m_N,\mu} p(m_N)p(\mu) |\langle \Xi_{j',m'} | \psi_{j_N,m_N} \chi_{\frac{1}{2},\mu} \rangle|^2,$$
(2)
where $K(j')$ is the squared nuclear part of the wave function that we will regard as a constant dependent only on $j'$. The quantities $p(m_N)$ and $p(\mu)$ are the probabilities that the spin alignments are $m_N$ and $\mu$.
The probability $p(\mu)$ is generally defined by
$$p(\mu = \pm 1/2) = \frac{p_\pm}{p_+ + p_-},$$
(3)
the probability that the neutron will be parallel (+) or antiparallel (−) to its direction of travel.
The neutron spin polarization will then be defined in the conventional manner by
$$p = \frac{p_+ - p_-}{p_+ + p_-}.$$
(4)
The nuclear spin polarization is defined by [12]
\[
P_N = \frac{1}{j} \sum_{m_N} m_N p(m_N).
\] (5)
Thus for $^3$He, the nuclear spin probability $p(m_N)$ and spin polarization $P_N$ are given by
\[
p(m_N) = \frac{P_{N\pm}}{P_{N+} + P_{N-}}; \quad P_N = \frac{P_{N+} - P_{N-}}{P_{N+} + P_{N-}}.
\] (6)
The cross-section then becomes
\[
\sigma = K(j') \sum_{m_N,\mu} p(m_N)p(\mu)|C(j', m'|j_N, m_N; 1/2, \mu)|^2,
\] (7)
where $\langle \Xi_{j', m'} | \psi_{j_N, m_N} \chi_{\frac{1}{2}, \mu} \rangle = C(j', m'|j_N, m_N; 1/2, \mu)$, the Clebsch-Gordan coefficient for $\mu = \pm 1/2$.
The cross-section (Eq. 7) can be evaluated for the triplet case $j' = j_N + \frac{1}{2}$ and the singlet case $j' = j_N - \frac{1}{2}$. After some algebra, the cross-section for the triplet case in terms of the polarizations becomes
\[
\sigma_1 = \frac{K(j' = 1)}{4}[3 + P_N p],
\] (8)
and the cross-section for the singlet case becomes
\[
\sigma_0 = \frac{K(j' = 0)}{4}[1 - P_N p].
\] (9)
If the neutrons and the $^3$He nuclei are both polarized and parallel such that $pP_N = 1$, then $\sigma_0 = 0$. Measurements subsequent to that of Passell and Schermer [13] have confirmed that the interaction of the neutron with the $^3$He nucleus is only through the singlet channel [14, 15]. This is the basis for using optically pumped $^3$He gas as a neutron spin-polarizing filter [16].
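As a consistency check on Eqs. (8) and (9), the polarization-weighted sum over squared Clebsch-Gordan coefficients in Eq. (7) can be evaluated symbolically. The short SymPy sketch below is an illustration assuming the standard Condon-Shortley phase convention; the function name `sigma` is ours. It reproduces both cross-sections up to the constants $K(j')$.

```python
# Symbolic check of Eqs. (8)-(9): weight |C(j',m'|1/2,m_N;1/2,mu)|^2 by the
# polarization probabilities p(m_N) = (1 + 2*m_N*P_N)/2, p(mu) = (1 + 2*mu*p)/2.
from sympy import Rational, symbols, simplify
from sympy.physics.quantum.cg import CG

p, PN = symbols('p P_N')
half = Rational(1, 2)

def sigma(jp):
    total = 0
    for mN in (half, -half):          # 3He spin projection
        for mu in (half, -half):      # neutron spin projection
            if abs(mN + mu) > jp:
                continue              # no compound state with |m'| > j'
            w = (1 + 2*mN*PN)/2 * (1 + 2*mu*p)/2
            total += w * CG(half, mN, half, mu, jp, mN + mu).doit()**2
    return simplify(total)

print(sigma(1))  # triplet: 3/4 + p*P_N/4, i.e. [3 + p*P_N]/4 as in Eq. (8)
print(sigma(0))  # singlet: 1/4 - p*P_N/4, i.e. [1 - p*P_N]/4 as in Eq. (9)
```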
We now extend this to the case of an orbital angular momentum state of the neutron. We limit ourselves to the case where a device will transform the neutrons into an OAM state of $j_L = 1$. Based on the definition of polarization (Eq. 5) and $m_L = \pm 1$, the OAM polarization is given by
\[
P_L = \frac{P_{L+} - P_{L-}}{P_{L+} + P_{L-}}.
\] (10)
We define a total neutron angular momentum state $\Phi$ made up of the neutron OAM state $\zeta_L$ and the neutron spin state $\chi_S$.
TABLE I. Possible OAM neutron states resulting from the control of the polarizations
| $m'$ | $m_L$ | $\mu$ | State |
|------|-------|-------|-------|
| $+\frac{3}{2}$ | $+1$ | $+\frac{1}{2}$ | $|\Phi_{\frac{3}{2},+\frac{3}{2}}\rangle$ |
| $+\frac{1}{2}$ | $+1$ | $-\frac{1}{2}$ | $\sqrt{\frac{1}{3}}|\Phi_{\frac{3}{2},+\frac{1}{2}}\rangle + \sqrt{\frac{2}{3}}|\Phi_{\frac{1}{2},+\frac{1}{2}}\rangle$ |
| $-\frac{1}{2}$ | $-1$ | $+\frac{1}{2}$ | $\sqrt{\frac{1}{3}}|\Phi_{\frac{3}{2},-\frac{1}{2}}\rangle - \sqrt{\frac{2}{3}}|\Phi_{\frac{1}{2},-\frac{1}{2}}\rangle$ |
| $-\frac{3}{2}$ | $-1$ | $-\frac{1}{2}$ | $|\Phi_{\frac{3}{2},-\frac{3}{2}}\rangle$ |
We expect that the neutron spin and orbital angular momentum will combine to form two possible states. For $j' = j_L + j_S = \frac{3}{2}$,
$$|\Phi\rangle = \sum_{m_L,\mu} |\Phi_{\frac{3}{2},m'}\rangle \langle \Phi_{\frac{3}{2},m'}| \zeta_{j_L,m_L}\chi_{\frac{1}{2},\mu}\rangle,$$
(11)
where $m' = m_L + \mu$ and $\langle \Phi_{\frac{3}{2},m'}| \zeta_{j_L,m_L}\chi_{\frac{1}{2},\mu}\rangle = C(3/2, m'| j_L, m_L; 1/2, \mu)$, the Clebsch-Gordan coefficient. For $j' = j_L - j_S = \frac{1}{2}$,
$$|\Phi\rangle = \sum_{m_L,\mu} |\Phi_{\frac{1}{2},m'}\rangle \langle \Phi_{\frac{1}{2},m'}| \zeta_{j_L,m_L}\chi_{\frac{1}{2},\mu}\rangle,$$
(12)
where $m' = m_L + \mu$ and $\langle \Phi_{\frac{1}{2},m'}| \zeta_{j_L,m_L}\chi_{\frac{1}{2},\mu}\rangle = C(1/2, m'| j_L, m_L; 1/2, \mu)$.
Since we are only able to control $m_L$ and $\mu$ in our experiment, we end up with spin-polarized OAM neutrons in linear combinations of states as shown in Table I. When absorbed, we expect that the states $|\Phi\rangle$ will combine with the $^3$He nucleus and its angular momentum wave function $\psi = \psi_N$ to form a compound nucleus in a state $\Xi$.
The possible final compound states will have angular momentum $j'' = 0$, 1, and 2. The final cross-section takes the form
$$\sigma = K(j'') \sum_{m_N,m_L,\mu} p(m_N)p(m_L)p(\mu) \times$$
$$\left| \sum_{j'} \langle \Xi_{j'',m''}| \Phi_{j',m'}\psi_{\frac{1}{2},m_N}\rangle \langle \Phi_{j',m'}| \zeta_{j_L,m_L}\chi_{\frac{1}{2},\mu}\rangle \right|^2,$$
(13)
where $m'' = m' + m_N$.
For $j'' = 0$, only $j' = \frac{1}{2}$ states of $\Phi$ are involved and for $j'' = 2$, only $j' = \frac{3}{2}$ states, so
Eq. 13 reduces simply to
\begin{equation}
\sigma = K(j'') \sum_{m_N, m_L, \mu} p(m_N)\,p(m_L)\,p(\mu)\, \big|C(j'', m''|j', m'; 1/2, m_N)\, C(j', m'|j_L, m_L; 1/2, \mu)\big|^2.
\end{equation}
A final state of $j'' = 1$ will result in interference from the linear combination of both neutron OAM states as indicated in Table I.
Evaluating all the coefficients and substituting polarizations for the probabilities, we get the final expressions for the relative cross-sections in terms of the polarizations.
For $j'' = 2$,
\begin{equation}
\sigma_2 = \frac{K(j'' = 2)}{24}\left[24 - 5(1 - pP_L) - 4(1 - pP_N) - 5(1 - P_LP_N)\right],
\end{equation}
noting that the polarizations only appear in the expressions for the cross-sections two at a time. The cross-section contribution for $j'' = 2$ never goes to zero. It is a maximum for $pP_L = pP_N = P_LP_N = 1$ and a minimum for $pP_L = P_LP_N = -1$.
For $j'' = 1$,
\begin{equation}
\sigma_1 = \frac{K(j'' = 1)}{24}\left[3(1 - pP_L) + (6 - 4\sqrt{2})(1 - pP_N) + (3 + 4\sqrt{2})(1 - P_LP_N)\right],
\end{equation}
where the irrational coefficients are the result of interference between the two possible total angular momentum states of the OAM neutron. If we assume perfect polarization of the neutron spin, the OAM states, and the $^3$He nuclei, then $\sigma_1 = 0$ for $pP_L = pP_N = P_LP_N = 1$, and $\sigma_1$ is a maximum for $pP_L = P_LP_N = -1$.
Finally, for $j'' = 0$, we get
\begin{equation}
\sigma_0 = \frac{K(j'' = 0)}{12}[1 - pP_L + pP_N - P_LP_N].
\end{equation}
For $j'' = 0$, the cross-section is zero when $pP_L = 1$ or $P_LP_N = 1$. The maximum cross-section occurs when $pP_N = 1$ and $pP_L = -1$. This is in contrast to the case for the conventional neutron singlet compound state, which automatically goes to zero when $pP_N = 1$ (Eq. 9). It is further seen that the presence of OAM neutrons changes the character of the singlet cross-section, so that even if $P_L = 0$, the polarization behavior is not the same.
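The same bookkeeping can be automated for the OAM case. The SymPy sketch below (again an illustration assuming Condon-Shortley phases, which match the signs in Table I) evaluates Eqs. (13) and (14) directly, including the coherent sum over $j'$ responsible for the interference at $j'' = 1$, and reproduces Eqs. (15)–(17) up to the constants $K(j'')$.

```python
# Symbolic evaluation of Eq. (13) for j'' = 0, 1, 2 with p(m_L) = (1 + m_L*P_L)/2.
from sympy import Rational, symbols, expand, simplify
from sympy.physics.quantum.cg import CG

p, PL, PN = symbols('p P_L P_N')
half, threehalf = Rational(1, 2), Rational(3, 2)

def sigma(jpp):
    total = 0
    for mL in (1, -1):                  # OAM projection
        for mu in (half, -half):        # neutron spin projection
            for mN in (half, -half):    # 3He spin projection
                mp, mpp = mL + mu, mL + mu + mN
                if abs(mpp) > jpp:
                    continue
                w = (1 + mL*PL)/2 * (1 + 2*mu*p)/2 * (1 + 2*mN*PN)/2
                # Coherent sum over the two total angular momenta of the neutron.
                amp = sum(CG(jp, mp, half, mN, jpp, mpp).doit()
                          * CG(1, mL, half, mu, jp, mp).doit()
                          for jp in (half, threehalf) if abs(mp) <= jp)
                total += w * amp**2
    return simplify(expand(total))

for jpp in (0, 1, 2):
    print(jpp, sigma(jpp))
# Up to rearrangement:
# j''=0: [1 - p*P_L + p*P_N - P_L*P_N]/12                                 (Eq. 17)
# j''=1: [12 - 3*p*P_L - (6-4*sqrt(2))*p*P_N - (3+4*sqrt(2))*P_L*P_N]/24  (Eq. 16)
# j''=2: [10 + 5*p*P_L + 4*p*P_N + 5*P_L*P_N]/24                          (Eq. 15)
```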
Through careful manipulation of the relative polarizations of both spins and the OAM state, the relative values of the $K(j'')$ can be determined.
The excited-state energy levels of the $^4$He nucleus may give us an indication of the likelihood that the matrix elements $K$ are non-zero [17]. For conventional thermal neutrons, the interaction n + $^3$He occurs at 20.578 MeV above the $^4$He ground state, forming a compound state with isospin $T = 0$. It is conveniently 368 keV above a broad ($J^\pi = 0^+, \ T = 0$) resonance at 20.21 MeV, but 7.7 MeV below the nearest ($J^\pi = 1^+, \ T = 0$) energy level at 28.31 MeV. The experimental observation of neutron absorption through only the $J^\pi = 0^+$ channel is credited to the exclusive proximity of this $^4$He compound state [15].
Owing to the $0^+$ ground state of the compound $^4$He nucleus and the addition of one unit of orbital angular momentum from the OAM neutron, we expect the final states of the compound nucleus to be odd parity, $J^\pi = 0^-, \ 1^-$, and $2^-$. An OAM neutron absorbed by $^3$He would form a compound nucleus in the immediate vicinity of broad excited states of $^4$He ($J^\pi = 0^-, \ T = 0$) lying at 21.01 MeV and ($J^\pi = 2^-, \ T = 0$) at 21.84 MeV, respectively, while the nearest level ($J^\pi = 1^-, \ T = 0$) lies considerably higher at 24.25 MeV. The decay products of these levels may differ from Eq. 1 although available data is based on different interactions [17]. In particular, while the $0^+$ excited state decays by emitting a proton, both the aforementioned $0^-$ and $2^-$ states can decay by reemitting a neutron.
In conclusion, we propose a detection method for individual neutrons put into intrinsic orbital angular momentum states that depends on an unambiguous quantum effect, the nuclear interaction with $^3$He. We have shown that when a thermal, spin-polarized neutron, put into a helical $L = 1$ OAM state with specific $m_L$ along its wavevector, is absorbed by spin-polarized $^3$He, the compound nuclear state should be readily distinguished from the case of an ordinary neutron. Three possible final states may result, with a total angular momentum $J^\pi = 0^-, \ 1^-$, and $2^-$. The dependence of the possible absorption cross-sections on the polarizations of the initial states will differ considerably from the dependence for ordinary neutrons.
As in the case for ordinary neutrons, there are energetic grounds for assumption that the compound nucleus may not form in all three total angular momentum channels. Furthermore, because the parity of the final state is negative, we have reason to believe that the nuclear interaction part of the matrix element will be distinct from the positive parity case of the singlet or triplet in the compound state formed with ordinary neutrons. The
creation in quantity of thermal OAM neutrons offers potential for new physics, including alternative channels for radioactive decay, modified cross sections for nuclear interactions, and previously unobserved nuclear processes.
The authors acknowledge useful discussions with R. Cappelletti and C. Majkrzak.
* firstname.lastname@example.org
[1] L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, “Orbital angular momentum of light and the transformation of laguerre-gaussian laser modes,” Phys. Rev. A 45, 8185–8189 (1992).
[2] Jonathan Leach, Miles J. Padgett, Stephen M. Barnett, Sonja Franke-Arnold, and Johannes Courtial, “Measuring the orbital angular momentum of a single photon,” Phys. Rev. Lett. 88, 257901 (2002).
[3] S. M. Lloyd, M. Babiker, G. Thirunavukkarasu, and J. Yuan, “Electron vortices: Beams with orbital angular momentum,” Rev. Mod. Phys. 89, 035004 (2017).
[4] K. Y. Bliokh, I. P. Ivanov, G. Guzzinati, L. Clark, R. Van Boxem, A. Béché, R. Juchtmans, M. A. Alonzo, P. Schattschneider, F. Nori, and J. Verbeeck, “Theory and applications of free-electron vortex states,” Physics Reports 690, 1 – 70 (2017).
[5] C. W. Clark, R. Barankov, M. G. Huber, M. Arif, D. G. Cory, and D. A. Pushin, “Controlling neutron orbital angular momentum,” Nature 525, 504 (2015).
[6] D. Sarenac, D. G. Cory, J. Nsofini, I. Hincks, P. Miguel, M. Arif, C. W. Clark, M. G. Huber, and D. A. Pushin, “Generation of a lattice of spin-orbit beams via coherent averaging,” Phys. Rev. Lett. 121, 183602 (2018).
[7] D. Sarenac, J. Nsofini, I. Hincks, M. Arif, C. W. Clark, D. G. Cory, M. G. Huber, and D. A. Pushin, “Methods for preparation and detection of neutron spin-orbit states,” New J. Phys. 20, 103012 (2018).
[8] D. Sarenac, C. Kapahi, W. Chen, C. W. Clark, D. G. Cory, M. G. Huber, I. Taminiau, K. Zhernenkov, and D. A. Pushin, “Generation and detection of spin-orbit coupled neutron beams,” PNAS 116, 20328 (2019).
[9] N. Geerits and S. Sponar, “Twisting neutral particles with electric fields,” Phys. Rev. A 103, 022205 (2021).
[10] R. L. Cappelletti, T. Jach, and J. Vinson, “On the observation of intrinsic orbital angular momentum states of neutrons,” Phys. Rev. Lett. 120, 090402 (2018).
[11] Ronald L. Cappelletti and John Vinson, “Photons, orbital angular momentum, and neutrons,” physica status solidi (b), 2000257 (2020).
[12] M. E. Rose, “Elementary theory of angular momentum,” (J. Wiley & Sons, New York, 1957) Chap. 10.
[13] L. Passell and R. I. Schermer, “Measurement of the spin dependence of the He$^3$(n,p)T reaction and of the nuclear susceptibility of adsorbed He$^3$,” Phys. Rev. 150, 146 (1966).
[14] S. P. Borzakov, H. Malecki, L. B. Pikel’ner, M. Stempinski, and E. I. Sharapov, “Energy levels of light nuclei A = 4,” Sov. J. Nucl. Phys. 35, 307 (1982).
[15] O. Zimmer, G. Ehlers, B. Farago, H. Humblot, W. Ketter, and R. Scherm, “A precise measurement of the spin-dependent neutron scattering length of $^3$He,” EPJ Direct 4, 1 (2002).
[16] T. R. Gentile, E. Babcock, J. A. Borchers, W. C. Chen, D. Hussey, G. L. Jones, W. T. Lee, C. F. Majkrzak, K. V. O’Donovan, W. M. Snow, X. Tong, S. G. E. te Velthuis, T. G. Walker, and H. Yan, “Polarized He$^3$ spin filters in neutron scattering,” Physica B 356, 96 (2005).
[17] D.R. Tilley, H.R. Weller, and G.M. Hale, “Energy levels of light nuclei A = 4,” Nuclear Physics A 541, 1–104 (1992).
## Appetizers
**Loaded Fries**
Baked with two types of cheese, topped with bacon and served with your choice of ranch dressing or sour cream - 4.99
**French Fries** - 2.09
**Cheese Garlic Bread**
Garlic bread with melted cheese - 3.29
With Bacon - 3.99 With Ham - 3.79
Add Dipping Sauce - .50
**Garlic Bread**
Our delicious bread topped with our savory garlic butter - 1.99
**Traditional Wings** - .80 per wing
Hot, Mild, BBQ, Teriyaki, Plain or Sweet Chili
**Boneless Wings** - .80 per wing
Hot, Mild, BBQ, Teriyaki, Plain or Sweet Chili
**Mozzarella Styx** (6)
Breaded mozzarella served with dipping sauce - 4.99
**Chicken Strips** (4) - 4.49
## Bread Styx
*Marinara Included. Extra Marinara - .50 Garlic Butter - .50*
| Size | Plain | with cheese | extra items |
|------|-------|-------------|-------------|
| 8" | 3.00 | 4.25 | .80 |
| 10" | 4.00 | 6.00 | 1.00 |
| 12" | 6.00 | 8.25 | 1.20 |
| 16" | 8.00 | 12.00 | 1.50 |
## Salads
**Dressings:**
Giovanni’s Red Dressing, Golden Italian, Bleu Cheese, Creamy Italian, Fat-Free Ranch, Fat-Free Italian, 1000 Island, Ranch, French, Honey Mustard, Raspberry Vinaigrette
Each additional dressing - .50
**Garden Salad**
Fresh lettuce with tomatoes and cheese
Sm. - 2.99 Lg. - 3.99
**Chef Salad**
Fresh lettuce with ham, bacon, cheese and tomatoes
Sm. - 3.99 Lg. - 4.99
**Antipasto Salad**
Fresh lettuce with ham, pepperoni, bacon, onions, mushrooms, green peppers, banana peppers, cheese, tomatoes, black and green olives
Sm. - 4.99 Lg. - 5.99
**Italian Salad**
Fresh lettuce, cheese and pepperoni
Sm. - 2.99 Lg. - 3.99
**Grilled Chicken Salad**
Fresh lettuce with grilled chicken, cheese and tomatoes
Sm. - 4.99 Lg. - 5.99
**Crispy Chicken Salad**
Fresh lettuce with crispy chicken, cheese and tomatoes
Sm. - 4.99 Lg. - 5.99
**Fajita Chicken Salad**
Fresh lettuce with fajita chicken, cheese and tomatoes
Sm. - 4.99 Lg. - 5.99
**Taco Salad**
Tortilla chips, seasoned beef, lettuce, cheddar cheese and tomato
Sm. - 4.99 Lg. - 5.99
## Party Specials
### Party Special #1
19” Regular Crust 2-Item Pizza
With a large bag of chips and a 2-liter of pop - 17.99
### Party Special #2
16” Regular Crust 2-Item Pizza
With an order of 10” bread styx with cheese - 15.99
### Party Special #3
(2) 14” Regular Crust 1-Item Pizzas - 16.99
## Personal Pizza Special!
8” 2-Item Pizza
Served with chips and a fountain drink - 5.49
(add .50 for a 20-oz. drink)
## Sub Special!
Your Choice of Any Sub
(Add .50 for Turkey Club, Super Sub, Steak & Bacon, Chicken or Philly)
served with chips and a fountain drink - 5.49
(add .50 for a 20-oz. drink)
---
740.947.GIOS (4467)
Dine In,
Carry Out,
Pickup Window
or Delivery
WAVERLY, OH
740.947.4467
- Hours -
Monday - Thursday
10:00 a.m. - 10:00 p.m.
Friday & Saturday
10:00 a.m. - 11:00 p.m.
Sunday
11:00 a.m. - 10:00 p.m.
Menu items and pricing are subject to change
Add 15% gratuity for parties of 10+
We accept Visa, Mastercard, Discover and American Express
513 E. Emmitt Avenue
Waverly, OH
740.947.GIOS (4467)
**SUBS**
All subs are baked golden brown on our Pepperidge Farm bun
Pickle spear available on request
**The Big Red**
Steak hoagie with cheese, fried onions and mushrooms topped with lettuce and our Giovanni’s red dressing - 4.99
**Stromboli**
Steak hoagie with pizza sauce, onions and cheese - 4.99
**Steak**
Steak hoagie with cheese topped with lettuce, tomato, onion and mayonnaise - 4.99
**Steak & Bacon**
Steak hoagie baked with cheese and bacon topped with lettuce, tomato, onion and mayonnaise - 5.49
**Super Sub**
Salami, ham, cheese, pepperoni and bacon topped with lettuce, tomato, onion and Creamy Italian - 5.49
**Meatball Sub**
Italian meatballs baked with pizza sauce and cheese - 4.99
**Veggie Sub**
Onions, green peppers, banana peppers, mushrooms, green and black olives baked with cheese then topped with lettuce, tomato and mayonnaise - 4.99
**Turkey Sub**
Turkey with cheese, lettuce, tomato, onion and mayonnaise - 4.99
**Turkey & Bacon Club**
Turkey with ham, cheese and bacon topped with lettuce, tomato, onion and mayonnaise - 5.49
**Steak & Mushroom Gravy**
Steak with mushroom gravy, cheese and mushrooms - 4.99
**Texas T**
Delicious pork tenderloin with lettuce, tomato, onion and mayonnaise - 4.99
**B.L.T.**
Cooked bacon pieces with melted cheese topped with lettuce, tomato, onion and mayonnaise - 4.99
---
**PIZZAS**
All pizzas include cheese and sauce
| Size | 8" | 10" | 12" | 14" | 16" | 19" |
|--------|-------|-------|-------|-------|-------|-------|
| Cheese | 4.25 | 6.00 | 8.25 | 10.25 | 12.00 | 14.75 |
| Ea. Addl. Item | .80 | 1.00 | 1.20 | 1.40 | 1.50 | 1.80 |
**Gluten-Free Pizza** 10" - 7.50 Ea. Addl. Item - 1.00 ea
**TOPPINGS**
Pepperoni • Italian Sausage • Ham • Beef • Bacon • Banana Peppers
Tomato • Black Olives • Green Olives • Pineapple • Green Pepper • Mushroom
Extra Crust • Onion • Extra Cheese • Jalapeño • Chicken (additional charge)
---
**SPECIALTY PIZZAS**
| Size | 8" | 10" | 12" | 14" | 16" | 19" |
|--------|-------|-------|-------|-------|-------|-------|
| Chicken, Bacon, Ranch | 6.25 | 8.25 | 11.25 | 14.75 | 17.75 | 20.00 |
Ranch sauce topped with chicken, bacon and cheese.
**BBQ Chicken**
BBQ sauce topped with chicken, onion, bacon and cheese.
**Buffalo Chicken**
Our Buffalo sauce topped with chicken, tomatoes and cheese.
**BLT Pizza**
Bacon and cheese topped with mayonnaise, lettuce and tomatoes.
**Taco Pizza**
Our Taco meat topped with four types of cheese, lettuce and tomatoes. Served with sour cream and salsa.
**Pepperoni Pounder**
A generous mix of pepperoni and Old World style pepperoni, 4 types of cheese and topped with Italian spices.
**Deluxe**
8" - 8.00 10" - 11.00 12" - 15.00 14" - 18.00 16" - 21.00 19" - 24.00
Pepperoni, sausage, mushrooms, onion, bacon, green & banana peppers, green & black olives, and ham.
---
**CALZONES**
**Calzone**
Stuffed rolled pizza with pizza sauce, mozzarella and provolone cheese, plus your choice of pizza items inside. Served with dipping sauce
10" Cheese Calzone - 4.75 Extra Items - .65
12" Cheese Calzone - 5.75 Extra Items - .85
---
**DESSERTS**
**Apple Pizza**
Apple topping baked with cinnamon streusel and topped with icing and powdered sugar - 5.99
**Cinnamon Styx**
Fluffy pizza dough with yummy cinnamon streusel and topped with icing and powdered sugar - 4.99
**Chocolate Chip Cookie**
8" cookie, freshly baked upon order - 4.99
---
**PASTA DINNERS**
All dinners served with hot garlic bread or baked Italian roll. Add small garden salad - 1.50 *Family Size does not come with a salad.
**Baked Lasagna**
Flat pasta smothered in Giovanni's flavorful sauce, topped with cheese and baked to perfection
Individual - 6.99 Family - 19.99
**Baked Spaghetti**
Smothered in Giovanni's flavorful sauce topped with cheese and baked to perfection
Individual - 6.99 Family - 19.99
**Baked Spaghetti without Cheese**
Smothered in Giovanni's flavorful sauce and baked to perfection
Individual - 6.49 Family - 19.99
**Baked Fettuccine Alfredo**
Smothered in Giovanni's flavorful sauce and baked to perfection
Individual - 6.99 Family - 19.99
---
**BEVERAGES**
Flavor choices vary by selection of fountain, 2-liter or 20-oz. bottle
**Fountain Drinks**
20-oz. Bottle
**Bottled Water**
2-Liter Pop
---
**DESSERTS**
**Cinnamon Snazzy**
A delicious dessert on our sub bun, covered in cinnamon streusel, toasted and topped with icing and powdered sugar - 2.99
Edge Cache-assisted Secure Low-Latency Millimeter Wave Transmission
Wanming Hao, Member, IEEE, Ming Zeng, Gangcan Sun, and Pei Xiao, Senior Member, IEEE
Abstract—In this paper, we consider an edge cache-assisted millimeter wave cloud radio access network (C-RAN). Each remote radio head (RRH) in the C-RAN has a local cache, which can prefetch and store the files requested by the actuators. Multiple RRHs form a cluster to cooperatively serve the actuators, which acquire their required files either from the local caches or from the central processor via multicast fronthaul links. For such a scenario, we formulate a beamforming design problem to minimize the secure transmission delay under certain performance requirements at the RRHs. Due to the difficulty of directly solving the formulated problem, we divide it into two independent ones: i) minimizing the fronthaul transmission delay by jointly optimizing the transmit and receive beamforming; ii) minimizing the maximum access transmission delay by jointly designing cooperative beamforming among the RRHs. An alternating iterative algorithm is proposed to solve the first optimization problem. For the latter, we first design the analog beamforming based on the channel state information of the actuators. Then, with the aid of successive convex approximation and $S$-procedure techniques, a semidefinite program (SDP) is formulated, and an iterative algorithm is proposed through SDP relaxation. Finally, simulation results are provided to verify the performance of the proposed schemes.
Index Terms—Edge cache, secure transmission delay, millimeter wave, multicast, beamforming.
I. INTRODUCTION
With rapidly growing services such as ultra-high-definition video (UHDV), autonomous driving, connected vehicles, and the Internet of Things (IoT), realizing high-rate and low-latency transmission is of practical significance for future wireless communications [1]. To this end, the cloud radio access network (C-RAN) has emerged as a promising enabling technology [2], [3]. In C-RAN, a central baseband unit (BBU) is in charge of resource allocation and signal processing, while the low-cost and low-power remote radio heads (RRHs) are connected to the BBU via wireless or wired fronthaul links [4].
The capacity of the fronthaul links becomes a bottleneck for C-RAN, causing severe latency and degraded user experience, especially for applications such as virtual reality (VR), UHDV, and autonomous driving [5]. To tackle this problem, edge caching has been developed to relieve the burden on the fronthaul links and lower the end-to-end latency [6]. The main idea of edge caching is that RRHs pre-fetch the most frequently requested files from the cloud and store them in their local caches during off-peak traffic periods (such as midnight) [7]. When the files required by the actuators (devices) are cached at the RRHs, they can be transmitted to the actuators directly from the RRHs, leading to reduced fronthaul data traffic and decreased latency [8]. Therefore, the cache technique will play a pivotal role in future wireless networks [9].
Edge cache-assisted C-RAN enables cooperation among RRHs to cancel the inter-RRH interference via coordinated multi-point (CoMP) transmission [10]. However, the CoMP approach requires cooperating RRHs to share the actuators' data, so that the fronthaul links have to carry each actuator's data multiple times. This limits the size of the cooperating cluster and increases the fronthaul burden. To address this problem, the multicast technique can be used to avoid repeated transmissions by multicasting an actuator's message to all cooperating RRHs at the same time. Dai et al. [11] proposed to use wireless multicast in a cache-assisted C-RAN, where the cooperative RRHs pre-store the popular contents via multicast fronthaul links. Hu et al. [12] also adopted the multicast beamforming approach over fronthaul links to deliver the actuator's data to a group of RRHs.
Due to the broadcast nature of wireless transmission, confidential messages may be eavesdropped by malicious attackers, jeopardizing the secrecy of the information transmission [13], [14]. Traditionally, security issues in wireless communication have been handled at the higher layers using encryption approaches. However, the huge growth in the number of wireless devices and the rapid development of computing technologies have exposed the vulnerability of conventional encryption methods [15]. Thus, physical layer security (PLS) has been developed as a complementary approach to conventional encryption for securing confidential information [16]. Another important reason is that for low-end, low-energy IoT devices with limited battery life and computational capabilities, most of the available energy and resources should be dedicated to core application functionalities, and there may be little left for supporting security. PLS techniques allow light-weight encryption at the higher layer while ensuring
the secrecy of transmissions. The key idea of PLS is to exploit the randomness of the wireless channels to prevent the illegitimate side from wiretapping the users' information [17]. In this paper, we study edge cache-assisted secure low-latency transmission. Due to its ultra-wide bandwidth, millimeter wave (mmWave) is applied to the fronthaul and access links [18]. Specifically, for a given cache strategy, the central processor (CP) first delivers the required non-cached files to the cooperating RRHs via the mmWave multicast fronthaul links, and then the RRHs jointly transmit the overall files to the actuators. The system objective is to design the beamforming of the CP and the RRHs to minimize the secure transmission delay.
A. Related Works
Edge caching has attracted increasing attention recently. Some works focused on delivery strategies [19]–[22] and others studied cache placement problems [23]–[26]. Specifically, in [19], Tao et al. proposed to form a multicast group for users requesting the same content, so that these users can be jointly served by the same group of RRHs. Under a given cache strategy, the authors investigated dynamic RRH clustering and multicast beamforming to minimize the weighted sum of backhaul cost and transmit power. Fu et al. [20] studied the power control problem for non-orthogonal multiple access (NOMA) transmissions in wireless cache networks. The authors proposed a deep neural network-based method to minimize the transmission delay. In a cache-enabled multigroup multicasting network, He et al. [21] designed three transmission schemes to minimize the delivery latency. In [22], Liu et al. explored the potential energy efficiency (EE) of cache-enabled networks, and identified the optimal cache capacity that maximizes the EE. Furthermore, the authors analyzed the EE gain brought by caching. Based on flexible physical-layer transmission and the diverse requirements of different users, Liu et al. [23] studied the cache placement problem, and proposed centralized and distributed cache strategies to minimize the download delay. In [24], Zheng et al. applied the cache technique to a distributed relay system, and proposed a hybrid cache scheme to minimize the outage probability. Zhu et al. [25] investigated the performance of cache-enabled ultra-dense small cell networks, and derived the successful content delivery probability (SCDP). The authors proposed two algorithms, namely a constrained cross-entropy algorithm and a heuristic probabilistic content placement algorithm, to maximize the SCDP. Under random caching at the RRHs, Cui et al. [26] proposed two cooperative transmission schemes by jointly considering RRH caching and cooperation. The authors derived the expression of the successful transmission probability for each scheme.
Although the cache problem has been investigated in [19]–[26], the secure transmission aspect was not considered there. In [27], a cache scheme was designed that effectively enhances the PLS for a backhaul-limited cellular network. However, since the focus of [27] was to minimize the transmit power subject to a secrecy rate constraint, the secure transmission delay was not considered. In [28], Xu et al. investigated the PLS in a mobile edge computing network, and formulated a weighted sum energy minimization problem under a given secure transmission delay. Wang et al. [29] studied the secure transmission problem in a cache-assisted heterogeneous network. To realize secure and energy-efficient transmissions, a joint cache placement and file delivery scheme was proposed. In [30], Cheng et al. considered a cache-assisted unmanned aerial vehicle (UAV) system, and developed a joint optimization strategy via designing the UAV trajectory and time scheduling to improve the secure transmission. Kiskani et al. [31] investigated a secure approach in an ad hoc network with caches. The authors proposed a novel decentralized secure coded caching scheme to enhance secure storage, where the nodes only transmit coded files to protect the user information. Most of the aforementioned works focus on cache placement design to enhance the PLS without considering the secure transmission delay.
B. Main Contributions
In this paper, we investigate the secure transmission delay minimization problem in an edge cache-assisted mmWave C-RAN, where each RRH is equipped with a local cache. To reduce the hardware cost and energy consumption, we consider a structure with a single radio frequency (RF) chain and multiple antennas at the CP and each RRH. In addition, the CoMP technique is adopted among the RRHs. Therefore, the main challenge is how to jointly design the transmit and receive beamforming at the first phase as well as the cooperative beamforming for secure transmission at the second phase. The main contributions of this paper include:
• We develop a two-phase transmission frame structure. At the first phase, the uncached files are fetched from the CP to RRHs through the multicast fronthaul link. At the second phase, all required files are transmitted from the RRHs to the actuators via the CoMP technique. In this regard, we formulate a secure transmission delay minimization problem by jointly optimizing transmit and receive beamforming at the first phase as well as the cooperative beamforming among RRHs at the second phase.
• Since the original problem is intractable, we divide it into two independent optimization problems, and minimize the transmission delay for each. For the former, we jointly optimize the transmit beamforming at the CP and the receive beamforming at the RRHs, and an alternating iterative algorithm is proposed.
• For the second phase, we minimize the maximum secure transmission delay to guarantee fairness. Moreover, a general scenario with imperfect channel state information (CSI) for the eavesdroppers' (Eves') links is assumed. We first design the analog beamforming for each RRH. Then, by the successive convex approximation (SCA) and $S$-procedure techniques, the problem is transformed into a semidefinite program (SDP), which can be recast into a convex one by dropping the rank-one constraint. Meanwhile, our test results show that semidefinite relaxation (SDR) gives rank-one solutions with nearly 99% probability.
The rest of this paper is organized as follows. The system model and problem formulation are presented in Section II. In Section III, the solution to the formulated secure transmission delay problem is provided. Simulation results are presented in Section IV. Finally, conclusions are drawn in Section V.
**Notations:** We use the following notations throughout this paper: $(\cdot)^*$, $(\cdot)^T$ and $(\cdot)^H$ denote the conjugate, transpose and Hermitian transpose, respectively. $\|\cdot\|$ is the Euclidean norm, $\mathbb{C}^{x \times y}$ denotes the space of $x \times y$ complex matrices, and $\text{Re}(\cdot)$ and $\text{Tr}(\cdot)$ denote the real part and the trace, respectively. $[\cdot]^+$ denotes $\max\{0, \cdot\}$, and $\text{Diag}(I_1, \ldots, I_M)$ is a diagonal matrix.
## II. System Model and Problem Formulation
In this section, we first describe the studied system model, and propose a two-phase frame structure for the file transmission. Next, we define the secure transmission delay and formulate a transmission delay minimization problem.
### A. System Model
As shown in Fig. 1, we consider the downlink transmission of a cache-enabled C-RAN, which includes one CP, $L$ RRHs, and $K$ actuators. Meanwhile, there are $S$ Eves that may intercept the actuators' information. The same mmWave carrier frequency is adopted for the fronthaul links from the CP to the RRHs and the access links from the RRHs to the actuators. To reduce the energy consumption and hardware cost, the CP and RRHs are all equipped with a single RF chain, which is connected to multiple antennas via phase shifters (PSs) and a power amplifier (PA) or low noise amplifier (LNA), as illustrated in Fig. 2. Denote the numbers of antennas at the CP and at each RRH by $M$ and $N$, respectively. Each RRH $l \in \mathcal{L} = \{1, \ldots, L\}$ is equipped with a finite-size cache, and can pre-fetch files from the CP during the off-peak period based on the content popularity and predefined cache strategies [23], [32]. In addition, we assume that the BBU stores all the files requested by the actuators in its library. Denote the file set by $\mathcal{F} = \{1, \ldots, F\}$; each actuator requests only one file in a given time interval. In addition, each file $f$ can be split into $U$ segments of equal size, and each segment $(f, u)$, $u \in \mathcal{U} = \{1, \ldots, U\}$, can be independently cached at the RRHs.
In this paper, the CoMP technique is adopted, where multiple RRHs form one cluster and cooperatively serve the actuators. Here, we only consider one cluster to facilitate the analysis, but our proposed scheme can be readily extended to multiple clusters. Similar to [21], [27], we denote the cache status of segment $(f, u)$ in the cooperative RRH cluster by $b_{f,u}$, which


| Notation | Description |
|----------|-------------|
| $L$, $\mathcal{L}$ | Number and set of RRHs |
| $K$, $\mathcal{K}$ | Number and set of actuators |
| $S$, $\mathcal{S}$ | Number and set of Eves |
| $M$, $\mathcal{M}$ | Number and set of antennas at the CP |
| $N$, $\mathcal{N}$ | Number and set of antennas at each RRH |
| $F$, $\mathcal{F}$ | Number and set of files |
| $U$, $\mathcal{U}$ | Number and set of segments |
| $b_{f,u}$ | Cache state of segment $(f, u)$ |
| $c_{k,f}$ | Whether actuator $k$ requires file $f$ |
| $H_l$ | Downlink channel from the CP to RRH $l$ |
| $g_k$ | Downlink channel from the $L$ RRHs to actuator $k$ |
| $g'_s$ | Downlink channel from the $L$ RRHs to Eve $s$ |
| $w$ | Transmit beamforming vector of the CP |
| $q_l$ | Receive beamforming vector of RRH $l$ |
| $z_l$ | Analog transmit beamforming vector of RRH $l$ |
| $v_k$ | Cooperative digital precoding vector for actuator $k$ |
| $P$ | Transmit power of the CP |
| $P^l_{\text{max}}$ | Maximum transmit power of RRH $l$ |
| $t_k$ | Delivery delay for transmitting actuator $k$'s file from the CP |

can be expressed as
\[ b_{f,u} = \begin{cases}
1, & \text{if segment } (f,u) \text{ is cached in RRHs}, \\
0, & \text{otherwise}.
\end{cases} \]
(1)
Here, we propose a transmission frame structure as shown in Fig. 3, which includes two transmission phases: the fronthaul transmission phase and the access transmission phase. During the fronthaul transmission phase, the CP delivers the uncached files requested by the actuators to RRHs via multicast, where \( t_k \) denotes the delivery time for transmitting actuator \( k \)'s file. When all required files are fetched from the CP, \( L \) RRHs cooperatively serve \( K \) actuators during the access transmission phase.
During the fronthaul transmission phase, the received signal at RRH \( l \) can be expressed as
\[ y_l = q_l H_l w \sqrt{P} x + q_l n_l, \]
(2)
where \( H_l \in \mathbb{C}^{N \times M} \) denotes the downlink mmWave channel matrix from the CP to RRH \( l \); \( w \in \mathbb{C}^{M \times 1} \) and \( q_l \in \mathbb{C}^{1 \times N} \) denote the transmit beamforming vector of the CP and the receive beamforming vector of RRH \( l \), respectively; \( x \) denotes the multicast signal, satisfying \( \mathbb{E}[|x|^2] = 1 \), and \( P \) denotes the transmit power of the CP; \( n_l \) is the independent and identically distributed (i.i.d.) additive white Gaussian noise (AWGN) vector, where each entry follows \( \mathcal{CN}(0, \sigma^2) \). Due to the constant-amplitude modulation imposed by the phase shifters, the beamformers satisfy the constant-modulus constraints \( |[q_l]_n| = 1/\sqrt{N} \) (\( n \in \mathcal{N} \)) and \( |[w]_m| = 1/\sqrt{M} \) (\( m \in \mathcal{M} \)), where \( \mathcal{N} = \{1, \cdots, N\} \) and \( \mathcal{M} = \{1, \cdots, M\} \) [33].
For the mmWave channel, we adopt the widely used limited-scattering channel model with a uniform linear array [34]. Each scatter is assumed to contribute to a single propagation path. The mmWave channel can be expressed as
\[ H_l = \sum_{g=1}^{G} \alpha_{l,g} a_{RRH}(\theta_{l,g}) a_{CP}^H(\phi_{l,g}), \]
(3)
where \( G \) is the number of multiple paths, \( \alpha_{l,g} \) represents the complex gain of the path \( g \) from the CP to RRH \( l \), \( \theta_{l,g} \) and \( \phi_{l,g} \) are the angle of arrival (AoA) and angle of departure (AoD) of path \( g \), respectively. When the half-wavelength antenna space is adopted, the steering vectors \( a_{RRH}(\theta_{l,g}) \) and \( a_{CP}(\phi_{l,g}) \) can be expressed as
\[ a_{RRH}(\theta_{l,g}) = [1, e^{j\pi \theta_{l,g}}, e^{j\pi 2\theta_{l,g}}, \ldots, e^{j\pi(N-1)\theta_{l,g}}]^T, \]
(4a)
\[ a_{CP}(\phi_{l,g}) = [1, e^{j\pi \phi_{l,g}}, e^{j\pi 2\phi_{l,g}}, \ldots, e^{j\pi(M-1)\phi_{l,g}}]^T. \]
(4b)
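As an illustration of (3)-(4), a channel realization can be generated as in the following Python sketch. The normalized-angle convention (absorbing $\sin(\cdot)$ into $\theta_{l,g}$ and $\phi_{l,g}$), the unit-variance complex path gains, and the omission of path-loss scaling are assumptions made here for brevity rather than specifications of the model.

```python
import numpy as np

def steering_vector(n_ant: int, angle: float) -> np.ndarray:
    # Half-wavelength ULA steering vector, cf. (4a)/(4b):
    # a(theta) = [1, e^{j pi theta}, ..., e^{j pi (n-1) theta}]^T.
    return np.exp(1j * np.pi * np.arange(n_ant) * angle)

def mmwave_channel(n_rx: int, n_tx: int, n_paths: int, rng=None) -> np.ndarray:
    # Limited-scattering model (3): H_l = sum_g alpha_g a_RRH(theta_g) a_CP^H(phi_g).
    rng = rng or np.random.default_rng()
    H = np.zeros((n_rx, n_tx), dtype=complex)
    for _ in range(n_paths):
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        theta, phi = rng.uniform(-1, 1, size=2)   # normalized AoA / AoD
        H += alpha * np.outer(steering_vector(n_rx, theta),
                              steering_vector(n_tx, phi).conj())
    return H
```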
Based on (2), the achievable rate of RRH \( l \) can be written as
\[ R_l(q_l, w) = \log \left( 1 + P |q_l H_l w|^2 / \sigma^2 \right). \]
(5)
It is clear that the achievable multicast fronthaul rate is limited by the RRH with the worst channel condition, and is given by
\[ R_f(Q, w) = \min_{l \in \mathcal{L}} \{R_l(q_l, w)\}, \]
(6)
where \( Q = [q_1^T, \ldots, q_L^T]^T \). Accordingly, the fronthaul transmission delay can be calculated as
\[ T = \sum_{k=1}^{K} t_k = \sum_{k=1}^{K} \frac{\Psi_k}{R_f(Q, w)}, \]
(7)
where \( \Psi_k = \sum_{f=1}^{F} \sum_{u=1}^{U} c_{k,f} (1 - b_{f,u}) \frac{\Omega_f}{U} \) is the amount of requested data that is not cached at the RRHs and must therefore be fetched from the CP, \( \Omega_f \) denotes the size of file \( f \), and \( c_{k,f} \) is a binary variable satisfying \( \sum_{f=1}^{F} c_{k,f} = 1 \): \( c_{k,f} = 1 \) if actuator \( k \) requires file \( f \), and \( c_{k,f} = 0 \) otherwise.
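A minimal sketch of how (5)-(7) combine is given below; the base-2 logarithm and the explicit bandwidth factor are our assumptions (the rate expressions above leave both implicit, and Section IV sets a 1 GHz bandwidth).

```python
import numpy as np

def fronthaul_delay(H_list, w, q_list, P, sigma2, Psi, bandwidth_hz=1e9):
    # Per-RRH achievable rate (5), in bit/s/Hz.
    rates = [np.log2(1 + P * abs(q_l @ H_l @ w) ** 2 / sigma2)
             for q_l, H_l in zip(q_list, H_list)]
    R_f = min(rates)                        # multicast rate (6): worst RRH dominates
    return sum(Psi) / (bandwidth_hz * R_f)  # total fronthaul delay (7), Psi in bits
```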
During the access transmission phase, the \( L \) RRHs cooperatively serve all actuators. We denote the downlink channel vector from the \( L \) RRHs to actuator \( k \) as \( g_k = [g_k^1, g_k^2, \ldots, g_k^L] \in \mathbb{C}^{1 \times NL} \), where \( g_k^l \in \mathbb{C}^{1 \times N} \) represents the downlink channel vector from RRH \( l \) to actuator \( k \). Denote the signal for actuator \( k \) as \( x_k \), satisfying \( \mathbb{E}[|x_k|^2] = 1 \). The received signal at actuator \( k \) can thus be expressed as
\[ y_k = g_k Z v_k x_k + \sum_{i \neq k}^{K} g_k Z v_i x_i + n_k, \]
(8)
where \( v_k \in \mathbb{C}^{L \times 1} \) is the digital beamformer for actuator \( k \), and \( Z \in \mathbb{C}^{NL \times L} \) denotes the block-diagonal analog beamforming matrix, given as \( \text{Diag}(z_1, \ldots, z_L) \), where \( z_l \in \mathbb{C}^{N \times 1} \) denotes the analog beamforming vector of RRH \( l \). The achievable rate of actuator \( k \) can be written as
\[ R_{2,k}(Z, V) = \log \left( 1 + \frac{|g_k Z v_k|^2}{\sum_{i \neq k}^{K} |g_k Z v_i|^2 + \sigma_k^2} \right), \]
(9)
where \( V = [v_1, \ldots, v_K] \).
We consider the scenario where the channel attenuation between the CP and the actuators or Eves is so severe that neither the actuators nor the Eves can receive information from the CP. Therefore, the Eves only attempt to intercept the actuators' information from the RRHs, and the received signal at Eve \( s \) can be expressed as
\[ y_s = g'_s Z v_k x_k + \sum_{i \neq k}^{K} g'_s Z v_i x_i + n_s, \]
(10)
where \( g'_s \in \mathbb{C}^{1 \times NL} \) denotes the channel vector from the \( L \) RRHs to Eve \( s \). The mmWave channels \( g_k \) and \( g'_s \) have a similar structure to (3), and thus the detailed expressions are omitted here. The intercepted rate of Eve \( s \) on actuator \( k \)'s signal can be written as
\[ R^s_{2,k}(Z, V) = \log \left( 1 + \frac{|g'_s Z v_k|^2}{\sum_{i \neq k}^{K} |g'_s Z v_i|^2 + \sigma_s^2} \right). \]
(11)
Finally, the achievable secrecy rate by actuator \( k \) can be expressed as [35], [36]
\[ \hat{R}_{2,k}(Z, V) = \left[ R_{2,k}(Z, V) - \max_{s \in \mathcal{S}} \{R^s_{2,k}(Z, V)\} \right]^+, \]
(12)
where \( \mathcal{S} = \{1, 2, \ldots, S\} \) denotes the Eve set. Next, we define the secure transmission delay for actuator \( k \) as
\[ T_k = \frac{\hat{\Psi}_k}{\hat{R}_{2,k}(Z, V)}, \]
(13)
where \( \hat{\Psi}_k = \sum_{f=1}^{F} c_{k,f} \Omega_f \).
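The definitions (9) and (11)-(13) translate directly into code. The sketch below assumes the stacked channel rows and block-diagonal $Z$ defined above, uses base-2 logarithms, and returns an infinite delay when the secrecy rate vanishes; these are our modeling choices rather than statements from the text.

```python
import numpy as np

def secure_delay(g_k, g_eves, Z, V, k, sigma2, Psi_hat_k):
    # SINR-based rate of a receiver with channel row h, cf. (9) and (11);
    # the columns of V are the digital beamformers v_1, ..., v_K.
    def rate(h):
        powers = np.abs(h @ Z @ V) ** 2     # |h Z v_i|^2 for all i
        sinr = powers[k] / (powers.sum() - powers[k] + sigma2)
        return np.log2(1 + sinr)
    R_sec = max(rate(g_k) - max(rate(g_s) for g_s in g_eves), 0.0)   # (12)
    return np.inf if R_sec == 0.0 else Psi_hat_k / R_sec             # (13)
```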
### B. Problem Formulation
In general, cache status \( b_{f,u} \) or cache placement depends on several factors, such as user behavior, information popularity distribution and so on, and it can be decided by
several advanced cache schemes, e.g., artificial intelligence-based multi-timescale framework method [32] and deep Q-learning method [37]. Here, we assume that the cache status $b_{f,u}$ has been fixed according to a certain cache strategy, and the required files by the actuators, i.e., $c_{k,f}$ are also given in advance [19], [21]. In this work, we mainly focus on the beamforming design to minimize the two-phase transmission delay.
To ensure system fairness, we aim to minimize the maximum secure transmission delay, and formulate the following optimization problem:
\begin{align}
\min_{\{Q, w, Z, V\}} & \quad \max_{k \in \mathcal{K}} T_k + \sum_{k=1}^{K} t_k \tag{14a} \\
\text{s.t.} & \quad \sum_{k=1}^{K} |v_k(l)|^2 \leq P_{\text{max}}^l, \; l \in \mathcal{L}, \tag{14b} \\
& \quad |[q_l]_n| = 1/\sqrt{N}, \quad |[z_l]_n| = 1/\sqrt{N}, \; n \in \mathcal{N}, \; l \in \mathcal{L}, \tag{14c} \\
& \quad |[w]_m| = 1/\sqrt{M}, \; m \in \mathcal{M}, \tag{14d}
\end{align}
where (14b) is the per-RRH transmit power constraint, and (14c)-(14d) are the constant-modulus constraints. To handle problem (14), we need to design the analog beamformers for both the CP and the RRHs at the fronthaul transmission phase as well as the hybrid analog/digital beamformer of the cooperative RRHs at the access transmission phase, which is an intractable problem.
## III. Problem Solution
Since the two transmission phases are relatively independent, we can equivalently divide the original problem into two parts, i.e., the transmit and receive beamforming design problem at the fronthaul transmission phase (i.e., $\mathbb{P}1$) as well as the hybrid analog/digital beamforming design problem at the access transmission phase (i.e., $\mathbb{P}2$), namely:
\begin{align}
\mathbb{P}1: \; \min_{\{Q, w\}} & \quad \sum_{k=1}^{K} t_k \tag{15a} \\
\text{s.t.} & \quad |[q_l]_n| = 1/\sqrt{N}, \; n \in \mathcal{N}, \; l \in \mathcal{L}, \tag{15b} \\
& \quad |[w]_m| = 1/\sqrt{M}, \; m \in \mathcal{M}; \tag{15c}
\end{align}
\begin{align}
\mathbb{P}2: \; \min_{\{Z, V\}} & \quad \max_{k \in \mathcal{K}} T_k \tag{16a} \\
\text{s.t.} & \quad \sum_{k=1}^{K} |v_k(l)|^2 \leq P_{\text{max}}^l, \; l \in \mathcal{L}, \tag{16b} \\
& \quad |[z_l]_n| = 1/\sqrt{N}, \; n \in \mathcal{N}, \; l \in \mathcal{L}. \tag{16c}
\end{align}
### A. Beamforming Design at the Fronthaul Link
Due to $\sum_{k=1}^{K} t_k = \frac{1}{R_f(Q,w)} \sum_{k=1}^{K} \Psi_k$, we can rewrite $\mathbb{P}1$ as the following max-min rate problem
\begin{align}
\max_{\{Q, w\}} & \quad \min_{l \in \mathcal{L}} R_l(q_l, w) \tag{17} \\
\text{s.t.} & \quad \text{(15b), (15c)}.
\end{align}
Based on (5), (17) can be equivalently written as follows:
\begin{align}
\max_{\{Q, w\}} & \quad \min_{l \in \mathcal{L}} |q_l H_l w|^2 \tag{18} \\
\text{s.t.} & \quad \text{(15b), (15c)}.
\end{align}
Problem (18) is still difficult to handle due to the non-smooth objective function and the non-convex constraints. Moreover, the multiplication among the optimization variables makes it even more intractable. To this end, we first initialize the receive beamformers $\hat{Q} = [\hat{q}_1^T, \ldots, \hat{q}_L^T]^T$. By introducing the auxiliary variable $\eta$ and relaxing the constant-modulus constraint on $w$ in (15c) to an inequality, (18) can be reformulated as the following tractable optimization problem
\begin{align}
\max_{\{\eta, w\}} & \quad \eta \tag{19a} \\
\text{s.t.} & \quad |\hat{h}_l w|^2 \geq \eta, \; l \in \mathcal{L}, \tag{19b} \\
& \quad |[w]_m| \leq 1/\sqrt{M}, \; m \in \mathcal{M}, \tag{19c}
\end{align}
where $\hat{h}_l = \hat{q}_l H_l$. One can observe that (19b) is the only non-convex constraint. According to the first-order Taylor approximation formula, $|\hat{h}_l w|^2$ can be approximated as
\begin{equation}
|\hat{h}_l w|^2 \approx \hat{w}^H \hat{H}_l \hat{w} + 2 \text{Re}\big(\hat{w}^H \hat{H}_l (w - \hat{w})\big), \tag{20}
\end{equation}
where $\hat{H}_l = \hat{h}_l^H \hat{h}_l$, and $\hat{w}$ denotes the transmit beamformer from the previous iteration (an initial feasible point at the first iteration). Finally, we formulate the convex optimization problem as
\begin{align}
\max_{\{\eta, w\}} & \quad \eta \tag{21a} \\
\text{s.t.} & \quad \hat{w}^H \hat{H}_l \hat{w} + 2 \text{Re}\big(\hat{w}^H \hat{H}_l (w - \hat{w})\big) \geq \eta, \; l \in \mathcal{L}, \tag{21b} \\
& \quad |[w]_m| \leq 1/\sqrt{M}, \; m \in \mathcal{M}. \tag{21c}
\end{align}
The above problem can be solved using standard convex optimization techniques, e.g., the interior-point method. However, the obtained solution, denoted by $\bar{w}$, may not satisfy the constant-modulus constraint (15c). To address this issue, we normalize each element of $\bar{w}$ as follows:
\begin{equation}
[w^*]_m = \frac{1}{\sqrt{M}} \frac{[\bar{w}]_m}{|[\bar{w}]_m|}, \; m \in \mathcal{M}. \tag{22}
\end{equation}
Upon obtaining $w^*$, we proceed with the design of the receive beamformer for each RRH, namely
\begin{align}
\max_{\{q_l\}} & \quad |q_l H_l w^*|^2 \tag{23} \\
\text{s.t.} & \quad |[q_l]_n| = 1/\sqrt{N}, \; n \in \mathcal{N}.
\end{align}
It is readily shown that the optimal receive beamformer matches the phases of the effective channel $H_l w^*$, i.e.,
\begin{equation}
[q^*_l]_n = \frac{1}{\sqrt{N}} \frac{([H_l w^*]_n)^*}{|[H_l w^*]_n|}, \; n \in \mathcal{N}. \tag{24}
\end{equation}
Next, we replace $\hat{q}_l$ in (19) with $q^*_l$ and solve (19) again. This process is repeated until the result converges. We summarize the alternating iterative scheme in Algorithm 1.
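Since the listing of Algorithm 1 is not reproduced here, the following CVXPY sketch gives one plausible reading of the alternating scheme; the random initialization of $\hat{w}$, the fixed iteration count, and the solver defaults are our choices.

```python
import numpy as np
import cvxpy as cp

def fronthaul_beamforming(H_list, n_iters=10, seed=0):
    # Alternating design for P1: fix Q, take one SCA step (19)-(21) in w,
    # restore the constant modulus via (22), then update each q_l via (24).
    N, M = H_list[0].shape
    rng = np.random.default_rng(seed)
    w_hat = np.exp(1j * rng.uniform(0, 2 * np.pi, M)) / np.sqrt(M)
    q_list = [np.ones(N, dtype=complex) / np.sqrt(N) for _ in H_list]
    for _ in range(n_iters):
        w = cp.Variable(M, complex=True)
        eta = cp.Variable()
        cons = [cp.abs(w) <= 1 / np.sqrt(M)]        # relaxed modulus constraint (19c)
        for q_l, H_l in zip(q_list, H_list):
            h_l = q_l @ H_l                          # effective row h_l = q_l H_l
            Hl = np.outer(h_l.conj(), h_l)           # \hat H_l = h_l^H h_l
            const = np.real(w_hat.conj() @ Hl @ w_hat)
            lin = 2 * cp.real(w_hat.conj() @ Hl @ (w - w_hat))   # cf. (20)
            cons.append(const + lin >= eta)          # (21b)
        cp.Problem(cp.Maximize(eta), cons).solve()
        w_hat = np.exp(1j * np.angle(w.value)) / np.sqrt(M)      # normalization (22)
        q_list = [np.conj(H_l @ w_hat) / np.abs(H_l @ w_hat) / np.sqrt(N)  # (24)
                  for H_l in H_list]
    return w_hat, q_list
```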
### B. Beamforming Design at the Access Link
Problem $\mathbb{P}2$ is also difficult to handle due to the non-smooth and non-convex objective function (16a) and the constant-modulus constraint (16c). Furthermore, the joint optimization of analog and digital beamforming is extremely challenging. Hence, we first design the analog beamformer \( Z \), and rewrite \( g_k Z v_k \) as
\[
g_k Z v_k = \sum_{l=1}^{L} g_k^l z_l v_k(l), \tag{25}
\]
where \( g_k = [g_k^1, \ldots, g_k^L] \) with \( g_k^l \in \mathbb{C}^{1 \times N} \) denoting the sub-channel vector from RRH \( l \) to actuator \( k \), and \( v_k(l) \) denotes the \( l \)-th entry of \( v_k \). Similar to [38], we design each sub-beamformer \( z_l \) to maximize the equivalent sub-channel gain \( |g_k^l z_l|^2 \). The optimal sub-beamformer can be expressed as
\[
[z_l]_n = \frac{1}{\sqrt{N}} \frac{([g_k^l]_n)^*}{|[g_k^l]_n|}, \; n \in \mathcal{N}. \tag{26}
\]
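(26) is a one-line phase-matching operation; the sketch below ignores the corner case of exactly-zero channel entries.

```python
import numpy as np

def analog_subbeamformer(g_kl: np.ndarray) -> np.ndarray:
    # (26): constant-modulus phase matching to the sub-channel g_k^l.
    return np.conj(g_kl) / np.abs(g_kl) / np.sqrt(g_kl.size)
```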
In addition, to guarantee fairness among the actuators, we maximize the equivalent sub-channel gain for each actuator in turn. We thereby obtain the analog beamformer \( Z^* \) and the equivalent channel vector \( \tilde{g}_k = g_k Z^* \) for each actuator \( k \). On this basis, (9) and (11) can be rewritten as
\[
R_{2,k}(V) = \log \left( 1 + \frac{|\tilde{g}_k v_k|^2}{\sum_{i \neq k}^{K} |\tilde{g}_k v_i|^2 + \sigma_k^2} \right), \tag{27}
\]
\[
R^s_{2,k}(V) = \log \left( 1 + \frac{|\tilde{g}_s v_k|^2}{\sum_{i \neq k}^{K} |\tilde{g}_s v_i|^2 + \sigma_s^2} \right), \tag{28}
\]
where \( \tilde{g}_s = g'_s Z^* \) denotes the equivalent channel vector of Eve \( s \).
In practice, Eves are usually passive, and their CSI may not be perfectly known at the CP [39]. Therefore, channel uncertainty is unavoidable and should be considered. In this paper, we model the equivalent Eve channel as
\[
\tilde{g}_s = \hat{g}_s + \Delta g_s,
\]
where \( \hat{g}_s \) denotes the estimated equivalent channel vector, and \( \Delta g_s \) is the corresponding error, which is assumed to be bounded by \( \tau_s \), namely \( \Delta g_s (\Delta g_s)^H \leq \tau_s \).
Next, we introduce an auxiliary variable \( t \) and transform \( \mathbb{P}2 \) into
\begin{align}
\min_{\{V, t\}} & \quad t \tag{29a} \\
\text{s.t.} & \quad R_{2,k}(V) - \max_{s \in \mathcal{S}} \left\{ R^s_{2,k}(V) \right\} \geq \hat{\Psi}_k / t, \; k \in \mathcal{K}, \tag{29b} \\
& \quad \tilde{g}_s = \hat{g}_s + \Delta g_s, \; \Delta g_s (\Delta g_s)^H \leq \tau_s, \; s \in \mathcal{S}, \tag{29c} \\
& \quad \sum_{k=1}^{K} |v_k(l)|^2 \leq P_{\text{max}}^l, \; l \in \mathcal{L}. \tag{29d}
\end{align}
Problem (29) is non-convex due to the constraints (29b) and (29c). Next, we propose advanced approximation approaches to transform them into convex ones. By introducing auxiliary variables \( \alpha_k \) and \( \beta_k \), (29b) can be split into the following constraints
\[
\log(1 + \alpha_k) - \log(1 + \beta_k) \geq \hat{\Psi}_k / t, \; k \in \mathcal{K}, \tag{30a}
\]
\[
\alpha_k \leq \frac{|\tilde{g}_k v_k|^2}{\sum_{i \neq k}^{K} |\tilde{g}_k v_i|^2 + \sigma_k^2}, \; k \in \mathcal{K}, \tag{30b}
\]
\[
\beta_k \geq \frac{|\tilde{g}_s v_k|^2}{\sum_{i \neq k}^{K} |\tilde{g}_s v_i|^2 + \sigma_s^2}, \; k \in \mathcal{K}, \; s \in \mathcal{S}. \tag{30c}
\]
Nonetheless, (30a)-(30c) are still non-convex. Next, we approximate \( \log(1 + \beta_k) \) by its first-order Taylor expansion, and obtain
\[
\log(1 + \beta_k) \approx \log(1 + \beta_k^{[i]}) + \frac{\beta_k - \beta_k^{[i]}}{1 + \beta_k^{[i]}}, \tag{31}
\]
where \( \beta_k^{[i]} \) denotes the value of \( \beta_k \) at the \( i \)th iteration. On this basis, (30a) can be transformed into the following convex constraint
\[
\log(1 + \alpha_k) - \log(1 + \beta_k^{[i]}) - \frac{\beta_k - \beta_k^{[i]}}{1 + \beta_k^{[i]}} \geq \hat{\Psi}_k / t, \; k \in \mathcal{K}. \tag{32}
\]
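A quick numeric sanity check of (31)-(32): because $\log(1+\beta)$ is concave, its first-order expansion is a global upper bound, so replacing $\log(1+\beta_k)$ by (31) in (30a) only tightens the constraint. The expansion point and grid below are arbitrary.

```python
import numpy as np

beta_i = 0.8                                   # expansion point beta_k^{[i]}
beta = np.linspace(0.0, 5.0, 51)
exact = np.log(1 + beta)
surrogate = np.log(1 + beta_i) + (beta - beta_i) / (1 + beta_i)
assert np.all(surrogate >= exact - 1e-12)      # tangent of a concave function
```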
To deal with (30b), we first define \( G_k = \tilde{g}_k^H \tilde{g}_k \) and \( V_k = v_k v_k^H \). By introducing the auxiliary variable \( \mu_k \), (30b) can be split into the following constraints:
\[
\alpha_k \mu_k \leq \text{Tr}(G_k V_k), \; k \in \mathcal{K}, \tag{33a}
\]
\[
\mu_k \geq \sum_{r \neq k}^{K} \text{Tr}(G_k V_r) + \sigma_k^2, \; k \in \mathcal{K}. \tag{33b}
\]
In fact, (33a) and (33b) can be regarded as SDP constraints with Rank\( (\mathbf{V}_k) = 1 \). In addition, according to [40], \( \alpha_k \mu_k \) has the following upper bound as a valid surrogate function \( \alpha_k \mu_k \leq \frac{\alpha_k^{[i]}}{2\mu_k^{[i]}} \mu_k^2 + \frac{\mu_k^{[i]}}{2\alpha_k^{[i]}} \alpha_k^2 \), and (33a) can thus be transformed into the following convex constraint:
\[
\frac{\alpha_k^{[i]}}{2\mu_k^{[i]}} \mu_k^2 + \frac{\mu_k^{[i]}}{2\alpha_k^{[i]}} \alpha_k^2 \leq \text{Tr}(G_k V_k), \; k \in \mathcal{K}, \tag{34}
\]
where \( \alpha_k^{[i]} \) and \( \mu_k^{[i]} \) represent the values of \( \alpha_k \) and \( \mu_k \) at the \( i \)th iteration, respectively.
To handle (30c), we introduce the classic S-Procedure [41]:
**Lemma 1:** Define the functions
\[
f_i(v) = v U_i v^H + 2\text{Re}(c_i v^H) + b_i, \; i \in \{1, 2\}, \tag{35}
\]
where \( v \in \mathbb{C}^{1 \times \Gamma} \), \( U_i \in \mathbb{C}^{\Gamma \times \Gamma} \), \( c_i \in \mathbb{C}^{1 \times \Gamma} \), \( b_i \in \mathbb{R} \), and \( \Gamma \) is any positive integer. If the implication
\[
f_1(v) \leq 0 \Rightarrow f_2(v) \leq 0 \tag{36}
\]
holds, there must exist a \( \lambda \geq 0 \) satisfying
\[
\lambda \begin{bmatrix} U_1 & c_1^H \\ c_1 & b_1 \end{bmatrix} - \begin{bmatrix} U_2 & c_2^H \\ c_2 & b_2 \end{bmatrix} \succeq \mathbf{0}. \tag{37}
\]
Combining (29c) and (30c), we have
\[
\Delta g_s V_k (\Delta g_s)^H + 2\text{Re}\big(\hat{g}_s V_k (\Delta g_s)^H\big) + \hat{g}_s V_k (\hat{g}_s)^H \leq \beta_k \Big\{ \sum_{r \neq k}^{K} \big[ \Delta g_s V_r (\Delta g_s)^H + 2\text{Re}\big(\hat{g}_s V_r (\Delta g_s)^H\big) + \hat{g}_s V_r (\hat{g}_s)^H \big] + \sigma_s^2 \Big\}. \tag{38}
\]
Then, we introduce the auxiliary variables \( \psi_k, \kappa_k \) and \( \phi_k \), and split (38) into the following constraints:
\[
\Delta g_s V_k (\Delta g_s)^H + 2\text{Re}\big(\hat{g}_s V_k (\Delta g_s)^H\big) + \hat{g}_s V_k (\hat{g}_s)^H - \psi_k \leq 0, \tag{39a}
\]
\[
\Delta g_s \Xi_k (\Delta g_s)^H + 2\text{Re}\big(\hat{g}_s \Xi_k (\Delta g_s)^H\big) + \hat{g}_s \Xi_k (\hat{g}_s)^H + \phi_k - \sigma_s^2 \leq 0, \tag{39b}
\]
\[
\Delta g_s (\Delta g_s)^H - \tau_s \leq 0, \tag{39c}
\]
\[
\psi_k \leq \kappa_k^2, \quad \kappa_k^2 \leq \beta_k \phi_k, \tag{39d}
\]
where \( \Xi_k = -\sum_{r \neq k}^{K} V_r \).
Algorithm 2: The Proposed Iterative Algorithm for Solving $\mathbb{P}2$.
1 **Initialize** $\alpha_k^{[i]}, \beta_k^{[i]}, \mu_k^{[i]}, \kappa_k^{[i]}, i = 1$, the maximum iteration index $I_{\text{max}}$.
2 **repeat**
3 Solve the relaxed problem (44) and obtain the solution $\mathbf{V}_k^*, \alpha_k^*, \beta_k^*, \mu_k^*, \gamma_k^*, \varepsilon_k^*, \kappa_k^*, \phi_k^*, \psi_k^*$.
4 Update $i \leftarrow i + 1$.
5 Update $\alpha_k^{[i]} \leftarrow \alpha_k^*, \beta_k^{[i]} \leftarrow \beta_k^*, \mu_k^{[i]} \leftarrow \mu_k^*, \kappa_k^{[i]} \leftarrow \kappa_k^*$.
6 **until** $i = I_{\text{max}}$ or Convergence;
Combining Lemma 1, (39a) and (39c), we can obtain the following convex linear matrix inequality (LMI)
$$
\begin{bmatrix}
\gamma_k \mathbf{I} - V_k & -(\hat{g}_s V_k)^H \\
-\hat{g}_s V_k & \psi_k - \gamma_k \tau_s - \hat{g}_s V_k (\hat{g}_s)^H
\end{bmatrix} \succeq \mathbf{0}, \tag{40}
$$
where $\gamma_k \geq 0$ is a slack variable.
Similarly, combining Lemma 1, (39b) and (39c), (39b) can be recast into the following convex LMI:
$$
\begin{bmatrix}
\varepsilon_k \mathbf{I} - \Xi_k & -(\hat{g}_s \Xi_k)^H \\
-\hat{g}_s \Xi_k & \sigma_s^2 - \phi_k - \varepsilon_k \tau_s - \hat{g}_s \Xi_k (\hat{g}_s)^H
\end{bmatrix} \succeq \mathbf{0}, \tag{41}
$$
where $\varepsilon_k \geq 0$ is a slack variable.
In addition, $\kappa_k^2 \leq \beta_k \phi_k$ can be expressed in the following matrix form according to the Schur complement lemma [42]
$$
\begin{bmatrix}
\beta_k & \kappa_k \\
\kappa_k & \phi_k
\end{bmatrix} \succeq 0.
$$
(42)
Moreover, since $\kappa_k^2$ is convex, its first-order Taylor expansion provides a global lower bound, $\kappa_k^2 \geq (\kappa_k^{[i]})^2 + 2(\kappa_k - \kappa_k^{[i]})\kappa_k^{[i]}$. Therefore, $\psi_k \leq \kappa_k^2$ in (39d) can be replaced by the following convex constraint:
$$
(\kappa_k^{[i]})^2 + 2(\kappa_k - \kappa_k^{[i]})\kappa_k^{[i]} \geq \psi_k, k \in \mathcal{K}.
$$
(43)
Finally, we formulate the SDP problem as
\begin{align}
\min_{\{V_k, \alpha_k, \beta_k, \mu_k, \gamma_k, \varepsilon_k, \kappa_k, \phi_k, \psi_k, t\}} & \quad t \tag{44a} \\
\text{s.t.} & \quad \sum_{k=1}^{K} V_k(l, l) \leq P_{\text{max}}^l, \; l \in \mathcal{L}, \tag{44b} \\
& \quad \text{Rank}(V_k) = 1, \; V_k \succeq \mathbf{0}, \; k \in \mathcal{K}, \tag{44c} \\
& \quad \text{(32), (33b), (34), (40), (41), (42), (43)}. \tag{44d}
\end{align}
Obviously, (44) is a non-convex SDP due to the rank-one constraint. However, by removing the rank-one part of (44c), the above problem becomes a convex SDP and can be solved efficiently by numerical solvers such as SDPT3 [43]. To obtain the solution of $\mathbb{P}2$, we iteratively solve the relaxed version of (44). The procedure is summarized in Algorithm 2. Meanwhile, we have the following proposition.
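To make the structure of the relaxed problem concrete, the CVXPY sketch below assembles a representative subset of (44): the power constraint (44b), the surrogates (32), (33b), (34) and (43), and (42) in second-order-cone form. The robust LMIs (40)-(41) would be appended analogously via `cp.bmat`; their omission, the function name, and the use of the SCS solver in place of SDPT3 are our simplifications.

```python
import numpy as np
import cvxpy as cp

def sca_sdp_step(G, sigma2, P_max, Psi_hat, alpha_i, beta_i, mu_i, kappa_i):
    # One inner convex problem of Algorithm 2, with the rank-one part of (44c)
    # dropped. G[k] is the L x L Gram matrix G_k of actuator k's equivalent
    # (post-analog-beamforming) channel row.
    K, L = len(G), G[0].shape[0]
    V = [cp.Variable((L, L), hermitian=True) for _ in range(K)]
    t = cp.Variable(pos=True)
    alpha = cp.Variable(K, pos=True)
    beta = cp.Variable(K, pos=True)
    mu = cp.Variable(K, pos=True)
    kappa = cp.Variable(K, pos=True)
    phi = cp.Variable(K, pos=True)
    psi = cp.Variable(K, pos=True)
    cons = [Vk >> 0 for Vk in V]
    for l in range(L):                       # (44b): per-RRH power constraint
        cons.append(cp.real(sum(Vk[l, l] for Vk in V)) <= P_max)
    for k in range(K):
        # (32): log(1 + beta_k) linearized at beta_i[k]
        lin = np.log(1 + beta_i[k]) + (beta[k] - beta_i[k]) / (1 + beta_i[k])
        cons.append(cp.log1p(alpha[k]) - lin >= Psi_hat[k] * cp.inv_pos(t))
        # (33b): interference-plus-noise upper bound mu_k
        cons.append(mu[k] >= sum(cp.real(cp.trace(G[k] @ V[r]))
                                 for r in range(K) if r != k) + sigma2)
        # (34): convex upper bound on alpha_k * mu_k
        cons.append(alpha_i[k] / (2 * mu_i[k]) * cp.square(mu[k])
                    + mu_i[k] / (2 * alpha_i[k]) * cp.square(alpha[k])
                    <= cp.real(cp.trace(G[k] @ V[k])))
        # (42) as a rotated second-order cone: kappa_k^2 <= beta_k * phi_k
        cons.append(cp.norm(cp.hstack([2 * kappa[k], beta[k] - phi[k]]))
                    <= beta[k] + phi[k])
        # (43): psi_k below the linearized kappa_k^2
        cons.append(psi[k] <= kappa_i[k] ** 2
                    + 2 * (kappa[k] - kappa_i[k]) * kappa_i[k])
    prob = cp.Problem(cp.Minimize(t), cons)
    prob.solve(solver=cp.SCS)
    return prob, V
```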
**Proposition 1:** The objective value of (44) produced by Algorithm 2 is non-increasing over the iterations, and it converges to a stationary solution.
*Proof:* To prove Proposition 1, we first confirm that the solution of problem (44) at the $i$th iteration is also a feasible solution at the $(i + 1)$th iteration. Assume that $V_k^*, \alpha_k^*, \beta_k^*, \mu_k^*, \gamma_k^*, \varepsilon_k^*, \kappa_k^*, \phi_k^*, \psi_k^*$ are the optimal solutions of problem (44) at the $i$th iteration. Convex approximations are applied to constraints (32), (34) and (43); we therefore need to show that these constraints still hold at the $(i + 1)$th iteration for the solutions obtained at the $i$th iteration. To facilitate the analysis, we define the following function
$$
f(\beta_k^{[i]}) = \log(1 + \beta_k^{[i]}) + \frac{\beta_k^* - \beta_k^{[i]}}{1 + \beta_k^{[i]}}. \tag{45}
$$
By setting the expansion point at the $(i + 1)$th iteration to the value obtained at the $i$th iteration, namely $\beta_k^{[i+1]} = \beta_k^*$, we have
$$
\begin{align}
f(\beta_k^{[i+1]}) &= \log(1 + \beta_k^{[i+1]}) + \frac{\beta_k^* - \beta_k^{[i+1]}}{1 + \beta_k^{[i+1]}} \tag{46a} \\
&= \log(1 + \beta_k^*) \tag{46b} \\
&\leq \log(1 + \beta_k^{[i]}) + \frac{\beta_k^* - \beta_k^{[i]}}{1 + \beta_k^{[i]}} = f(\beta_k^{[i]}), \tag{46c}
\end{align}
$$
where (46c) follows from the concavity of $\log(1 + \beta_k)$: its first-order Taylor expansion at $\beta_k^{[i]}$ is a global upper bound. Hence the surrogate term in (32) does not increase from iteration $i$ to iteration $i + 1$, and (32) still holds at the $(i + 1)$th iteration for the solutions obtained at the $i$th iteration. The same conclusion can be drawn for (34) and (43) following a similar procedure, and the details are thus omitted here. As a result, the solution of problem (44) at the $i$th iteration is also a feasible solution at the $(i + 1)$th iteration.
Since problem (44) is convex, the objective function value achieved at each iteration will decrease or at least maintain the value achieved at the previous iteration. Due to the limited transmit power, the objective function value has a lower bound and converges to a stationary solution.
Finally, we need to consider whether the obtained solution satisfies the rank-one constraint or not. The rank-one solution is satisfied if the following condition holds [44]:
$$
\frac{\Upsilon_{\text{max}}(\mathbf{V}_k)}{\text{Tr}(\mathbf{V}_k)} = 1, k \in \mathcal{K},
$$
(47)
where $\Upsilon_{\text{max}}(\mathbf{V}_k)$ denotes the maximum eigenvalue of $\mathbf{V}_k$. To verify (47), we perform 1,000 simulation runs, and the results are summarized in Table II. From Table II, one can observe that the probability of obtaining a rank-one solution reaches almost 99%. Therefore, the proposed algorithm yields rank-one solutions with high probability. Even when a solution is not rank-one, several advanced methods can be applied to reconstruct a rank-one solution, e.g., the hybrid beamforming design scheme [13] and the randomization beamforming design scheme [45].
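A check of (47), together with a randomization fallback in the spirit of [45], might look as follows; the tolerance, the number of candidates, and the omitted feasibility screening are placeholders of ours.

```python
import numpy as np

def extract_beamformer(V_k: np.ndarray, tol=1e-3, n_cand=50, rng=None):
    # Condition (47): the top eigenvalue should carry (almost) all of Tr(V_k).
    eigvals, eigvecs = np.linalg.eigh(V_k)              # ascending eigenvalues
    if eigvals[-1] / np.trace(V_k).real >= 1 - tol:
        return np.sqrt(eigvals[-1]) * eigvecs[:, -1]    # numerically rank-one
    # Gaussian randomization: draw candidates with covariance V_k; a complete
    # implementation would rescale and screen them for feasibility.
    rng = rng or np.random.default_rng()
    root = eigvecs @ np.diag(np.sqrt(np.maximum(eigvals, 0.0)))
    cands = root @ (rng.standard_normal((len(V_k), n_cand))
                    + 1j * rng.standard_normal((len(V_k), n_cand))) / np.sqrt(2)
    return cands[:, 0]
```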
## IV. Numerical Results
In this section, numerical results are presented to evaluate the performance of the proposed schemes. For simplicity, we assume that all RRHs have the same maximum transmit power,
**TABLE III:** Default simulation parameters.

| Symbol | Value | Symbol | Value | Symbol | Value |
|--------|--------|--------|--------|--------|--------|
| $M$ | 120 | $N$ | 6 | $L$ | 6 |
| $K$ | 4 | $S$ | 2 | $\sigma^2$ | 0.01 |
| $d_0$ | 10 m | $\varrho$ | 2 | $\zeta$ | 4 |
| $G$ | 4 | $P$ | 46 dBm | $P_{\text{max}}$ | 40 dBm |
| $U$ | 10 | - | - | - | - |
**Fig. 4:** The fronthaul link delay versus iteration.
i.e., $P^l_{\text{max}} = P_{\text{max}}$. All noise powers are assumed to be the same, i.e., $\sigma_k^2 = \sigma_s^2 = \sigma^2$. All RRHs, actuators and Eves are uniformly distributed within a circular cell of 100 m radius, and the distance between the CP and the cell center is 300 m. We assume that the channel between the CP and the RRHs has a single path, and the path loss is modeled as $1/(1+(d_l/d_0)^\varrho)$ [46], where $d_l$, $d_0$ and $\varrho$ denote the distance between the CP and RRH $l$, the reference distance, and the path-loss exponent, respectively. The AoDs/AoAs are assumed to be uniformly distributed within $[0, 2\pi]$. In addition, the channels from the RRHs to the actuators and Eves contain $G = 4$ paths, and the path loss is modeled as $1/(1+(d_s/d_0)^\zeta)$, where $d_s$ is the corresponding link distance and $\zeta$ the access-link path-loss exponent. The mmWave bandwidth is set to 1 GHz. The default simulation parameters are listed in Table III. Unless otherwise specified, these default values are used in the simulations.
Figure 4 shows the convergence behavior of the proposed alternating iterative algorithm for solving $\mathbb{P}1$ under different CP transmit powers. We set the file size requested by each actuator to 200 Mbits, and half of each file is cached in the RRHs. One can observe that the proposed algorithm converges in only about 4 iterations. We also plot the convergence behavior of the proposed Algorithm 2 for solving $\mathbb{P}2$ in Fig. 5. We consider both the perfect CSI scenario ($\tau = 0$) and two imperfect ones with different error bounds. Note that Fig. 5 only plots the secure transmission delay from the RRHs to the actuators. It is clear that the secure transmission delay first decreases and then converges to a certain value in all scenarios. As expected, the secure transmission delay increases with $\tau$, and achieves its lowest value under the perfect CSI scenario.
**Fig. 5:** The secure transmission delay versus iteration.
**Fig. 6:** The secure transmission delay versus cache ratio of each file at the RRHs.
Figure 6 plots the secure transmission delay versus the cache ratio of each file at the RRHs. Here, we consider two different file requirements for the actuators: $\Omega = [200 \ 200 \ 200 \ 200]$ denotes the case where each actuator requests a 200 Mbits file, while $\Omega = [200 \ 400 \ 200 \ 400]$ denotes the case where the four actuators request files of 200 Mbits, 400 Mbits, 200 Mbits and 400 Mbits, respectively. In Fig. 6, “0” on the horizontal axis means that the files requested by the actuators are not cached in the RRHs at all, while “1” means that they are fully cached in the RRHs. One can see that the secure transmission delay decreases linearly with the cache ratio for all schemes. The reason is that a higher cache ratio reduces the data transmission from the CP to the RRHs, and consequently lowers the fronthaul link delay.
In Fig. 7, we show the secure transmission delay versus the maximum transmit power of each RRH under different cache ratios, where $\Omega = [200 \ 200 \ 200 \ 200]$. “Cache ratio $=[0.2 \ 0.2 \ 0.2 \ 0.2]$” means that 20% of the file requested by each actuator is cached in the RRHs, while “Cache ratio $=[0.5 \ 0.5 \ 0.5 \ 0.5]$” means that 50% of the file requested by each actuator is cached
Fig. 7: The secure transmission delay versus maximum transmit power of each RRH with $\Omega = [200 \ 200 \ 200 \ 200]$.
Fig. 8: The secure transmission delay versus maximum transmit power of each RRH with $\Omega = [200 \ 400 \ 200 \ 400]$.
in the RRHs. One can see that the secure transmission delay decreases with the maximum transmit power of each RRH. In addition, a higher cache ratio leads to a smaller delay, which is also evidenced in Fig. 6. Fig. 8 shows the results when $\Omega = [200 \ 400 \ 200 \ 400]$. We can reach the same conclusions as in Fig. 7, and the only difference is that the secure transmission delay is higher due to the larger files requested by actuators.
We show the secure transmission delay versus the number of Eves in Fig. 9. Here, we set the cache ratio to $[0.2 \ 0.2 \ 0.2 \ 0.2]$ and the required file sizes to $\Omega = [200 \ 200 \ 200 \ 200]$. It can be seen that the secure transmission delay increases with the number of Eves. This coincides with (12), which indicates that the presence of more Eves results in a lower secrecy rate. Additionally, a larger CSI estimation error also leads to a higher secure transmission delay. We also show results under cache ratio $=[0.5 \ 0.5 \ 0.5 \ 0.5]$ in Fig. 10. One can observe that the secure transmission delay is much lower, since more of the requested data can be directly fetched from the RRHs.
Note that the specific values of the simulation parameters, e.g., the estimated channel error bound and each RRH's transmit power, are adopted as references and may differ in a real network. Nonetheless, these specific values do not affect the evaluation of the proposed algorithm, and the observed trends hold for other values. Once the parameter values are given, we can directly obtain the optimal beamforming and the system performance via the proposed algorithm. For example, a larger channel estimation error leads to a higher transmission delay, and a larger RRH transmit power yields a smaller transmission delay.
## V. Conclusion
In this paper, we investigated a two-phase secure transmission delay minimization problem in an edge cache-assisted mmWave C-RAN. For the first transmission phase, a joint design of the transmit beamforming at the CP and the receive beamforming at the RRHs was proposed. For the second transmission phase, we first designed the analog beamforming for each RRH, and then transformed the formulated problem into a series of convex subproblems via the SCA technique, the $S$-procedure and SDP relaxation. Finally, an iterative algorithm was proposed, which converges to at least a local optimum. The presented simulation results show that the solutions are rank-one with high probability (nearly 99%), and that the proposed algorithms achieve fast convergence. Moreover, a detailed illustration has been provided of how the secure transmission delay is affected by different system parameters, e.g., the maximum transmit power at the RRHs, the cache ratio, the channel estimation error, and the number of Eves. These results can serve as references during system design, where different tradeoffs need to be considered.
## References
[1] Z. Piao, M. Peng, Y. Liu, and M. Daneshmand, “Recent advances of edge cache in radio access networks for internet of things: Techniques, performances, and challenges,” IEEE Internet Things J., vol. 6, pp. 1010–1028, Feb. 2019.
[2] Y. L. Chen, J. Loo, T. C. Chiau, and L. Wang, “Dynamic network slicing for multi-tiered heterogeneous radio access networks,” IEEE Trans. Wireless Commun., vol. 17, pp. 2146–2161, Apr. 2018.
[3] J. An, K. Yang, J. Wu, N. Ye, S. Guo, and Z. Liao, “Achieving sustainable ultra-dense heterogeneous networks for 5G,” IEEE Commun. Mag., vol. 55, pp. 84–90, Dec. 2017.
[4] W. Hao, X. Chu, H. Zhao, S. Yang, G. Sun, and K. Wong, “Green communication for NOMA-based C-RAN,” IEEE Internet Things J., vol. 6, pp. 666–678, Feb. 2019.
[5] T. T. Vu, D. T. Ngo, M. N. Dao, S. Durrani, and R. H. Middleton, “Spectral and energy efficiency maximization for content-centric C-RANs with edge caching,” IEEE Trans. Commun., vol. 66, pp. 6628–6642, Dec. 2018.
[6] J. Yao and N. Ansari, “Joint content placement and storage allocation in C-RANs for IoT sensing service,” IEEE Internet Things J., vol. 6, pp. 1060–1067, Feb. 2019.
[7] J. Kwak, J. Kim, B. Le, and S. Chong, “Hybrid content caching in 5G wireless networks: Central versus edge caching,” IEEE Trans. Wireless Commun., vol. 17, pp. 3030–3045, May 2018.
[8] S. He, C. Qi, Y. Huang, Q. Hou, and A. Nallanathan, “Two-level transmission scheme for cache-enabled fog radio access networks,” IEEE Trans. Commun., vol. 67, pp. 445–456, Jan. 2019.
[9] W. Hao, G. Sun, O. Mutu, J. Zhang, and S. Yang, “Coordinated hybrid precoding design in millimeter wave fog-RAN,” IEEE Systems J., to appear, 2019.
[10] M. Sawahashi, Y. Kishiyama, A. Morimoto, D. Nishikawa, and M. Tanno, “Coordinated multipoint transmission/reception techniques for LTE-advanced systems with distributed MIMO,” IEEE Wireless Commun., vol. 17, pp. 26–34, Jun. 2010.
[11] B. Dai, Y. Liu, and W. Yu, “Optimized base-station cache allocation for cloud radio access network with multicast backhaul,” IEEE J. Sel. Areas Commun., vol. 36, pp. 1737–1750, Aug. 2018.
[12] B. Hu, Y. Hua, J. Zhang, C. Chen, and X. Guan, “Joint fractional multicast beamforming and user-centric cache allocation in downlink C-RANs,” IEEE Trans. Wireless Commun., vol. 16, pp. 5395–5409, Aug. 2017.
[13] D. W. K. Ng, E. S. Lo, and R. Schober, “Multiobjective resource allocation for secure communication in cognitive radio networks with wireless information and power transfer,” IEEE Trans. Veh. Technol., vol. 65, pp. 1166–1181, May 2016.
[14] C. Lin, K. Yang, and M. Abouln, “Physical layer security for cooperative NOMA systems,” IEEE Trans. Veh. Technol., vol. 67, pp. 4645–4649, May 2018.
[15] N. Yang, L. Wang, G. Geraci, M. Elkashlan, J. Yuan, and M. D. Renzo, “Safeguarding 5g wireless communication networks using physical layer secrecy,” IEEE Commun. Mag., vol. 53, pp. 20–27, Apr. 2015.
[16] A. D. Wyner, “The wire-tap channel,” Bell Syst. Tech. J., vol. 54, no. 8, pp. 1355–1387, 1975.
[17] Z. Chu, H. Xing, M. Johnston, and S. Le Goff, “Secrecy rate optimizations for a MISO secrecy channel with multiple multiantennas eavesdroppers,” IEEE Trans. Wireless Commun., vol. 15, pp. 283–297, Jan. 2016.
[18] W. Hao, O. Mutu, and H. Zhang, “Content placement and resource allocation in massive mimo h-rrcs with limited fronthaul capacity,” IEEE Trans. Wireless Commun., vol. 17, pp. 7691–7703, Nov. 2018.
[19] M. Tao, E. Chen, H. Zhou, and W. Yu, “Content-centric sparse multicast beamforming for cache-enabled cloud RAN,” IEEE Trans Wireless Commun., vol. 15, pp. 6118–6131, Sep. 2016.
[20] Y. Fu, W. Wen, Z. Zhao, T. Q. S. Quek, S. Jin, and F. Zheng, “Dynamic power control for NOMA transmissions in wireless caching networks,” IEEE Access, vol. 6, pp. 38,083–38,093, Oct. 2018.
[21] S. He, J. Ren, Y. Wang, Y. Li, Y. Zhang, W. Zhang, and S. Shen, “Cloud-edge coordinated transmission: Low-latency multicasting transmission,” IEEE J. Sel. Areas Commun., vol. 37, pp. 1144–1158, May 2019.
[22] D. Liu and C. Yang, “Energy efficiency of downlink networks with caching at base stations,” IEEE J. Sel. Areas Commun., vol. 34, pp. 907–923, Mar. 2016.
[23] J. Liu, B. Bai, J. Zhang, and K. B. Letaief, “Cache placement in fog-RANs: From centralized to distributed algorithms,” IEEE Trans Wireless Commun., vol. 16, pp. 7039–7051, Nov. 2017.
[24] G. Zheng, H. A. Surawera, and I. Krikidis, “Optimization of hybrid cache placement for collaborative relaying,” IEEE Commun. Lett., vol. 21, pp. 446–449, Feb. 2017.
[25] Y. Zhu, G. Zheng, L. Wang, K. Wong, and L. Zhao, “Content placement in cache-enabled sub-6 GHz and millimeter-wave multi-antenna dense small cell networks,” IEEE Trans Wireless Commun., vol. 17, pp. 2843–2856, May 2018.
[26] W. Wen, Y. Cui, F. Zheng, S. Jin, and Y. Jiang, “Random caching for secure and efficient resource transmission in heterogeneous wireless networks,” IEEE Trans. Commun., vol. 66, pp. 2809–2825, Jul. 2018.
[27] L. Xiang, D. W. K. Ng, R. Schober, and V. W. S. Wong, “Cache-enabled physical layer security for video streaming in backhaul-limited cellular networks,” IEEE Trans Wireless Commun., vol. 17, pp. 736–751, Feb. 2018.
[28] X. Xu and J. Yao, “Exploiting physical-layer security for multiuser multicarrier computation offloading,” IEEE Wireless Commun. Lett., vol. 8, pp. 9–12, Feb. 2019.
[29] T. Zheng, H. Wang, and J. Yuan, “Secure and energy-efficient transmissions in cache-enabled heterogeneous cellular networks: Performance analysis and optimization,” IEEE Trans. Commun., vol. 66, pp. 5554–5567, Nov. 2018.
[30] F. Cheng, G. Gui, N. Zhao, Y. Chen, J. Tang, and H. Sari, “UAV-relaying-assisted secure transmission with caching,” IEEE Trans. Commun., vol. 67, pp. 3140–3153, May 2019.
[31] M. K. Kiskandar and H. R. Sadadipour, “A secure approach for caching contents in wireless cellular networks,” IEEE Trans. Veh. Technol., vol. 66, pp. 10240–10258, Nov. 2017.
[32] L. T. Tan, R. Q. Hu, and L. Hanzo, “Twin-timescale artificial intelligence aided mobility-aware edge caching and computing in vehicular networks,” IEEE Trans. Veh. Technol., vol. 68, pp. 3086–3099, Apr. 2019.
[33] W. Hao, M. Zeng, Z. Chu, S. Yang, and G. Sun, “Energy-efficient resource allocation for downlink mmwave massive mimo hetnets with wireless backhaul,” IEEE Access, vol. 7, pp. 2457–2471, 2019.
[34] Z. Xiao, T. He, P. Xia, and X. Jia, “Hierarchical codebook design for beamforming training in millimeter-wave communication,” IEEE Trans Wireless Commun., vol. 15, pp. 3380–3392, May 2016.
[35] M. Zeng, N. Nguyen, O. A. Dobre, and H. V. Poor, “Securing downlink massive MIMO-NOMA systems against additive noise,” IEEE J. Sel. Topics Signal Process., vol. 13, pp. 685–699, Jun. 2019.
[36] Z. Chu, K. Cumanan, Z. Ding, M. Johnston, and S. Y. Le Goff, “Secrecy rate optimizations for a MIMO secrecy channel with a cooperative jammer,” IEEE Trans. Veh. Technol., vol. 64, pp. 1833–1847, May 2015.
[37] L. T. Tan and R. Q. Hu, “Mobility-aware edge caching and computing in vehicular networks: A deep reinforcement learning perspective,” IEEE Trans. Veh. Technol., vol. 67, pp. 10190–10203, Nov. 2018.
[38] C. Lin and G. Y. Li, “Energy-efficient design of indoor mmwave and sub-THz systems with antenna arrays,” IEEE Trans. Wireless Commun., vol. 15, pp. 4660–4672, Jul. 2016.
[39] Z. Chu, Z. Zhu, M. Johnston, and S. Y. Le Goff, “Sufficientless wireless communications: Achievable rates for MIMO secrecy channel,” IEEE Trans. Veh. Technol., vol. 65, pp. 6913–6925, Sep. 2016.
[40] P. Song, G. Scutari, F. Facchinei, and L. Lampariello, “D3M: Distributed multi-cell multigroup multicasting,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3741–3745, Mar. 2016.
[41] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, *Linear Matrix Inequalities in System and Control Theory*. Philadelphia, PA: SIAM, 1994.
[42] H. Zhang, A. Dong, S. Jin, and D. Yuan, “Joint transceiver and power splitting optimization for multiuser MIMO SWIPT under MSE QoS
[43] K.-C. Toh, M. J. Todd, and R. H. Tütüncü, “SDPT3 — a MATLAB software package for semidefinite programming, version 1.3,” *Optimization Methods and Software*, vol. 11, no. 1–4, pp. 545–581, 1999.
[44] B. Su, C. Ni, and W. Yu, “Joint transmit beamforming for SWIPT-enabled cooperative NOMA with channel uncertainties,” *IEEE Trans. Commun.*, vol. 67, pp. 4381–4392, Jun. 2019.
[45] N. D. Sidiropoulos, T. N. Davidson, and Zhi-Quan Luo, “Transmit beamforming for physical-layer multicasting,” *IEEE Trans. Signal Process.*, vol. 54, pp. 2210–2221, Jun. 2006.
[46] S. Wang, Y. Wu, J. Ren, Y. Huang, R. Schober, and Y. Zhang, “Hybrid precoder design for cache-enabled millimeter-wave radio access networks,” *IEEE Trans. Wireless Commun.*, vol. 18, pp. 1707–1722, Mar. 2019.
An Operational Semantics for a Fragment of PRS
Lavindra de Silva\textsuperscript{1}, Felipe Meneguzzi\textsuperscript{2} and Brian Logan\textsuperscript{3}
\textsuperscript{1} Institute for Advanced Manufacturing, University of Nottingham, Nottingham, UK
\textsuperscript{2} Pontifical Catholic University of Rio Grande do Sul, Porto Alegre, Brazil
\textsuperscript{3} School of Computer Science, University of Nottingham, Nottingham, UK
firstname.lastname@example.org, email@example.com, firstname.lastname@example.org
Abstract
The Procedural Reasoning System (PRS) is arguably the first implementation of the Belief–Desire–Intention (BDI) approach to agent programming. PRS remains extremely influential, directly or indirectly inspiring the development of subsequent BDI agent programming languages. However, perhaps surprisingly given its centrality in the BDI paradigm, PRS lacks a formal operational semantics, making it difficult to determine its expressive power relative to other agent programming languages. This paper takes a first step towards closing this gap, by giving a formal semantics for a significant fragment of PRS. We prove key properties of the semantics relating to PRS-specific programming constructs, and show that even the fragment of PRS we consider is strictly more expressive than the plan constructs found in typical BDI languages.
## 1 Introduction
The Procedural Reasoning System (PRS) [Georgeff and Ingrand, 1989; Georgeff and Lansky, 1987; 1986] is generally recognised as one of the first implementations of the Belief–Desire–Intention (BDI) [Bratman, 1987] model of agency and practical reasoning. PRS has been extremely influential, and is still widely used [Ghallab et al., 2016], particularly in robotics, e.g., [Ingrand et al., 1996; Alami et al., 1998; Foughal et al., 2016; Niemueller et al., 2017; Lemaignan et al., 2017]. For example, PRS-based systems secured first and second places in recent RoboCup and ICAPS logistics competitions. PRS has also influenced the design of many subsequent BDI-based agent programming languages, e.g., [Rao, 1996; Huber, 1999; Busetta et al., 1999; Winikoff et al., 2002; Morley and Myers, 2004; Sardina et al., 2006; Sardina and Padgham, 2011], though most of these languages implement only a subset of the programming language features supported by PRS. Surprisingly, given its centrality in the BDI paradigm, PRS lacks a formal operational semantics, which makes it difficult to determine its expressive power relative to other BDI agent programming languages, or to verify the correctness of PRS programs. For example, there is a widespread “folk belief” in the agent programming community that the plan graphs used by PRS are more expressive than most if not all other BDI agent formalisations, yet no proof of this intuition exists.
In this paper, we give a formal semantics for a significant fragment of PRS. We focus on language features specific to PRS, and in particular, its graph-based representation for plans and its programming constructs for maintaining a condition (i.e., maintenance goals). We make three main contributions. First, we develop a formalisation for the syntax of PRS as a directed bipartite graph (Section 2). Second, we provide an operational semantics that accounts for graph-based plans and adopting, suspending, resuming, and aborting (nested) maintenance goals (Section 3). Third, we prove key properties of the semantics of these PRS-specific programming constructs, and show that PRS plan-graphs are strictly more expressive than the plan rules found in typical BDI languages (Section 4). In Section 5, we briefly discuss related work and conclude.
## 2 PRS Syntax
We briefly recall the syntax and deliberation cycle of PRS, as defined in [Ingrand, 1991]. In the interests of brevity, we omit some features of the language, including meta-level reasoning, ‘true concurrency’, semaphores, and features such as ‘if’ and ‘else’ statements expressible in terms of the fragment we define. Plans in PRS consist of graphs. For compactness, and to allow a precise specification of the semantics, we adopt a plan-rule notation similar to that used in other BDI-based agent programming languages to specify the triggering and context conditions of plans, and specify plan bodies using a textual representation of bipartite graphs. We assume a first-order language with a vocabulary consisting of mutually disjoint and infinite sets of variable, constant, predicate, node, event-goal, and action symbols.
A PRS agent is defined by a belief base $\mathcal{B}$, an action-library $\Lambda$, and a plan-library $\Pi$. A \textit{belief base} is a set of ground atoms. An \textit{action-library} is a set of action-rules specifying the actions available to the agent. An \textit{action} is of the form $act(\vec{t})$, where $act$ is an n-ary action symbol denoting an evaluable function that may change the agent’s environment, and $\vec{t} = t_1, \ldots, t_n$ are (possibly ground) terms. \textit{Action-rules} are similar to STRIPS operators, and are of the form $act(\vec{v}) : \psi \leftarrow \Phi^+; \Phi^-$, where $\vec{v} = v_1, \ldots, v_n$ are variables; $\psi$ is a formula specifying the \textit{precondition} of the action; and the
add-list $\Phi^+$ and delete-list $\Phi^-$ are sets of atoms that specify the effects of executing the action. A plan-library $\Pi$ consists of a set of plan-rules of the form $e(\vec{t}) \varphi : \psi \leftarrow G$, where $e$ is an $n$-ary event-goal symbol, $\vec{t} = t_1, \ldots, t_n$ are terms, $\varphi$ and $\psi$ are formulas, and $G$ is a plan-body graph. The rule states that, if the context condition $\psi$ holds, $G$ is a ‘standard operating procedure’ for achieving the event-goal $e(\vec{t})$ or the goal-condition $\varphi$.
Plan-body graphs are built from the following set of user programs: actions; belief addition $+b$ adds the atom $b$ to $B$; belief removal $-b$ removes the atom $b$ from $B$; test $?\phi$, where $\phi$ is a formula, tests whether $\phi$ holds in $B$; event-goal program or goal-condition program $!ev$, where $ev \in \{e, \phi\}$, specifies that $ev$ needs to be achieved; wait $\text{wt}(\phi)$ waits until formula $\phi$ holds in $B$; passive preserve $\text{pr}_p(!ev, \phi)$ specifies that $!ev$ needs to be achieved while monitoring $\phi$, and is aborted if $!ev$ fails or $\phi$ does not hold; and active preserve $\text{pr}_a(!ev, \phi)$, which is similar to the former except that if condition $\phi$ does not hold, the plan-body graph for $!ev$ is suspended and the re-achievement of $\phi$ is attempted by posting the goal $!\phi$. If the goal succeeds and $\phi$ is re-achieved then the active preserve is resumed; otherwise it is aborted (if the goal fails) or re-suspended (if the goal succeeds but $\phi$ is still not achieved). Since user program $!ev$ may generate sub-goals, wait and preserve programs may also become ‘nested’ within one another, giving rise to potentially complex interactions. We use $\text{pr}_\pi(!ev, \phi)$, with $\pi \in \{p, a\}$, to denote either the passive or the active preserve. Formally, a user program $P_u$ is a formula in the language defined by the grammar
$$P_u ::= act \mid ?\phi \mid +b \mid -b \mid !ev \mid \text{wt}(\phi) \mid \text{pr}_\pi(!ev, \phi).$$
A plan-body graph is a directed bipartite graph representing a partially ordered set of user programs, where the set of nodes (i.e., node symbols) is split into state nodes, and transition nodes initially labelled with the user programs.
**Definition 1.** Let $P_u$ be the set of all user programs. A plan-body graph $G$ is a tuple $(S, T, E_{\text{in}}, E_{\text{out}}, L_0, N, s_0)$, where:
(i) $S$ is a set of state nodes and $s_0 \in S$ the initial node; (ii) $T$ is a disjoint set of transition nodes; (iii) $E_{\text{in}} \subseteq S \times T$ is a set of input edges; (iv) $E_{\text{out}} \subseteq T \times S$ is a set of output edges; (v) function $L_0 : T \rightarrow P_u$ represents the user programs labelling transition nodes; and (vi) $N \subseteq S \cup T$ is a set of current nodes.
We stipulate that for any node $n \in S \cup T$ there exists a sequence $s_1 \cdot t_1 \cdots \cdot s_{k-1} \cdot t_{k-1} \cdot s_k$, such that $s_0 = s_1$, node $n$ is in the sequence, and for each $i \in [1, k - 1] : (s_i, t_i) \in E_{\text{in}}$ and $(t_i, s_{i+1}) \in E_{\text{out}}$.
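To make Definition 1 concrete, the following Python sketch encodes a plan-body graph together with the $bef$/$aft$ helpers and the $\text{fin}(G)$ test defined below; representing user programs as opaque strings, and carrying the $L_e$ labelling of Section 3.2 in the same structure, are simplifications made here.

```python
from dataclasses import dataclass, field

@dataclass
class PlanBodyGraph:
    S: set        # state nodes
    T: set        # transition nodes
    E_in: set     # input edges (state, transition)
    E_out: set    # output edges (transition, state)
    L0: dict      # L_0: transition node -> user program (opaque strings here)
    N: set        # current nodes
    s0: str       # initial state node
    current: dict = field(default_factory=dict)   # L_e: evolved programs

    def bef(self, t):   # input state nodes of t
        return {s for (s, t2) in self.E_in if t2 == t}

    def aft(self, t):   # output state nodes of t
        return {s for (t2, s) in self.E_out if t2 == t}

    def fin(self):
        # finished: only state nodes are current, and none leads anywhere
        return self.N <= self.S and all(
            (s, t) not in self.E_in for s in self.N for t in self.T)
```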
The PRS deliberation cycle consists of three main steps: processing environment updates, e.g., new event-goals or belief updates; instantiating a plan-body graph to achieve an event-goal or goal condition, or executing a single step in an instantiated plan-body graph; and notifying plan-body graphs when conditions they are monitoring are established or violated. Execution of a plan-body graph starts in the initial node ($s_0$), and progresses to one or more state/transition nodes ($N$). Transition nodes initially represent user programs ($L_0$) and evolve following the semantics in Section 3 until they reach the ‘empty program’ and terminate. The plan-body graph finishes executing (successfully) if no current state nodes $N$ of $G$ lead anywhere. Formally, the plan-body graph is initial if $N = \{s_0\}$, and finished, denoted by fin($G$), if $N \subseteq S$, and for all $s \in N$ and $t \in T : (s, t) \not\in E_{\text{in}}$. All plan-body graphs occurring in a plan-library are initial.
**Example 1.** Consider an agent that has a goal to travel from its current location to a destination $d$ [Sardina and Padgham, 2011]. The goal can be achieved in various ways depending on the distance to the destination, represented by the following plan-rules (each omitted goal-condition $\varphi$ is $\top$):
- travel($d$) : At($x$) $\land$ WalkDist($d, x$) $\leftarrow G_{\text{walk}}$,
- travel($d$) : At($x$) $\land$ $\exists y(\text{InCity}(x, y) \land \text{InCity}(d, y))$ $\leftarrow G_{\text{city}}$,
- travel($d$) : At($x$) $\land$ $\neg \exists y(\text{InCity}(x, y) \land \text{InCity}(d, y))$ $\leftarrow G_{\text{far}}$.
The first plan-rule refers to the plan-body graph $G_{\text{walk}}$ shown in Fig. 1.
## 3 PRS Semantics
Our semantics for PRS follows the approach adopted by the CAN [Winikoff et al., 2002] agent programming language, as defined in [Sardina and Padgham, 2011], where a transition relation on agent configurations is defined in terms of a set of derivation rules [Plotkin, 1981]. An agent configuration is a tuple $[\Pi, \Lambda, B, A, \Gamma]$ where $\Pi$ is a plan-library, $\Lambda$ is an action-library, $B$ is a belief base, $A$ is a sequence of executed actions, and $\Gamma$ is a set of intentions (as $\Pi$ and $\Lambda$ do not change during execution, we omit them from derivation rules). Agent configurations represent the execution state of a PRS agent, and intentions are the current states of full programs being pursued in order to achieve top-level event-goals. As in CAN, full programs extend the syntax of user programs to represent the current evolution of a user program, and may contain information used to decide the next transition, e.g., the plan-body graphs yet to be tried in achieving an event-goal.
Formally, a full program (or simply program) is a formula in the language defined by the grammar $P ::= \eta \mid act \mid {?\phi} \mid +b \mid -b \mid !ev \mid \text{wt}_\pi(\phi) \mid \text{pr}_\pi(P, \phi) \mid ev : \langle \psi_1 : G_1, \ldots, \psi_n : G_n \rangle \mid P \rightsquigarrow P' \mid G \triangleright P \mid \eta \triangleright P$
where $\eta$ (‘nil’) indicates that there is nothing left to execute; $\text{wt}_\pi(\phi)$ denotes either $\text{wt}(\phi)$ or $\overline{\text{wt}}(\phi)$, where $\overline{\text{wt}}(\phi)$ indicates that program $\text{wt}(\phi)$ has been adopted (i.e., its execution has started); $ev : \langle \psi_1 : G_1, \ldots, \psi_n : G_n \rangle$ represents the set of relevant plan-rules for achieving the event-goal or goal-condition $ev$; $P \rightsquigarrow P'$ represents a suspended active preserve program $P'$ whose associated condition is being re-achieved by a recovery program $P$; $G \triangleright P$, where $P = ev : \langle \psi_1 : G_1, \ldots, \psi_n : G_n \rangle$, represents the default deliberation mechanism for ‘goal commitment’: achieve $ev$ using an applicable plan-body graph $G$, but if that fails, try an
alternative applicable graph from those appearing in $P$; and $\eta \triangleright P$ indicates that a plan-body graph for $\text{ev}$ has succeeded. Note that full programs are more general than those yielded by our semantics.
We first give derivation rules for configurations of the form $[\Pi, \Lambda, B, A, P]$, where $P$ is a full program, i.e., for a single intention. We give the derivation rules for actions, belief operations, goal and plan adoption, and goal commitment in Section 3.1; for advancing plan-body graphs in Section 3.2; for PRS-specific wait and preserve programs in Section 3.3; and for agents with configurations of the form $[\Pi, \Lambda, B, A, \Gamma]$, i.e., with multiple intentions, in Section 3.4.
### 3.1 Semantics for Actions, Belief Updates, Goal and Plan Adoption, and Goal Commitment
Fig. 2 shows the derivation rules for actions and belief operations. Rules $\text{add}$ and $\text{del}$ simply update $B$, replacing programs $+b$ and $-b$ with $\eta$ ($A$ remains unchanged). The semantics for actions is given by rule $\text{act}$. The antecedent checks whether there is a relevant action-rule for $act$ (i.e., one whose ‘head’ $act'$ matches $act$ under substitution $\theta$), and whether the action is applicable (i.e., its precondition holds in $B$); the conclusion of the derivation rule applies the action’s add- and delete-lists to $B$ and appends the action to $A$.
Rule $E_{V_1}$ adopts the event-goal program $!e$ by creating the set $\Delta$ of plan-rules relevant for the event, i.e., the rules in $\Pi$ with event-goals matching $e$ via a most general unifier (mgu).
$$\Delta = \{\psi\theta : G\theta \mid (e'\,\varphi : \psi \leftarrow G) \in \Pi, \theta = \text{mgu}(e, e')\} \neq \emptyset$$
$$[B, A, !e] \rightarrow [B, A, e : \langle \Delta \rangle] \quad E_{V_1}$$
**Example 2.** Processing an event-goal program of the form $!\text{travel(Uni)}$ using rule $E_{V_1}$ yields the following (full) program, encoding all relevant options for this event:
$$\text{travel(Uni)} : \langle \psi_1 : G_{walk}, \; \psi_2 : G_{city}, \; \psi_3 : G_{far} \rangle$$
with
$$\psi_1 = At(x) \land \text{WalkDist}(\text{Uni}, x);$$
$$\psi_2 = At(x) \land \exists y(\text{InCity}(x, y) \land \text{InCity}(\text{Uni}, y));$$ and
$$\psi_3 = At(x) \land \neg \exists y(\text{InCity}(x, y) \land \text{InCity}(\text{Uni}, y)).$$
Similarly, rule $E_{V_2}$ adopts the goal-condition program $!\phi$ by creating the set $\Delta$ of plan-rules in $\Pi$ that can achieve $\phi$.
$$\Delta = \{\psi\theta : G\theta \mid (e\,\varphi : \psi \leftarrow G) \in \Pi, \varphi\theta \models \phi\} \neq \emptyset$$
$$[B, A, !\phi] \rightarrow [B, A, \phi : \langle \Delta \rangle] \quad E_{V_2}$$
Rule $Sel$ selects an applicable plan-rule for event-goal or goal-condition $\text{ev}$ from the set of relevant rules, and schedules the associated plan-body graph for execution.
$$\psi : G \in \Delta \quad B \models \psi\theta$$
$$[B, A, \text{ev} : \langle \Delta \rangle] \rightarrow [B, A, G\theta \triangleright \text{ev} : \langle \Delta \setminus \{\psi : G\} \rangle] \quad Sel$$
Rules for goal commitment are shown in Fig. 2. Rule $\triangleright_{stp}$ executes a single step in a program $G \triangleright P$ if the plan-body graph $G$ has neither failed nor finished (see Section 3.2). Rule $\triangleright_{end}$ discards the alternative program $P$ in a program $\eta \triangleright P$ (where $\eta$ here represents a completed graph). Finally, rule $\triangleright_f$ schedules the alternative $P = \text{ev} : \langle \Delta \rangle$ for execution and executes a single step in it, provided $G$ has failed and $P$ has not, i.e., an applicable plan-rule exists for $\text{ev}$.
### 3.2 Semantics for Plan-Body Graphs
We extend the definition of a plan-body graph in Definition 1 to a full plan-body graph, representing the current ‘state’ in the evolution of an initial plan-body graph. A full plan-body graph is of the form $G' = \langle S, T, E_{in}, E_{out}, L_0, L_e, N, s_0 \rangle$, where $L_e : T \rightarrow \mathbf{P}$ is a function that maps each transition node $t \in T$ to a full program $P \in \mathbf{P}$, which represents the current form of the possibly evolved (initial) user program $L_0(t)$. We use the following auxiliary definitions: $bef(t) = \{s \mid (s, t) \in E_{in}\}$ are the input state nodes of $t$; $aft(t) = \{s \mid (t, s) \in E_{out}\}$ are its output state nodes; and the update to function $L_e$ with a (new) program $P$ for $t$ is
$$\text{upd}(L_e, t, P) = (L_e \setminus \{(t, L_e(t))\}) \cup \{(t, P)\}. \footnote{We treat the functions $L_0$ and $L_e$ as relations, i.e., as sets of ordered pairs of the form $(t, P)$.}$$
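A minimal Python sketch of the full plan-body graph record and the auxiliary operations just defined; the field names are invented:

```python
from dataclasses import dataclass

@dataclass
class FullGraph:
    """Full plan-body graph <S, T, E_in, E_out, L0, Le, N, s0>."""
    S: set        # state nodes
    T: set        # transition nodes
    E_in: set     # edges (state, transition)
    E_out: set    # edges (transition, state)
    L0: dict      # transition -> initial user program
    Le: dict      # transition -> current (evolved) program
    N: set        # currently active nodes
    s0: str       # initial state node

def bef(G, t):
    """Input state nodes of transition node t."""
    return {s for (s, t2) in G.E_in if t2 == t}

def aft(G, t):
    """Output state nodes of transition node t."""
    return {s for (t2, s) in G.E_out if t2 == t}

def upd(Le, t, P):
    """upd(Le, t, P) = (Le \\ {(t, Le(t))}) U {(t, P)}."""
    Le2 = dict(Le)
    Le2[t] = P
    return Le2
```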
The first rule states that a transition node $t$, in the case where it is not initially associated with a test condition, becomes active if it is not already active but all of its input states are. Becoming active includes (re-)initialising $t$ to correspond to its user program. This is done in case $t$ is part of a cycle.
$$t \in T \quad t \not\in N \quad bef(t) \subseteq N \quad L_0(t) \neq\ {?}\phi$$
$$[B, A, G] \rightarrow [B', A', G'] \quad G^P_{start}$$
where
$$G' = \langle S, T, E_{in}, E_{out}, L_0, L'_e, N', s_0 \rangle;$$
$$L'_e = \text{upd}(L_e, t, L_0(t)); \quad N' = (N \setminus bef(t)) \cup \{t\}.$$
Once a transition node $t$ is active, it can perform a single execution step in its associated (current) program $L_e(t)$.
$$t \in N \quad [B, A, L_e(t)] \rightarrow [B', A', P]$$
$$[B, A, G] \rightarrow [B', A', G'] \quad G^P_{stp}$$
$$G' = \langle S, T, E_{in}, E_{out}, L_0, L'_e, N, s_0 \rangle; \quad L'_e = \text{upd}(L_e, t, P).$$
If a transition node’s program has finished execution, the node becomes inactive and its outgoing nodes become active.
$$t \in N \quad L_e(t) = \eta$$
$$[B, A, G] \rightarrow [B, A, G'] \quad G^P_{end}$$
$$G' = \langle S, T, E_{in}, E_{out}, L_0, L_e, N', s_0 \rangle; \quad N' = (N \setminus \{t\}) \cup aft(t).$$
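Continuing the sketch above, the activation and completion rules can be read as guarded updates on the graph record; the representations of $\eta$ and of test programs below are assumptions of the sketch:

```python
ETA = "eta"   # assumed representation of the completed program

def is_test(p):
    """Test programs are assumed to be represented as ("?", phi)."""
    return isinstance(p, tuple) and p[0] == "?"

def g_start(G, t):
    """Rule G^P_start: activate non-test node t once all its inputs are active."""
    if t in G.T and t not in G.N and bef(G, t) <= G.N and not is_test(G.L0[t]):
        G.Le = upd(G.Le, t, G.L0[t])        # (re-)initialise to the user program
        G.N = (G.N - bef(G, t)) | {t}
        return True
    return False

def g_end(G, t):
    """Rule G^P_end: deactivate a finished node and activate its output states."""
    if t in G.N and G.Le.get(t) == ETA:
        G.N = (G.N - {t}) | aft(G, t)
        return True
    return False
```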
**Example 3.** Suppose that the agent believes it is currently at home, which is walking distance from the university. In this case, the $Sel$ rule transforms the set of relevant options represented by the program in Example 2 into the program $G_{walk} \triangleright \text{travel(Uni)} : \langle \psi_2 : G_{city}, \psi_3 : G_{fast} \rangle$, i.e., the agent selects the $G_{walk}$ plan-body graph while keeping the other graphs as backup alternatives, represented by the right-hand side of the $\triangleright$ operator. When the agent starts executing the $G_{walk}$ graph, the rule $G^P_{start}$ removes $s_0$ from $N$ and adds the transition node associated with subgoal $!pw$. This is then executed using rule $G^P_{stp}$, whose antecedent uses rule $E_{V_1}$ to resolve the subgoal.
If a transition node is initially associated with a test condition, then the node becomes active only if the condition holds in the current belief base. The chosen transition node also becomes inactive at the same execution step, as once the condition is tested there is nothing left to execute. Under this semantics, PRS allows choices in execution within the graph: either a non-deterministic choice when multiple transition nodes, with non ‘mutex’ test programs, exit the same state node, or deterministic choices induced by such tests.
$$t \in T \quad bef(t) \subseteq N \quad L_0(t) = {?}\phi \quad B \models \phi$$
$$[B, A, G] \rightarrow [B, A, G'] \quad G^{?\phi}_{stp}$$
where
$$G' = \langle S, T, E_{in}, E_{out}, L_0, L'_e, N', s_0 \rangle; \quad L'_e = \text{upd}(L_e, t, \eta); \quad N' = (N \setminus bef(t)) \cup aft(t).$$
The case where $B \not\models \phi$ holds represents failure, i.e., the inability to execute a step in $t$.
Finally, if a plan-body graph has finished (Section 2), rule $G_{end}$ replaces it with program $\eta$.
$$\text{fin}(G)$$
$$[B, A, G] \rightarrow [B, A, \eta] \quad G_{end}$$
**Example 4.** Consider the evolution $G_{pw} \triangleright pw : \langle \Delta_{pw} \rangle$ of subgoal $!pw$. Achieving the subgoal using the graph in Figure 1 involves the parallel execution of the programs $P^1_{pw}$ and $P^2_{pw}$. This ‘split’ is represented by the outgoing edges from the transition node associated with the ${?}\top$ user program, and results in state nodes $s_4$ and $s_5$ being added to $N$, using rule $G^{?\phi}_{stp}$. Note that for the transition node associated with $!e_2$ to become active, both $P^1_{pw}$ and $P^2_{pw}$ must complete execution, and transition to $s_6$ and $s_7$, respectively.
### 3.3 Semantics for Wait and Preserve Programs
We now give a semantics for wait and preserve programs of the form $\text{wt}(\phi)$, $\text{pr}_p(P, \phi)$, and $\text{pr}_a(P, \phi)$, and for suspended active preserve programs of the form $P_1 \rightsquigarrow \text{pr}_a(P_2, \phi)$, where $P_1$ is the recovery program. In all cases, we assume that condition $\phi$ is ground when the program is adopted.
Rule $W_{adopt}$ adopts a wait program, i.e., changes its form to indicate that condition $\phi$ is now being monitored. Rule $W$ specifies that the wait for $\phi$ should continue if $\phi$ does not hold in the belief base. Finally, rule $W_{end}$ specifies that the wait should end if $\phi$ does hold. In all cases, $C = [B, A, \text{WT}(\phi)]$.
$$[B, A, \text{wt}(\phi)] \rightarrow C \quad W_{adopt}$$
$$B \not\models \phi$$
$$C \rightarrow C \quad W$$
$$B \models \phi$$
$$C \rightarrow [B, A, \eta] \quad W_{end}$$
The first set of derivation rules for preserve programs apply to both passive and active preserves. Rule $Pr_{adopt}$ specifies the adoption of a preserve program, i.e., the adoption of its event-goal or goal-condition program $!ev$. Rule $Pr_{stp}$ executes a single step in a preserve program if $\phi$ is not violated, and $Pr_{suc}$ removes a completed preserve program.

$$[B, A, !ev] \rightarrow [B, A, ev : \langle \Delta \rangle]$$
$$[B, A, \text{pr}_x(!ev, \phi)] \rightarrow [B, A, \text{pr}_x(ev : \langle \Delta \rangle, \phi)] \quad Pr_{adopt}$$

$$P \neq\ {!ev} \quad B \models \phi \quad [B, A, P] \rightarrow [B', A', P']$$
$$[B, A, \text{pr}_x(P, \phi)] \rightarrow [B', A', \text{pr}_x(P', \phi)] \quad Pr_{stp}$$

$$[B, A, \text{pr}_x(\eta, \phi)] \rightarrow [B, A, \eta] \quad Pr_{suc}$$

Rule $Pr_{fail}$ specifies that a passive preserve fails if $\phi$ is violated or $P$ is blocked.

$$P \not\in \{\eta, {!ev}\} \quad (B \not\models \phi \ \lor\ [B, A, P] \not\rightarrow)$$
$$[B, A, \text{pr}_p(P, \phi)] \rightarrow [B, A, ?\text{false}] \quad Pr_{fail}$$
Rules $APr_{fail}$ to $APr^{st2}_{sus}$ operationalise adopted active and suspended preserve programs. We define three special multisets.\(^2\) Given an expression $\mathcal{E}$ that is either a program or a plan-body graph $G = \langle S, T, E_{in}, E_{out}, L_0, L_e, N, s_0 \rangle$, we define a ‘path’ of ‘nested’ (adopted) preserve and wait programs as any element in the multiset $\mathcal{T}(\mathcal{E})$, defined as
$$\mathcal{T}(\mathcal{E}) = \begin{cases}
\{\mathcal{E} \cdot \tau \mid \tau \in \mathcal{T}(\hat{P})\} & \text{if } \mathcal{E} = P \rightsquigarrow \text{pr}_a(\hat{P}, \phi) \text{ and } P \neq G \triangleright P_1; \\
\{\mathcal{E} \cdot \tau \mid \tau \in \mathcal{T}(\hat{P})\} & \text{if } \mathcal{E} = \text{WT}(\phi) \rightsquigarrow \text{pr}_a(\hat{P}, \phi); \\
\{\mathcal{E} \cdot \tau \mid \tau \in \mathcal{T}(G)\} & \text{if } \mathcal{E} = \text{pr}_x(G \triangleright P, \phi); \\
\{\mathcal{E}\} & \text{if } \mathcal{E} \in \{\text{WT}(\phi), \text{pr}_x(ev : \langle \Delta \rangle, \phi)\}; \\
\mathcal{T}(G) & \text{if } \mathcal{E} = G \triangleright P; \\
\biguplus_{i=1}^{n} \mathcal{T}(L_e(t_i)) & \text{if } \mathcal{E} = G \text{ and } N \cap T = \{t_1, \ldots, t_n\}; \\
\emptyset & \text{otherwise}.
\end{cases}$$
When $\mathcal{E} = G$ we take the multiset union of the sequences corresponding to transition nodes in $G$ that are executed in parallel. The first element in such a sequence is a ‘most abstract’ preserve or wait program occurring in $\mathcal{E}$, and the last element is a ‘most deeply’ nested preserve or wait program occurring in $\mathcal{E}$. We use $\mathcal{S}_r(\mathcal{E})$ and $\mathcal{P}_r(\mathcal{E})$ to denote the multisets of all the elements in all the sequences in $\mathcal{T}(\mathcal{E})$ that are, and are not, of the form $P \rightsquigarrow P'$, respectively; i.e., any element in $\mathcal{S}_r(\mathcal{E})$ is a suspended (adopted) active preserve program, and any element in $\mathcal{P}_r(\mathcal{E})$ is an adopted wait, passive preserve, or active preserve program that is not suspended. We use $\mathcal{T}(\mathcal{E})$ to check whether a wait/preserve program in some ‘path’ in $\mathcal{E}$ may be ‘pruned’ by a more abstract preserve program in the path, and we use $\mathcal{S}_r(\mathcal{E})$ and $\mathcal{P}_r(\mathcal{E})$ to count the number of suspended and unsuspended programs occurring in $\mathcal{E}$.
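The counting role of $\mathcal{P}_r$ and $\mathcal{S}_r$ can be approximated by a recursive walk over programs. The sketch below uses an invented tagged-tuple program representation (and the `FullGraph` record from the earlier sketch); it simplifies the multiset definition by returning counts directly rather than full paths:

```python
def counts(E):
    """Approximate (|P_r(E)|, |S_r(E)|): the numbers of unsuspended adopted
    wait/preserve programs and of suspended programs occurring in E.
    Programs are invented tagged tuples: ("WT", phi) for adopted waits,
    ("pr", kind, body, phi) for preserves, ("susp", recovery, pr) for
    P ~> pr_a(P', phi), and ("sched", graph, alt) for G |> P."""
    if isinstance(E, tuple):
        tag = E[0]
        if tag == "WT":
            return (1, 0)
        if tag == "pr":                    # nested programs live in the body
            u, s = counts(E[2])
            return (u + 1, s)
        if tag == "susp":                  # count nested inside the preserve body
            u, s = counts(E[2][2])
            return (u, s + 1)
        if tag == "sched":                 # G |> P: only the scheduled graph counts
            return counts(E[1])
    if isinstance(E, FullGraph):           # multiset union over active nodes
        u = s = 0
        for t in E.N & E.T:
            du, ds = counts(E.Le[t])
            u, s = u + du, s + ds
        return (u, s)
    return (0, 0)

def precedes(P1, P2):
    """The cleanup ordering: P1 < P2 iff P1 has fewer unsuspended wait/preserve
    programs than P2, or fewer suspended programs (used by rule APr_sus^st2)."""
    (u1, s1), (u2, s2) = counts(P1), counts(P2)
    return u1 < u2 or s1 < s2
```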
**Example 5.** Suppose $P^1_{pw}$ and $P^2_{pw}$ have evolved to, respectively, the adopted programs $\text{WT}(\phi)$ and $\text{pr}_p(P, \phi')$, with $P = e : \langle \Delta \rangle$ (i.e., one parallel branch waits for a condition $\phi$ while the other tries to achieve an event-goal $e$ while preserving $\phi'$). Then, $\mathcal{T}(G_{walk}) = \{\text{WT}(\phi), \text{pr}_p(P, \phi')\}$. Moreover, if $P$ evolves to $P' = G_e \triangleright e : \langle \Delta' \rangle$, where $G_e$ mentions an adopted program, e.g., $\text{WT}(\phi')$, then $\mathcal{T}(G_{walk}) = \{\text{WT}(\phi), \text{pr}_p(P', \phi') \cdot \text{WT}(\phi')\}$.
\(^2\)Adapted from [Sardina and Padgham, 2011].
Rule $APr_{fail}$ specifies that the adopted active preserve program $\text{pr}_a(P, \phi)$ fails if $P$ fails and the monitored condition $\phi$ is not violated. If $\phi$ is violated, $APr_{sus}$ suspends the preserve program and attempts to re-establish $\phi$ using the recovery (goal-condition) program $!\phi$. Rules $APr_{fail1}$ and $APr_{unsus}$ specify, respectively, that a suspended preserve program fails if the recovery program fails, and is resumed if the recovery program completes.
$$P \not\in \{\eta, {!ev}\} \quad B \models \phi \quad [B, A, P] \not\rightarrow$$
$$[B, A, \text{pr}_a(P, \phi)] \rightarrow [B, A, ?\text{false}] \quad APr_{fail}$$

$$P \not\in \{\eta, {!ev}\} \quad B \not\models \phi$$
$$[B, A, \text{pr}_a(P, \phi)] \rightarrow [B, A, {!}\phi \rightsquigarrow \text{pr}_a(P, \phi)] \quad APr_{sus}$$

$$P_1 \neq \eta \quad [B, A, P_1] \not\rightarrow$$
$$[B, A, P_1 \rightsquigarrow P_2] \rightarrow [B, A, ?\text{false}] \quad APr_{fail1}$$

$$[B, A, \eta \rightsquigarrow \text{pr}_a(P, \phi)] \rightarrow [B, A, \text{pr}_a(P, \phi)] \quad APr_{unsus}$$
Finally, rules $APr^{st1}_{sus}$ and $APr^{st2}_{sus}$ define the execution of recovery programs and suspended (active preserve) programs. Rule $APr^{st2}_{sus}$ executes a single ‘cleanup’ or ‘notification’ step in the suspended program. Such an execution step amounts to a program $P_2$ evolving to a program $P'_2$ that has fewer suspended programs, or fewer unsuspended preserve or wait programs, e.g., due to a failed passive preserve that had a ‘nested’ wait program. The relation $P \prec P'$ is defined for programs $P$ and $P'$ as: $P \prec P' \iff |\mathcal{P}_r(P)| < |\mathcal{P}_r(P')| \lor (\mathcal{S}_r(P) \subset \mathcal{S}_r(P'))$.
$$[B, A, P_1] \rightarrow [B', A', P'_1]$$
$$[B, A, P_1 \rightsquigarrow P_2] \rightarrow [B', A', P'_1 \rightsquigarrow P_2] \quad APr^{st1}_{sus}$$

$$[B, A, P_2] \rightarrow [B, A, P'_2] \quad P'_2 \prec P_2$$
$$[B, A, P_1 \rightsquigarrow \text{pr}_a(P_2, \phi)] \rightarrow [B, A, P_1 \rightsquigarrow \text{pr}_a(P'_2, \phi)] \quad APr^{st2}_{sus}$$
Note that rule $APr^{st2}_{sus}$ implies that if an active preserve is suspended, all of the (possibly adopted) ‘nested’ programs occurring in $P_2$ are ‘implicitly suspended’, i.e., they can only perform ‘cleanup’ or ‘notification’ steps, so such steps are guaranteed to terminate.
**Proposition 1.** Any sequence of configurations $[B_1, A_1, P_1], \ldots, [B_n, A_n, P_n]$ is finite if for all $i \in [1, n - 1]$, we have that $P_{i+1} \prec P_i$ and $[B_i, A_i, P_i] \rightarrow [B_{i+1}, A_{i+1}, P_{i+1}]$.
**Proof.** Since each such execution step from $P_i$ to $P_{i+1}$ must yield fewer wait programs, preserve programs, and/or programs of the form $P \leadsto \text{pr}_a(P', \phi)$, it is sufficient to consider whether a ‘switch’ of the latter to resume $\text{pr}_a(P', \phi)$ can lead to it (possibly with an ‘evolved’ $P'$) being suspended again, and whether this can continue indefinitely.
For $\text{pr}_a(P', \phi)$ to resume, we must have $P = \eta$, and then, if $\phi$ still does not hold, the program will indeed evolve to ${!}\phi \rightsquigarrow \text{pr}_a(P', \phi)$. Moreover, the recovery program ${!}\phi$ does have at least one relevant plan-rule, as it was once able to evolve to $\eta$ (recall that $\phi$ has been ground from the moment its associated active preserve program was adopted). However, the only possible execution step on ${!}\phi$ cannot reduce the number of aforementioned programs. □
We use the notation $C \rightarrow^{+} C'$ to denote that there is a non-empty sequence of execution steps from configuration $C$ to configuration $C'$.
### 3.4 Agent-Level (‘Top-Level’) Semantics
We now give the derivation rules for the top-level execution of an agent program. Transitions between agent configurations are defined by the derivation rules in Fig. 3; an expression $C \overset{t}{\Rightarrow} C'$ denotes a transition of type $t \in \{\text{PRS}, \text{EVENT}, \text{COND}, \text{INT}\}$.
Rule $A_{prs}$ is the top-level rule, and represents the PRS deliberation cycle. A single PRS type execution step comprises three things: progressing an intention by one step, or removing a completed intention (i.e., $P = \eta$) or a failed one (using rules $A_{int}$ and $A_{rem}$, respectively); processing newly observed event-goals (using rule $A_{ev}$), i.e., creating an intention for each new event-goal that is observed from the (external) environment; and finally, performing all the necessary ‘notification’ and ‘cleanup’ steps on wait, preserve, suspended, and recovery programs (using rule $A_{cond}$) to leave the agent in a ‘sound’ or ‘stable’ configuration. More specifically, $A_{cond}$ takes a single step in an intention if the step will yield an intention with fewer suspended programs, or fewer unsuspended (adopted) preserve or wait programs.
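The following Python sketch paraphrases the deliberation cycle that rule $A_{prs}$ captures; every helper name on the `config` and intention objects is invented, so this is a reading aid rather than the formal rule:

```python
def prs_cycle(config):
    """One PRS-type step, per rule A_prs (all helper names are assumptions)."""
    # INT: progress one intention by a single step, or drop it if it is
    # complete (eta) or cannot take a step (failed).
    for intent in list(config.intentions):
        if intent.is_complete() or intent.is_blocked(config.beliefs):
            config.intentions.remove(intent)            # rule A_rem
            break
        if intent.can_step(config.beliefs):
            intent.step(config)                         # rule A_int
            break

    # EVENT: create one intention per newly observed external event-goal.
    for event in config.poll_external_events():         # rule A_ev
        config.intentions.append(config.new_intention(event))

    # COND: repeatedly take 'cleanup'/'notification' steps on wait, preserve,
    # suspended, and recovery programs until the configuration is stable;
    # cleanup_step only fires if it yields an intention P' with P' < P.
    changed = True
    while changed:                                      # rule A_cond
        changed = any(intent.cleanup_step(config)
                      for intent in config.intentions)
    return config
```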
### 4 Properties of the Semantics
We now prove key properties of the semantics and show the greater expressivity of our PRS fragment compared to CAN. In what follows, we use $[B, A, P] \rightarrow$ as an abbreviation for $\exists B', A', P' : [B, A, P] \rightarrow [B', A', P']$; and $C_1, C_2$ are agent configurations of the form $[\Lambda, \Pi, B, A, \Gamma]$ such that $C_1$ is sound and $C_1 \overset{\text{PRS}}{\Rightarrow} C_2$. A configuration $C = [\Lambda, \Pi, B, A, \Gamma]$ is **sound** if (i) for all $\text{WT}(\phi) \in \mathcal{P}_r(\Gamma)$, $B \not\models \phi$; (ii) for all $P \rightsquigarrow P' \in \mathcal{S}_r(\Gamma)$, $[B, A, P] \rightarrow$; and (iii) for all $\text{pr}_x(P, \phi) \in \mathcal{P}_r(\Gamma)$, $B \models \phi$ and $[B, A, P] \rightarrow$.
Theorem 1 states that all configurations resulting from applying rule $A_{prs}$ on a sound configuration are sound.
**Theorem 1.** Let $C_1$ and $C_2$ be as above. Then, $C_2$ is sound.
**Proof Sketch.** Observe from the antecedent of derivation rule $A_{prs}$ that only one step of type INT is performed on $C_1$, followed by one of type EVENT, and zero or more of type COND. Assume the theorem does not hold because there is a passive preserve $\text{pr}_p(P, \phi) \in \mathcal{P}_r(\Gamma_2)$, for some $P$, such that $B_2 \not\models \phi$ or $[B_2, A_2, P] \not\rightarrow$. Then, since $\text{pr}_p(P, \phi)$ is an adopted program appearing in some intention $P_I \in \Gamma_2$, either rule $Pr_{fail}$ or $Pr_{suc}$ can be applied to configuration $[B_2, A_2, \text{pr}_p(P, \phi)]$ to yield an intention $P'_I$. Since $P'_I \prec P_I$, rule $A_{cond}$ will be applied (possibly multiple times) to $P_I$ until $\text{pr}_p(P', \phi) \not\in \mathcal{P}_r(\Gamma_2)$ for all $P'$, which contradicts our assumption. Assume instead that the theorem does not hold because there is a program $P \rightsquigarrow P' \in \mathcal{S}_r(\Gamma_2)$ such that $[B_2, A_2, P] \not\rightarrow$. Then, either rule $APr_{fail1}$ or $APr_{unsus}$ will be applied to configuration $[B_2, A_2, P \rightsquigarrow P']$ or its ‘evolution’, again resulting in a contradiction. The remaining cases are proved similarly. □
The next theorem states that an adopted wait program is only removed in a PRS step if either its condition becomes satisfied, or the program is pruned, i.e., it is a descendant of an adopted preserve or a suspended preserve that is discarded. Given a program \( P \) and ‘path’ \( \tau \in T(P) \), we denote the program at index \( n > 0 \) as \( \tau[n] \) (where \( \tau[1] \) is \( \tau \)'s most abstract program). A program \( P \) is pruned between \( C_1 \) and \( C_2 \) iff for any \( \tau \in T(\Gamma_1) \) with \( T(\Gamma) = \bigcup_{P' \in \Gamma} T(P') \) and \( \tau[k] = P \) for some \( k > 0 \), there exists a \( 0 < j < k \) such that:
1. $\tau[j] = \text{pr}_p(P_j, \phi_j)$ and $B_2 \not\models \phi_j$;
2. $\tau[j] = \text{pr}_a(P_j, \phi_j)$, $B_2 \not\models \phi_j$, and $[B_2, A_2, {!}\phi_j] \not\rightarrow$; or
3. $\tau[j] = P_1 \rightsquigarrow P_2$ and either $[B_2, A_2, P_1] \rightarrow [B_2, A_2, P'_1] \not\rightarrow$, or $[B_1, A_1, P_1] \rightarrow [B_2, A_2, P'_1] \rightarrow [B_3, A_3, P''_1] \not\rightarrow$ with $P''_1 \neq \eta$.
**Theorem 2.** Let $C_1$ and $C_2$ be as before. For each $\text{WT}(\phi) \in \mathcal{P}_r(\Gamma_1)$ such that $\text{WT}(\phi) \not\in \mathcal{P}_r(\Gamma_2)$, we have that $B_2 \models \phi$, or $\text{WT}(\phi)$ is pruned between $C_1$ and $C_2$.
**Proof Sketch.** Let $\text{WT}(\phi) \in \mathcal{P}_r(\Gamma_1)$ be a program such that $\text{WT}(\phi) \not\in \mathcal{P}_r(\Gamma_2)$. Let $t$ be a transition node currently labelled with $\text{WT}(\phi)$, where $T$ and $N$ (with $t \in N$) are the transition nodes and current nodes in a (‘partially executed’) plan-body graph $G$. We show that if $B_2 \not\models \phi$, then $\text{WT}(\phi)$ must have been pruned between $C_1$ and $C_2$, because no other derivation rule can ‘remove’ $\text{WT}(\phi)$. First, rule $G_{end}$ (which, if applicable, ‘removes’ $G$) requires $\text{fin}(G)$ to hold, which cannot be the case as $t \in N$; for the same reason, rule $G^P_{start}$ cannot ‘reset’ $\text{WT}(\phi)$. Second, the antecedent of rule $\triangleright_f$ requires that $[B, A, G] \not\rightarrow$ (for some $A$ and $B \in \{B_1, B_2\}$), which cannot hold as we can take a step on $\text{WT}(\phi)$ via rule $W$. Similarly, if $P_I$ is an intention in which $G$ occurs, the antecedent of agent-level rule $A_{rem}$ cannot hold (i.e., $[B_1, A, P_I] \not\rightarrow$ cannot hold for any $A$). It follows, then, that $\text{WT}(\phi)$ is pruned between $C_1$ and $C_2$. □
Theorem 3 states that an adopted preserve program is only ‘removed’ if: its condition becomes violated; its associated program becomes blocked; or the preserve program is pruned.
**Theorem 3.** If $C_1$ and $C_2$ are as before, and $\text{pr}_x(P, \phi) \in \mathcal{P}_r(\Gamma_1)$ is a program such that $\text{pr}_x(P_2, \phi) \not\in \mathcal{P}_r(\Gamma_2)$ for any $P_2$, then:
1. $B_2 \not\models \phi$ or $[B_2, A_2, P] \rightarrow [B_2, A_2, P'] \not\rightarrow$;
2. $[B_1, A_1, P] \rightarrow [B_2, A_2, P'] \rightarrow [B_3, A_3, P''] \not\rightarrow$; or
3. $\text{pr}_x(P, \phi)$ is pruned between $C_1$ and $C_2$.
**Proof Sketch.** We show that if two of the conditions above do not hold, then the third will. For example, if (2) and (3) do not hold, then $\text{pr}_x(P, \phi)$ must have been ‘removed’ by rule $Pr_{fail}$ or suspended by $APr_{sus}$, whose antecedents entail (1). Similarly, if (1) and (2) do not hold, then $\text{pr}_x(P, \phi)$ must have been pruned between $C_1$ and $C_2$, for the reasons given in the proof of Theorem 2. □
Theorem 4 states that a suspended preserve program is only ‘removed’ during a PRS step if: the recovery program completes and the preserve program’s condition is re-established; the recovery program becomes blocked; or both are pruned.
**Theorem 4.** If $C_1$ and $C_2$ are as before, and $P_1 \rightsquigarrow \text{pr}_a(P_2, \phi) \in \mathcal{S}_r(\Gamma_1)$ is a program such that $P \rightsquigarrow \text{pr}_a(P', \phi) \not\in \mathcal{S}_r(\Gamma_2)$ for any $P$ and $P'$, then:
1. $[B_1, A_1, P_1] \rightarrow [B_1, A_1, \eta]$ and $B_1 \models \phi$;
2. $[B_2, A_2, P_1] \rightarrow [B_2, A_2, P'_1] \not\rightarrow$;
3. $[B_1, A_1, P_1] \rightarrow [B_2, A_2, P'_1] \rightarrow [B_3, A_3, P''_1] \not\rightarrow$ with $P''_1 \neq \eta$; or
4. $P_1 \rightsquigarrow \text{pr}_a(P_2, \phi)$ is pruned between $C_1$ and $C_2$.
The proof of Theorem 4 is similar to that of Theorem 3.
Theorems 3 and 4 considered the case where all occurrences of preserve programs associated with the same condition $\phi$, e.g., in multiple intentions in $\Gamma_1$, are removed in a PRS step. There is also the case where only some of them are removed in such a step. It is not difficult to develop theorems for this case, though we would need additional formal machinery. Similarly, we can show that active preserve programs are suspended and resumed for the right reasons. For example, with a minor extension to Theorem 3, we can show that if $\text{pr}_a(P, \phi)$ becomes suspended (i.e., ${!}\phi \rightsquigarrow \text{pr}_a(P, \phi) \in \mathcal{S}_r(\Gamma_2)$), then $B_2 \not\models \phi$ and $[B_2, A_2, {!}\phi] \rightarrow$ hold.
In the remainder of this section, we characterise the relative expressivity of our PRS fragment and the CAN formalism as defined in [Sardina and Padgham, 2011]. A CAN plan-rule is of the form $e : \psi \leftarrow P$, where $e$ is an event-goal, $\psi$ is a context condition, and $P$ is a plan-body, i.e., a formula in the language defined by the grammar
$$P ::= \text{act} \mid {?}\phi \mid {+}b \mid {-}b \mid {!}e \mid P_1 ; P_2 \mid P_1 \parallel P_2 \mid \text{Goal}(\phi_s, P', \phi_f),$$
where \( P_1 ; P_2 \) is sequential composition, \( P_1 \parallel P_2 \) is parallel composition, and \( \text{Goal}(\phi_s, P', \phi_f) \) is a declarative goal specifying that formula \( \phi_s \) (the goal) should be achieved using program \( P' \), failing if \( \phi_f \) becomes true. The remaining programs are defined as for PRS user programs.
To state our results, we need the notions of execution traces and solutions. We define these only for PRS as the definitions for CAN are analogous.
**Definition 2.** An execution trace of an agent configuration $C = [\Lambda, \Pi, B, A, \Gamma]$ is a finite sequence of agent configurations $C_1, \ldots, C_n$ such that $C = C_1$ and $C_i \overset{\text{PRS}}{\Rightarrow} C_{i+1}$ for all $i \in [1, n - 1]$; the solution in $C_1, \ldots, C_n$ is the sequence of actions $\mathcal{A}$ such that $A_n = A_1 \cdot \mathcal{A}$.
With $\Lambda$, $\Pi$, $B$, and $\Gamma$ as above, $\text{sol}(\Lambda, \Pi, B, \Gamma)$ denotes the set of solutions, i.e., the set of sequences of actions performed in the execution traces of configuration $[\Lambda, \Pi, B, \epsilon, \Gamma]$, where $\epsilon$ denotes the empty sequence. Theorem 5 states that a CAN plan-library $\Pi^-$ not mentioning $\text{Goal}(\phi_s, P, \phi_f)$ programs (as there is no corresponding program in PRS) can be translated into an equivalent PRS plan-library.
**Theorem 5.** If $\Pi^-$ is a CAN library and $\Lambda$ an action library, there exists a PRS library $\Pi_p$ such that for any event-goal $e$ and beliefs $B$: $\text{sol}(\Lambda, \Pi^-, B, \{!e\}) = \text{sol}(\Lambda, \Pi_p, B, \{!e\})$.
**Proof Sketch.** Given a CAN plan-rule $e : \psi \leftarrow P \in \Pi^-$, the first step is to obtain the corresponding PRS plan-rule $e : \top : \psi \leftarrow G$. We define three functions: $g(P, n)_{in}$, $g(P, n)_{out}$, and $g(P, n)_L$, where $n \in \mathbb{N}$. When $n = 1$, these functions represent, respectively, the elements $E_{in}$, $E_{out}$, and $L_0$ in $G$. If $P = \text{act}$, we define $g(P, n)_{in} = \{(n \cdot s, n \cdot t)\}$; $g(P, n)_{out} = \{(n \cdot t, n \cdot s')\}$; and $g(P, n)_L = \{(n \cdot t, P)\}$, where $s$, $s'$ and $t$ are unique symbols (and thus $n \cdot t$, for example, is a string). We also define $g(P, n)_{start} = n \cdot s$ and $g(P, n)_{end} = n \cdot s'$. Intuitively, $n$ uniquely identifies the PRS plan-body ‘subgraph’ corresponding to $P$. If $P = P_1 ; P_2$ is the sequential composition of CAN programs $P_1$ and $P_2$,
$$g(P, n)_{in} = g(P_1, n \cdot 1)_{in} \cup g(P_2, n \cdot 2)_{in} \cup \{(n \cdot s, n \cdot t), (g(P_1, n \cdot 1)_{end}, n \cdot t'), (g(P_2, n \cdot 2)_{end}, n \cdot t'')\};$$
$$g(P, n)_{out} = g(P_1, n \cdot 1)_{out} \cup g(P_2, n \cdot 2)_{out} \cup \{(n \cdot t, g(P_1, n \cdot 1)_{start}), (n \cdot t', g(P_2, n \cdot 2)_{start}), (n \cdot t'', n \cdot s')\};$$
$$g(P, n)_L = g(P_1, n \cdot 1)_L \cup g(P_2, n \cdot 2)_L \cup \{(n \cdot t, {?}\top), (n \cdot t', {?}\top), (n \cdot t'', {?}\top)\},$$
where transition node $n \cdot t'$ ‘connects’ the subgraphs corresponding to $P_1$ and $P_2$. As before, we define $g(P, n)_{start} = n \cdot s$ and $g(P, n)_{end} = n \cdot s'$, and $s$, $s'$, $t$, $t'$ and $t''$ are unique symbols. We similarly define the subgraph corresponding to a CAN parallel composition $P_1 \parallel P_2$, test program, etc.
We then show by induction on the structures of $P$ and $G$ above that their traces yield the same solutions. For example, the derivation rules for CAN’s sequential composition can be simulated by repeatedly applying the $G^P_{start}$, $G^{?\phi}_{stp}$, $G^P_{stp}$, and $G^P_{end}$ rules of PRS, and vice versa. □
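A sketch of the translation for the action and sequential-composition cases, using invented string node names and a tagged-tuple encoding of CAN bodies; it follows the $g(P, n)$ construction above:

```python
def g(P, n):
    """Build (E_in, E_out, L, start, end) for CAN body P, prefixed by n.
    P is an invented tagged tuple: ("act", name) or ("seq", P1, P2)."""
    s, s2 = f"{n}.s", f"{n}.s'"
    if P[0] == "act":
        t = f"{n}.t"
        return ({(s, t)}, {(t, s2)}, {t: P}, s, s2)
    if P[0] == "seq":                            # P1 ; P2
        in1, out1, L1, st1, en1 = g(P[1], f"{n}.1")
        in2, out2, L2, st2, en2 = g(P[2], f"{n}.2")
        t, t2, t3 = f"{n}.t", f"{n}.t'", f"{n}.t''"
        # s -> t -> start(P1) ... end(P1) -> t' -> start(P2) ... end(P2) -> t'' -> s'
        E_in  = in1 | in2 | {(s, t), (en1, t2), (en2, t3)}
        E_out = out1 | out2 | {(t, st1), (t2, st2), (t3, s2)}
        L = {**L1, **L2, t: ("test", True), t2: ("test", True), t3: ("test", True)}
        return (E_in, E_out, L, s, s2)
    raise NotImplementedError("parallel composition, tests, etc. are analogous")
```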
Theorem 6 states that the converse does not hold: even if we ignore programs that have no counterparts in CAN, PRS plan-libraries cannot be ‘directly mapped’ to CAN libraries. This result follows from a similar one in [de Silva, 2017] which showed that the ‘plan-body’ representation used in HTN planning allows a more fine-grained interleaving of steps than do CAN plan-bodies: a CAN plan-body must specify steps in a ‘series-parallel’ manner, whereas HTN ‘plan-bodies’ (and PRS plan-body graphs) do not.
In what follows, $\Pi^-_p$ denotes a PRS plan-library that does not mention goal-condition, wait, or preserve programs, and $\Pi_c \in \text{CAN}(\Pi^-_p)$ denotes a directly mapped CAN library, i.e., one obtained from $\Pi^-_p$ by ignoring the goal-condition $\varphi$ in each plan-rule, and replacing each graph $G$ appearing in $\Pi^-_p$ with a CAN plan-body $P$ such that the multisets of (user) programs occurring in $G$ and $P$ are the same.
**Theorem 6.** There exist a PRS library $\Pi^-_p$, an action library $\Lambda$, and an event-goal $e$, such that for any CAN library $\Pi_c \in \text{CAN}(\Pi^-_p)$ and beliefs $B$: $\text{sol}(\Lambda, \Pi^-_p, B, \{!e\}) \neq \text{sol}(\Lambda, \Pi_c, B, \{!e\})$.
**Proof Sketch.** We translate an example HTN ‘plan-body’ provided in [de Silva, 2017] into a PRS plan-body graph. First, we create the PRS plan-rule $e_{\text{top}} : \top : \top \leftarrow G$, in particular by taking the following input and output edges for $G$:
$$E_{\text{in}} = \{(s_1, t_1), (s_2, t_2), (s_3, t_3), (s_4, t_4), (s_5, t_4), (s_6, t_5), (s_7, t_6), (s_8, t_6)\};$$
and
$$E_{\text{out}} = \{(t_1, s_2), (t_1, s_3), (t_2, s_4), (t_3, s_5), (t_3, s_6), (t_4, s_7), (t_5, s_8), (t_6, s_9)\}.$$
Second, we set the initial program $L_0(t_i)$ for each $t_i \in \{t_1, \ldots, t_6\}$ to a different event-goal $e_{t_i}$, each of which is associated with a single plan-body graph representing the sequence of unique actions $a^1_{t_i} \cdot a^2_{t_i}$. Finally, we show that a particular interleaving of the actions $a^1_{t_i} \cdot a^2_{t_i}$ (for $i \in [1, 6]$) permitted by $G$ cannot be produced by any CAN trace of program $!e_{top}$, relative to any CAN plan-library that is directly mapped from the above set of PRS plan-rules: every directly mapped CAN body must compose the $e_{t_i}$ in a ‘series-parallel’ manner, and each such composition forbids some ordering of the actions that $G$ allows. □
### 5 Discussion
We proposed an operational semantics for a significant fragment of the OpenPRS variant of PRS that accounts for (possibly nested) graph-based plan-bodies; language features such as active preserves; and for adopting, suspending, resuming, and aborting such programs. We showed that our semantics is sound, correctly accounts for the key interactions between (nested) plans and preserve programs, and that plan-body graphs do not have a direct translation to plan-bodies in a typical BDI-based agent programming language (CAN).
Our work is closely related to that of Dastani et al. [2011] on the semantics of resuming, suspending, and aborting maintenance goals. They define complex goal types that include achievement goals, maintenance goals, and complex temporal goals; all of these can be encoded as plan-body graphs in PRS through a straightforward mapping. Van Riemsdijk et al. [2009] develop a modal logic for goal representation and reasoning mechanisms for non-temporal goals. While the goal types they consider could probably be represented as PRS plan-body graphs, encoding the associated mechanisms for reasoning about goal conflicts is less straightforward. Thangarajah et al. [2011; 2014] define an operational semantics for various types of goals, most of which can be implemented using PRS plan-body graphs and programs. However, their maintenance goals include a construct to proactively maintain a goal condition by anticipating whether it will fail. This capability of predicting future conditions is lacking in our semantics.
Future work includes investigating whether an arbitrary PRS plan-rule can be simulated by a set of CAN (or AgentSpeak) plan-rules, and extending our semantics to add more PRS features, e.g. meta-level reasoning and plan steps that can overlap in execution.
Acknowledgements
Lavindra is grateful to Félix Ingrand for useful discussions about OpenPRS while Lavindra was a researcher on the GOAC/MARAE projects (2009-2011) at LAAS-CNRS. Felipe thanks CNPq for partial financial support under its PQ fellowship, grant number 305969/2016-1.
References
[Alami et al., 1998] R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand. An architecture for autonomy. *The International Journal of Robotics Research (IJRR)*, 17(4):315–337, 1998.
[Bratman, 1987] M. E. Bratman. *Intention, Plans, and Practical Reason*. Harvard University Press, 1987.
[Busetta et al., 1999] P. Busetta, R. Rönnquist, A. Hodgson, and A. Lucas. JACK intelligent agents - components for intelligent agents in Java. Technical report, Agent Oriented Software, 1999.
[Dastani et al., 2011] M. Dastani, M. B. van Riemsdijk, and M. Winikoff. Rich goal types in agent programming. In *Proc. of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS)*, pages 405–412, 2011.
[de Silva, 2017] L. de Silva. BDI agent reasoning with guidance from HTN recipes. In *Proc. of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS)*, pages 759–767, 2017.
[Foughali et al., 2016] M. Foughali, B. Berthomieu, S. Dal Zilio, F. Ingrand, and A. Mallet. Model checking real-time properties on the functional layer of autonomous robots. In *Proc. of the International Conference on Formal Engineering Methods (ICFEM)*, pages 383–399, 2016.
[Georgeff and Ingrand, 1989] M. P. Georgeff and F. F. Ingrand. Decision-making in an embedded reasoning system. In *Proc. of the International Joint Conference on Artificial Intelligence (IJCAI)*, pages 972–978, 1989.
[Georgeff and Lansky, 1986] M. P. Georgeff and A. L. Lansky. A system for reasoning in dynamic domains: Fault diagnosis on the Space Shuttle. Technical Report 375, Artificial Intelligence Center, SRI International, 1986.
[Georgeff and Lansky, 1987] M. P. Georgeff and A. L. Lansky. Reactive reasoning and planning. In *Proc. of the National Conference on Artificial Intelligence (AAAI)*, pages 677–682, 1987.
[Ghallab et al., 2016] M. Ghallab, D. Nau, and P. Traverso. *Automated Planning and Acting*. Cambridge University Press, 2016.
[Huber, 1999] M. J. Huber. JAM: a BDI-theoretic mobile agent architecture. In *Proc. of the Annual Conference on Autonomous Agents*, pages 236–243, 1999.
[Ingrand et al., 1996] F. F. Ingrand, R. Chatila, R. Alami, and F. Robert. PRS: A high level supervision and control language for autonomous mobile robots. In *Proc. of the International Conference on Robotics and Automation (ICRA)*, pages 43–49, 1996.
[Ingrand, 1991] F. F. Ingrand. *OPRS Development Environment Version 1.1b7*, 1991. Last updated January 2014. https://git.openrobots.org/projects/openprs.
[Lemaignan et al., 2017] S. Lemaignan, M. Warnier, E. A. Sisbot, A. Clodic, and R. Alami. Artificial cognition for social human-robot interaction: An implementation. *Artificial Intelligence*, 247:45–69, 2017.
[Morley and Myers, 2004] D. Morley and K. Myers. The SPARK agent framework. In *Proc. of the International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)*, pages 714–721, 2004.
[Niemueller et al., 2017] T. Niemueller, F. Zwilling, G. Lakemeyer, M. Löbach, S. Reuter, S. Jeschke, and A. Ferrein. *Cyber-Physical System Intelligence*, pages 447–472. Springer International Publishing, 2017.
[Plotkin, 1981] G. D. Plotkin. A structural approach to operational semantics. Technical report, University of Aarhus, 1981.
[Rao, 1996] A. S. Rao. AgentSpeak(L): BDI agents speak out in a logical computable language. In *Proc. of the European Workshop on Modelling Autonomous Agents in a Multi-Agent World*, pages 42–55. Springer-Verlag, 1996.
[Sardina and Padgham, 2011] S. Sardina and L. Padgham. A BDI agent programming language with failure handling, declarative goals, and planning. *Autonomous Agents and Multi-Agent Systems*, 23(1):18–70, 2011.
[Sardina et al., 2006] S. Sardina, L. de Silva, and L. Padgham. Hierarchical planning in BDI agent programming languages: A formal approach. In *Proc. of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS)*, pages 1001–1008, 2006.
[Thangarajah et al., 2011] J. Thangarajah, J. Harland, D. Morley, and N. Yorke-Smith. Operational behaviour for executing, suspending and aborting goals in BDI agent systems. In *Proc. of the International Workshop on Declarative Agent Languages and Technologies (DALT)*, pages 1–21. Springer, 2011.
[Thangarajah et al., 2014] J. Thangarajah, J. Harland, D. Morley, and N. Yorke-Smith. An operational semantics for the goal life-cycle in BDI agents. *Autonomous Agents and Multi-Agent Systems*, 28(4):682–719, 2014.
[van Riemsdijk et al., 2009] M. B. van Riemsdijk, M. Dastani, and J.-J. Ch. Meyer. Goals in conflict: Semantic foundations of goals in agent programming. *Autonomous Agents and Multi-Agent Systems*, 18(3):471–500, 2009.
[Winikoff et al., 2002] M. Winikoff, L. Padgham, J. Harland, and J. Thangarajah. Declarative & procedural goals in intelligent agent systems. In *Proc. of the International Conference on Principles of Knowledge Representation and Reasoning (KR)*, pages 470–481, 2002.
Ecological Load and Balancing Selection in Circumboreal Barnacles
Joaquin C. B. Nunez,$^{1,2}$ Stephen Rong,$^{1,2}$ Alejandro Damian-Serrano,$^{3}$ John T. Burley,$^{1,4}$ Rebecca G. Elyanow,$^{2}$ David A. Ferranti,$^{1}$ Kimberly B. Neil,$^{1}$ Henrik Glenner,$^{5}$ Magnus Alm Rosenblad,$^{6}$ Anders Blomberg,$^{6}$ Kerstin Johannesson,$^{7}$ and David M. Rand$^{*,1,2}$
\(^1\)Department of Ecology and Evolutionary Biology, Brown University, Providence, RI
\(^2\)Center for Computational Molecular Biology, Brown University, Providence, RI
\(^3\)Department of Ecology and Evolutionary Biology, Yale University, New Haven, CT
\(^4\)Institute at Brown for Environment and Society, Brown University, Providence, RI
\(^5\)Department of Biological Sciences, University of Bergen, Bergen, Norway
\(^6\)Department of Chemistry and Molecular Biology, University of Gothenburg, Lundberg Laboratory, Göteborg, Sweden
\(^7\)Department of Marine Sciences, University of Gothenburg, Tjärnö Marine Laboratory, Strömstad, Sweden
\(^*\)Present address: Department of Biology, University of Virginia, Charlottesville, VA
*Corresponding authors: E-mails: email@example.com; firstname.lastname@example.org.
Associate editor: Michael Rosenberg
Abstract
Acorn barnacle adults experience environmental heterogeneity at various spatial scales of their circumboreal habitat, raising the question of how adaptation to high environmental variability is maintained in the face of strong juvenile dispersal and mortality. Here, we show that 4% of genes in the barnacle genome experience balancing selection across the entire range of the species. Many of these genes harbor mutations maintained across 2 My of evolution between the Pacific and Atlantic oceans. These genes are involved in ion regulation, pain reception, and heat tolerance, functions which are essential in highly variable ecosystems. The data also reveal complex population structure within and between basins, driven by the trans-Arctic interchange and the last glaciation. Divergence between Atlantic and Pacific populations is high, foreshadowing the onset of allopatric speciation, and suggesting that balancing selection is strong enough to maintain functional variation for millions of years in the face of complex demography.
Key words: barnacles, *Semibalanus balanoides*, balancing selection, ecological genomics, ecological load.
Introduction
The relationship between genetic variation and adaptation to heterogeneous environments remains a central conundrum in evolutionary biology (Botero et al. 2015). Classical models of molecular evolution predict that populations should be locally adapted to maximize fitness (Williams 1966). However, species inhabiting highly heterogeneous environments violate this expectation: If gene flow is high in relation to the scale of environmental heterogeneity, species will harbor variation that is beneficial in one condition but deleterious in another (Gillespie 1973), and the resulting ecological load (i.e., the fitness difference between the best and the average genotype across the range of environments where offspring may settle) will prevent local adaptation. Conversely, if gene flow is low, favored alleles will become locally fixed and species should display low levels of genetic variation. Paradoxically, many natural populations living in variable environments possess high dispersal capabilities and harbor more variation than expected under classical models (Metz and Palumbi 1996; Mackay et al. 2012; Messer and Petrov 2013; Bergland et al. 2014). This disconnect between data and theory has motivated the hypothesis that balancing selection, a process where selection favors multiple beneficial alleles at a given locus, is at play to maintain adaptations in these habitats (Levene 1953; Hedrick 2006).
The northern acorn barnacle (*Semibalanus balanoides*) is a model system to study adaptations to ecological variability. This barnacle is a self-incompatible, simultaneous hermaphrodite which outcrosses only with adjacent individuals. Adult barnacles are fully sessile and occupy broad swaths of intertidal shores in both the North Pacific and North Atlantic oceans. These habitats experience high levels of cyclical and stochastic ecological heterogeneity which impose strong selection at multiple spatial scales: microhabitats (intertidal shores), mesohabitats (bays and estuaries), and macrohabitats (continental seaboards) (Schmidt et al. 2008; Nunez et al. 2020). Barnacle larvae, on the other hand, engage in extensive pelagic dispersal by current systems (70–100 km in 5–7 weeks) and may settle in habitats completely different from those of...
their parents (Flowerdew 1983). This contrast between strong adult selection and high juvenile dispersal prevents local adaptation. In addition, *S. balanoides* has a complex demography. It originated in the Pacific, and colonized the Atlantic during the many waves of the trans-Arctic interchange (1–3 Ma) (Vermeij 1991). Like most circumboreal species, it was subjected to drastic range shifts due to the Pleistocene glacial cycles (Wares and Cunningham 2001; Flight et al. 2012), and more recently due to anthropogenic climate change (Jones et al. 2012). As such, *S. balanoides* is a premier system to study how adaptive genetic variation is maintained over broad spatial and evolutionary scales, in the face of ecological load.
Three decades of work have shown that balancing selection, via marginal overdominance (a case where the harmonic mean fitness of heterozygous genotypes must be larger than that of either homozygote) (Levene 1953), maintains adaptive variation at the metabolic gene Mannose-6-phosphate isomerase (*Mpi*) in barnacles across the entire North Atlantic basin (Schmidt and Rand 1999; Dufresne et al. 2002; Rand et al. 2002; Veliz et al. 2004; Nunez et al. 2020). These findings motivate two questions which are addressed in this study. First, how pervasive are balanced polymorphisms in the barnacle genome? And, second, what genes are targets of balancing selection? To investigate functional polymorphism in *S. balanoides*, we quantified genomic variation in North Pacific and North Atlantic populations (fig. 1A–C). In the Pacific, we analyzed samples from British Columbia, Canada (WCAN) as well as a sample of the sister taxon *S. cariosus*. In the Atlantic, we analyzed samples from Maine (ME), Rhode Island (RI), Iceland (ICE), Norway (NOR), and the United Kingdom (UK). For all populations, we sequenced multiple libraries including: a single individual barnacle genome to $\sim$50× coverage, pools of 20–38 individuals per population (i.e., pool-seq; Schlötterer et al. 2014), as well
as ~600-bp amplicons from the mitochondrial (mtDNA) COX I gene (including previously published COX I data (Wares and Cunningham 2001)). We mapped these data sets to our newly assembled S. balanoides genome (supplementary appendix 1, Supplementary Material online) and characterized genetic diversity across all populations (supplementary appendix 2, Supplementary Material online). We first present our findings in the context of the barnacle’s phylogeography and demographic history. This is pivotal to understand the historical conditions which can contribute to ecological load. Then, we characterize the pervasiveness of balancing selection across the genome, as well as the age of balanced polymorphisms and their putative functional significance in highly heterogeneous environments.
Results
Standing Variation across Oceans
Our pool-seq panels discovered ~3M high-quality single nucleotide polymorphisms (SNPs) across populations at common allele frequencies (>5%). When linkage is removed at 500 bp, the SNP panel thins to ~690,000. Principal component analysis (PCA) on the linkage disequilibrium (LD)-thinned SNPs shows that variation is strongly subdivided by ocean basin (fig. 1D). PC1 captures 74% of the variation and partitions populations across basins. PC2 (8.5% var.) partitions Atlantic populations into two discrete east–west clusters. The western cluster contains ME, RI, and ICE, and the eastern cluster contains UK and NOR. These clusters are supported by the abundance of mtDNA haplotypes within and between ocean basins (fig. 1D inset; supplementary table S1, Supplementary Material online) (Wares and Cunningham 2001; Flight et al. 2012; Nunez et al. 2018). The large divergence between oceans is also captured in levels of nucleotide diversity ($\pi$; a metric of standing genetic variation). Surprisingly, North Atlantic populations harbor more genetic variation ($\pi = 1.05\%$) than their Pacific, ancestral, conspecifics ($\pi = 0.55\%$, fig. 1E; supplementary fig. S1, Supplementary Material online). We also estimated Tajima’s $D$ statistic, a neutrality test based on the comparison of two measures of nucleotide polymorphism: $\pi$ and Watterson’s theta ($\theta$). The value of the $D$ statistic is a good proxy for the excess ($D < 0$), or deficit ($D > 0$), of rare alleles in populations. These data indicate that all North Atlantic populations, especially NOR, have negatively skewed genome-wide values of $D$ (fig. 1F, supplementary fig. S2, Supplementary Material online).
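For readers unfamiliar with the $D$ statistic, the sketch below gives the textbook (Tajima 1989) calculation of $\pi$, $\theta$, and $D$ from per-site derived-allele counts; it is illustrative only and is not the pool-seq pipeline used in this study, which must additionally correct for pooled sequencing:

```python
import math

def tajimas_d(derived_counts, n):
    """Tajima's D from per-site derived-allele counts in a sample of n
    sequences (biallelic sites only)."""
    S = sum(1 for k in derived_counts if 0 < k < n)   # segregating sites
    if S == 0:
        return float("nan")
    # pi: mean pairwise differences, summed over sites
    pi = sum(2 * k * (n - k) / (n * (n - 1)) for k in derived_counts)
    # Watterson's theta and the standard variance constants
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    theta_w = S / a1
    return (pi - theta_w) / math.sqrt(e1 * S + e2 * S * (S - 1))

# Toy usage: an excess of rare derived alleles pushes D below zero.
print(tajimas_d([1, 1, 1, 1, 10], n=20))
```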
Historical Phylogeography and Structure
We reconstructed changes in historical effective population size ($N_e$) with the multiple sequentially Markovian coalescent model (MSMC) using individual whole genomes (Schiffels and Durbin 2014). Our results provide evidence for different phylogeographic trajectories in response to the events of the glaciations (fig. 1G and H). For instance, the Eastern Cluster and the Western Cluster populations shared a common demography throughout the Pleistocene (fig. 1G) but diverged in recent geological time. Namely, Eastern populations (especially NOR) experienced striking increases in $N_e$ in the recent past (fig. 1I), likely following the asynchronous deglaciation of the Fennoscandian ice sheet (Ruddiman and McIntyre 1981; Patton et al. 2017). Western populations, on the other hand, experienced a demographic contraction which started during the last glacial period and ended during the last glacial maximum (~20 ka; fig. 1J) (Brochmann et al. 2003; Maggs et al. 2008; Flight et al. 2012).
We estimated gene flow by computing $f_3$ statistics (Reich et al. 2009) for all possible combinations of target, source 1, and source 2 populations, using individual whole genomes (supplementary fig. S3 and table S2, Supplementary Material online). Our analysis finds no evidence of recent gene flow across oceans. This result is supported by two additional lines of evidence. First, a mtDNA molecular clock analysis (Drummond et al. 2002) suggests that Pacific and Atlantic populations have not exchanged migrants in nearly 2 My (supplementary appendix 3, Supplementary Material online); and second, estimates of genetic differentiation ($F_{ST}$) reveal large amounts of genome-wide divergence (supplementary fig. S4, Supplementary Material online) and foreshadow the onset of allopatric speciation across oceans. Within the North Atlantic, $F_{ST}$ is low (likely due to shared demography until the glacial maximum) and the $f_3$ analysis suggests that admixture is pervasive (supplementary fig. S3 and table S2, Supplementary Material online). These findings are supported by additional ABBA–BABA tests for gene tree heterogeneity (Green et al. 2010) (see supplementary appendix 4, Supplementary Material online). Overall, these findings present three important points: First, they exemplify the complex demography that underlies standing variation in natural populations; second, they confirm that barnacles harbor high levels of genetic variation genome-wide; and third, they reveal the pervasiveness of gene flow and shared variation within ocean basins, where environmental heterogeneity is extensive across “micro” (1–3 m) and “meso” (1–10 km) scales. These conditions provide the environmental context for ecological load at the genomic scale.
Balancing Selection in Barnacles
Balancing selection is expected to produce molecular and phylogenetic footprints not consistent with neutrality (Fijarczyk and Babik 2015). Molecular footprints include: enrichment of old alleles (e.g., trans-species polymorphisms; TSPs), elevated genetic variation (high $\pi$), deficit of rare alleles ($D > 0$), excess SNPs at medium allele frequencies, reduced divergence around the balanced locus (low $F_{ST}$), as well as the accumulation of nonsynonymous variation in the vicinity of balanced polymorphisms; a phenomenon known as sheltered load (Uyenoyama 2005). Likewise, balancing selection will produce a phylogenetic signal composed of diverged clades, corresponding to the balanced haplotypes. Deeply diverged clades will occur when balancing selection has maintained variation over long evolutionary times (i.e., ancestral balancing selection; Fijarczyk and Babik 2015). Notably, these signatures may become highly localized in the genome as the action of recombination over long periods of time will erode long-distance haplotype signatures.
Fig. 2. Evidence for balancing selection across the genome. (A) Enrichment analysis of TSPs across the genome of *Semibalanus balanoides* based on all populations studied. The asterisk symbols represent statistical significance. Prom, promoters; NS, nonsynonymous loci; S, synonymous loci; Cod, coding loci. (B) Plot of Tajima’s $D$ (as a function of length) of exons bearing TSPs versus all other exons not bearing TSPs. (C) Same as (B) but for nucleotide diversity ($\pi$). (D) Same as (B) but for mean $F_{ST}$. (E) Same as (B) but for the ratio of nonsynonymous heterozygosity to synonymous heterozygosity. (F) SFS for whole genes with TSPs versus other genes. Vertical bars are 95% confidence intervals. (G) Candidate genes under balancing selection ranked according to their CPD$_{w-b}$ values (interquartile ranges shown as error bars). Red values indicate statistical significance. Horizontal dashed line indicates CPD$_{w-b} = 0$. In the x-axis, the label “ancient” refers to allele trees whose topology violates the genome-wide phylogeographic expectation (e.g., fig. 1D). “Recent” denotes the opposite case. Three example allele tree topologies are shown. The sister taxon, *Semibalanus cariosus*, is shown as “Ov” (for outgroup). The x-axis for (B), (C), (D), and (E) is exon length ($\times 1,000$ bp).
A joint analysis of our Pacific, Atlantic, and outgroup (*S. cariosus*) data sets reveals 11,917 cosmopolitan SNPs (i.e., SNPs that segregate in all populations across both oceans) which are also TSPs (supplementary data set S1, Supplementary Material online). Genome-wide, TSPs occur in 0.14% of coding regions, 0.21% of introns, 0.02% of promoters, 0.01% of 5’-UTRs, and <0.01% of 3’-UTRs; the remainder occur in intergenic regions (0.09%). An enrichment analysis which compares the abundance of TSPs of each genomic class, relative to all discovered SNPs, reveals that TSPs are significantly enriched in coding loci (fig. 2A), and 4,415 segregate at high frequencies in all populations (TSPs with heterozygosity [$H_E$] > 0.30; supplementary fig. S5, Supplementary Material online). These patterns of variation could be the result of neutral processes such as recurrent mutation (homoplasy) across all populations of either species. However, the enrichment of cosmopolitan, nonsynonymous TSPs at common frequencies is not consistent with neutrality. Under a model of strict neutrality, segregating mutations are eventually lost in populations after speciation (Clark 1997). Moreover, coding regions are subjected to purifying selection, which removes deleterious and mildly deleterious nonsynonymous variants (Hartl and Clark 1997).
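A minimal sketch of a per-class enrichment comparison of TSPs against all discovered SNPs; the article does not specify the test behind the significance marks in fig. 2A, so a Fisher's exact test is assumed here for illustration:

```python
from scipy.stats import fisher_exact

def tsp_enrichment(tsp_by_class, snp_by_class):
    """For each genomic class, test whether TSPs are over-represented
    relative to all discovered SNPs (2x2 table: TSP vs non-TSP, in-class
    vs out-of-class). Returns {class: (odds_ratio, p_value)}."""
    total_tsp = sum(tsp_by_class.values())
    total_snp = sum(snp_by_class.values())
    results = {}
    for cls, tsp in tsp_by_class.items():
        snp = snp_by_class[cls]
        table = [[tsp, total_tsp - tsp],
                 [snp - tsp, (total_snp - snp) - (total_tsp - tsp)]]
        odds, p = fisher_exact(table, alternative="greater")
        results[cls] = (odds, p)
    return results

# Toy usage with invented counts per genomic class:
print(tsp_enrichment({"coding": 400, "intron": 600, "intergenic": 300},
                     {"coding": 120_000, "intron": 400_000, "intergenic": 500_000}))
```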
We compared patterns of genetic variation in exons bearing TSPs and other exons. When accounting for exon length, we observe consistently elevated values of $D$ and $\pi$ for TSP-bearing exons relative to other exons (fig. 2B and C; supplementary fig. S6, Supplementary Material online). Except for the ME versus RI comparison (supplementary fig. S7, Supplementary Material online), TSP-bearing exons have consistently low $F_{ST}$ values (fig. 2D). To quantify sheltered load, we compared the ratio of $H_E$ values at nonsynonymous and synonymous mutations in TSP-bearing and other exons. Our results show that medium-sized TSP-bearing exons ($\sim$500 bp) harbor an excess of nonsynonymous $H_E$ (fig. 2E). Notably, we observed that differences between TSP-bearing and other exons become less apparent as exons get longer. The relationship between exon size and the intensity of the balancing selection signatures depends on local recombination rates. Although exact recombination rates are not yet available for *Semibalanus*, empirical data suggest that LD decays at distances <1 kb (supplementary fig. S8, Supplementary Material online). As such, the signals of deviation from neutrality are more apparent in shorter exons, relative to longer ones. We observe 1,107 TSPs that cause nonsynonymous changes and occur in 312 genes with high-confidence annotations (4%; supplementary data set S2, Supplementary Material online). Consistent with our expectation of balancing selection, site frequency spectrum (SFS) analyses show that these 312 candidate genes harbor an excess of SNPs at medium allele frequencies relative to other annotated genes (fig. 2F).
**Age of Balanced Polymorphisms**
To determine the age of the putatively balanced polymorphisms, we ran topological tests on the allele trees for each TSP region across the 312 candidate genes. We built trees using phased haplotypes for each TSP-bearing region for all single-individual genomes. We used these allele trees to compute the cophenetic distance (CPD) between tips. We classified allele trees as having or lacking highly diverged alleles based on the relative mean CPD between haplotypes from the same population versus from different populations (CPD$_{w-b}$; see supplementary methods, Supplementary Material online). The analysis reveals that of the 312 allele trees, 150 carry a significant signature of ancestral balancing selection (CPD$_{w-b} > 0$, Bonferroni $P < 1 \times 10^{-6}$; fig. 2G, supplementary data set S2, Supplementary Material online). This suggests maintenance of diverged haplotypes for more than 2 My, with extreme cases in which haplotypes are shared across species (8–10 My) (Perez-Losada et al. 2008; Herrera et al. 2015). The remaining genes with CPD$_{w-b} < 0$ may represent either cases where the balanced alleles are younger or oversampling of homozygous individuals for any given marker.
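The classification can be illustrated with a small sketch of the CPD$_{w-b}$ comparison; the input representations are invented, and the paper's exact estimator and significance procedure are not reproduced:

```python
import itertools
import numpy as np

def cpd_w_minus_b(cpd, pop_of):
    """CPD_{w-b} for one allele tree: mean cophenetic distance between
    haplotypes from the same population minus the mean distance between
    haplotypes from different populations. Positive values indicate
    haplotype clades deeper than the population structure, the signature
    of ancestral balancing selection described above.
    `cpd` maps frozenset tip pairs to distances; `pop_of` maps tip -> population."""
    within, between = [], []
    for a, b in itertools.combinations(sorted(pop_of), 2):
        d = cpd[frozenset((a, b))]
        (within if pop_of[a] == pop_of[b] else between).append(d)
    return float(np.mean(within) - np.mean(between))

# Toy usage: two tips per population, one deep split shared across populations.
cpd = {frozenset(p): 10.0 for p in itertools.combinations("abcd", 2)}
cpd[frozenset("ab")] = cpd[frozenset("cd")] = 10.0   # within-population pairs
cpd[frozenset("ac")] = cpd[frozenset("bd")] = 0.5    # same haplotype class
print(cpd_w_minus_b(cpd, {"a": "ME", "b": "ME", "c": "NOR", "d": "NOR"}))
```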
**Targets of Selection**
We partitioned our data set among candidate genes with positive and negative CPD$_{w-b}$ allele trees and conducted gene ontology (GO) enrichment analyses. The 150 genes with positive CPD$_{w-b}$ trees show enrichment for terms related to “ion channel regulation,” including genes involved in environmental sensing and circadian rhythm regulation (supplementary table S3, Supplementary Material online). We show examples for three candidate genes under ancestral balancing selection involved in environmental sensing: 1) the painless gene ($Pain$; g1606; fig. 3A), which is involved in nociception (i.e., pain reception), as well as detection of heat and mechanical stimuli (Tracey et al. 2003; Xu et al. 2006); 2) the Pyrexia gene ($Pyx$; g3472; fig. 3B), which is involved in negative geotaxis, and responses to heat (Lee et al. 2005); and 3) the shaker cognate w gene ($Shaw$; g3310; fig. 3C), which is involved in regulation of circadian rhythm (Hodge and Stanewsky 2008; Buhl et al. 2016). These three examples showcase canonical footprints of balancing selection around the TSP, concomitant with a bimodal allele tree.
Among genes with negative CPD$_{w-b}$, we observe enriched functions for “anatomical structure formation” including genes coding for motor proteins and muscle genes (supplementary table S4, Supplementary Material online). In all cases, we used RNA-seq data from ME individuals to confirm that these loci are expressed in adult barnacles.
**Discussion**
In intertidal barnacles, the dichotomy of strong adult selection and high offspring dispersal means that any allele that is beneficial to parental fitness in one generation may be neutral or deleterious in the next (Gillespie 1973). This leads to a fundamental question in evolutionary biology: How are adaptations maintained in the face of extreme ecological variability? In this article, we provide evidence suggesting that balancing selection is widespread across the barnacle genome, with 4% of annotated genes harboring putatively balanced polymorphisms. Notably, these polymorphisms occur in genes with functions that may be important for life in variable environments, and many have been maintained for at least 2 My despite a complex phylogeographic history (Wares and Cunningham 2001; Flight and Rand 2012). Naturally, the heterogeneous nature of the rocky intertidal imposes a segregation “cost” for these balanced polymorphisms, as they occur in individuals that, due to high dispersal, recruit in suboptimal habitats for any given genetic makeup. This ecological load, defined as $L_e = (W_{\text{max}} - W)/W_{\text{max}}$ (where $W$ is mean fitness, and $W_{\text{max}}$ is optimal fitness, across all habitats), will be substantial, as demonstrated by comprehensive recruitment studies in natural habitats (Bertness 1989; Bertness et al. 1992; Pineda et al. 2006). For example, at initial settlement, barnacle density can be as high as 76 individuals per cm$^2$, but at maturity, it can be as low as 0.15 individuals per cm$^2$ (0.2% survival) (Pineda et al. 2006). This mass mortality is habitat- and genotype-dependent (Schmidt and Rand 2001). This is the type of “fitness cost” envisioned in the Levene model of balancing selection (Levene 1953). As such, our data suggest that the problem of ecological load is a defining condition of the barnacle life cycle. More generally, it argues that balancing selection, via marginal overdominance, may be the fundamental process underlying the maintenance of adaptation in variable environments.
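Plugging illustrative numbers into the load formula shows how quickly dispersal into suboptimal habitats generates load; the fitness values below are invented for the example and are not estimates from the cited recruitment studies:

```python
def ecological_load(w_mean, w_max):
    """L_e = (W_max - W) / W_max, the ecological (segregational) load."""
    return (w_max - w_mean) / w_max

# Toy two-habitat example: a genotype survives at 0.004 in the habitat it
# settles in half the time, and at 0.0004 in the other, versus an optimal
# genotype-habitat match of 0.004 everywhere.
w_mean = 0.5 * 0.004 + 0.5 * 0.0004
print(ecological_load(w_mean, 0.004))   # -> 0.45: substantial load
```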
**Is Pervasive Balancing Selection Plausible in Nature?**
Under classical models of population genetics, when loci are considered to be independent of each other, the additive effects of widespread balanced polymorphism result in unbearable amounts of fitness variance and genetic death (Kimura and Crow 1964; Lewontin and Hubby 1966). However, if balanced loci have interactive effects (e.g., epistasis), multiple polymorphisms could be maintained with minimal effects on the distribution of fitness variance (King 1967; Milkman 1967; Sved et al. 1967; Wittmann et al. 2017). Based on this theoretical framework, multiple models have been developed to describe the conditions that favor the long-term maintenance of functional variation in spatially varying environments (Gillespie 1973; Hedrick et al. 1976). Moreover, polymorphisms will be less likely to be lost if there is a large number of ecological niches available, if there is migration among niches, and if individuals are proactive in choosing niches where their fitness is maximized (Hedrick et al. 1976). We argue that barnacles satisfy these conditions to some degree.
First, although it is useful to summarize intertidal heterogeneity in the form of discrete microhabitats (Schmidt et al. 2000), individual barnacles experience the rocky shore as a complex tapestry of interactive stressors at three spatial levels. At microhabitat scales, the upper and lower tidal zones pose diametrically different ecological challenges in terms of food availability, competition, predation, and risk of desiccation (Bertness et al. 1991; Schmidt and Rand 1999, 2001). At mesohabitat scales, open coasts versus sheltered estuaries vary in their exposure to wave action, upwelling dynamics, and biotic interactions (Sanford and Menge 2001; Dufresne et al. 2002; Veliz et al. 2004). These, in turn, modify microlevel stressors. Lastly, at macrohabitat scales, topographic differences across shores and latitudinal variation in tidal range produce a mosaic of thermal stress along continents (Helmuth et al. 2002). Consequently, which selection pressures matter most for any given barnacle will emerge from the interactions among these stress gradients. This complex landscape of selection has been captured in studies of the barnacle *Mpi* gene: the locus is under selection at microlevels in the Gulf of Maine (Schmidt and Rand 1999; Schmidt et al. 2000) and at mesolevels in the Gulf of St. Lawrence (Canada) (Dufresne et al. 2002; Veliz et al. 2004), yet it shows tepid signs of selection in Narragansett Bay (Rhode Island) (Rand et al. 2002; Nunez et al. 2020). Similar complexity has also been captured in temperate populations of *Drosophila*, in which idiosyncratic weather effects can alter the dynamics of seasonal adaptation (Bergland et al. 2014; Machado et al. 2019). Second, the high dispersal capacity of the larval stage ensures constant migration between these niches across generations. Finally, barnacles also have the ability to choose preferred substrates during settlement. This occurs during the spring, when barnacle larvae extensively survey microhabitats for biological, chemical, and physical cues produced by previous settlers before making a final commitment to a settlement site (Bertness et al. 1992). Unfortunately for the barnacle, this capacity for substrate choice does not mitigate mass mortality during late summer, which leads to strong selection for particular genotypes (Schmidt and Rand 2001). Currently, there is limited evidence for genotype-specific substrate selection or nonrandom settlement (Veliz et al. 2006); a cohort-tracking and sequencing experiment could address this question directly (such experiments are underway). If such behaviors exist, they may constitute a form of adaptive plasticity, helping barnacles choose habitats where their fitness is marginally improved. Overall, this suggests that the barnacle's life history is conducive to the maintenance of balanced polymorphisms.
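The core logic of the Levene (1953) model invoked above can be made concrete with a minimal numerical sketch under soft selection. The niche weights and genotype fitnesses below are hypothetical values chosen only to illustrate how spatially varying selection can protect a polymorphism:

```python
# Minimal sketch of the Levene (1953) soft-selection model.
# All niche weights and fitness values are hypothetical, chosen only
# to illustrate a protected polymorphism (marginal overdominance).

niches = [
    (0.5, (1.0, 0.9, 0.6)),  # niche 1: weight c, fitnesses (w_AA, w_Aa, w_aa); AA favored
    (0.5, (0.6, 0.9, 1.0)),  # niche 2: aa favored
]

def next_p(p):
    """One generation: selection within each niche, then pooled
    contributions proportional to fixed niche weights (soft selection)."""
    q = 1.0 - p
    p_next = 0.0
    for c, (w_AA, w_Aa, w_aa) in niches:
        w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa  # mean fitness in niche
        p_next += c * p * (p * w_AA + q * w_Aa) / w_bar         # freq of A after selection
    return p_next

p = 0.01  # allele A starts rare
for _ in range(500):
    p = next_p(p)
print(f"frequency of A after 500 generations: {p:.3f}")  # settles at an interior equilibrium
```

With these illustrative values, an allele favored in only one niche invades when rare and settles at an intermediate equilibrium frequency, the behavior classically described as marginal overdominance.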
**Pool-seq in Ecological Genomics**
Our analysis was conducted using pool-seq data. As such, it is important to highlight known caveats associated with this genotyping technology (Anderson et al. 2014; Anand et al. 2016; Nunez et al. 2018). Although highly cost-effective, the accuracy of pool-seq depends strongly on the number of individuals pooled, the sequencing coverage, and the sequencing technology. These caveats can become pronounced when working on nonmodel systems, where enforcing uniform sample sizes across populations may be logistically challenging. In particular, pool-seq experiments that deviate from the recommended design (Gautier et al. 2013) produce inaccurate estimates of allele frequency, including undersampling of rare alleles and oversampling of fixed sites (Anderson et al. 2014). These systematic errors have notable impacts when estimating demographic parameters. We ameliorated these shortcomings using a two-pronged approach. First, for each population sampled, we sequenced both a single individual and a pool: the single individual allowed us to estimate demographic parameters, while the pool allowed us to survey common variation across populations. Thus, although each approach has unique shortcomings, their combination provides a robust data set for addressing the questions presented in this study. In addition, because sequencing errors must be filtered out, most implementations of high-throughput sequencing in ecological genomics produce skewed SFS distributions by undersampling low-frequency mutations (Achaz 2008, 2009). This problem is exacerbated in pool-seq experiments and can bias estimates of common statistics such as $\theta$ and, consequently, Tajima's $D$. However, because we are interested in patterns of genetic variation at common variants, our analyses are less susceptible to this drawback.
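The sampling issue can be illustrated schematically with a toy two-stage simulation; the pool sizes, coverage, and true allele frequency below are arbitrary illustrative values, not our study's parameters:

```python
import random

def pooled_freq_estimate(true_p, n_diploids, coverage):
    """Two-stage pool-seq sampling: chromosomes into the pool, then reads off the pool."""
    chroms = 2 * n_diploids
    # stage 1: sample 2N chromosomes from the population into the pool
    alt_in_pool = sum(random.random() < true_p for _ in range(chroms))
    pool_p = alt_in_pool / chroms
    # stage 2: sample `coverage` reads from the pooled DNA
    alt_reads = sum(random.random() < pool_p for _ in range(coverage))
    return alt_reads / coverage

random.seed(1)
for n_ind, cov in [(10, 20), (40, 20), (40, 100)]:
    reps = [pooled_freq_estimate(0.05, n_ind, cov) for _ in range(10_000)]
    missed = sum(r == 0.0 for r in reps) / len(reps)
    print(f"pool={n_ind} ind, {cov}x: mean est={sum(reps) / len(reps):.3f}, "
          f"P(allele missed)={missed:.2f}")
```

Even at moderate coverage, a small pool frequently misses a rare allele entirely, which is precisely the undersampling behavior described above.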
**What Variation Is under Selection?**
Our analyses suggest that 4% (312) of all annotated genes are candidates for balancing selection across the entire range of the species. Although follow-up experiments are needed to determine the replicability and functional importance of these variants, our evidence for balancing selection is consistent with patterns reported for other species. For example, the number of candidate genes in *Semibalanus* is similar to that observed in *Arabidopsis thaliana* and its close relative *Capsella rubella* (433 genes) (Wu et al. 2017). Similar to *Semibalanus*, these plants diverged $\sim$8 Ma, and their natural populations experience high levels of ecological heterogeneity (Bakker et al. 2006). We must acknowledge that our number may be an underestimate driven by the nascent state of genomic tools in *Semibalanus*. Future genome assemblies, combined with improved annotations, will undoubtedly yield a more complete picture of functional variation in the species and will allow a more comprehensive characterization of selection on structural variants and regulatory loci, which have been shown to be fundamental in the evolution of complex phenotypes (Wray 2007; Faria et al. 2019). Despite these limitations, our analysis recovered many candidate genes involved in functions that may be key for life in variable environments. Without further functional validation, the connections between these genes and barnacle ecology remain speculative. However, many of these candidates have been studied in other systems in the context of stress tolerance and are therefore fertile ground for hypothesis generation and follow-up experiments. For instance, the general enrichment for ion channel genes suggests selection related to osmotic regulation (Sundell et al. 2019). This hypothesis is plausible given that intertidal ecosystems experience strong salinity fluctuations, repeatedly exposing barnacles to osmotic challenges at all spatial scales. In addition, we observe targets of selection at environmental-sensing loci (e.g., *pain*, *pyx*, and *Shaw*; fig. 3). As with osmotic regulation, selection on these genes is plausible given the inherent variability of intertidal habitats. An important hypothesis from the allozyme era is the idea that balancing selection targets genes at the nodes of metabolic flux (Eanes 1999; Watt and Dean 2000); in such cases, balanced variation would provide biochemical flexibility to cope with environmental heterogeneity. In the same vein, we hypothesize that balancing selection may act more often on “sensor genes” that control plastic responses to ecological variation. Testing this hypothesis is beyond the scope of this study and would require allele-specific differential expression experiments in barnacles. We also note that evidence of balancing selection and TSPs at the *Mpi* gene is discussed in Nunez et al. (2020).
**Complex Demography and Speciation**
Our demographic analyses provide clues about how historical events affected genetic variation in barnacle populations. In the Atlantic, our evidence suggests a shared demography throughout the Pleistocene, and that the modern Eastern and Western clusters formed in response to recent events of the last glacial cycle. These findings highlight that the low $F_{ST}$ values observed within the basins arise from shared ancestry. Moreover, they also suggest that population structure persists in the presence of gene flow. As such, although larvae have the capacity to disperse for hundreds of kilometers, ocean currents (Nunez et al. 2018) and different estuarine flushing times (Brown et al. 2001) allow regions to retain some level of geographical structuring (Johannesson et al. 2018; Nunez et al. 2018). Comparisons between oceans reveal a stark pattern of genome-wide divergence. This pattern is driven by the separation of Pacific and Atlantic populations following the events of the trans-Arctic interchange (Vermeij 1991). Accordingly, the negative levels of $D$ in the North Atlantic may reflect the effect of bottlenecks during the trans-Arctic interchange. Notably, the high levels of $\pi$ in the Atlantic are not concordant with predictions of common colonization models, in which variation in the younger population is a subset of that in the ancestral population (Maggs et al. 2008). We hypothesize that this could be the result of ancient admixture due to repeated trans-Arctic invasions from the Pacific (Väinölä 2003). We recognize that ancestral admixture could generate artificial signatures of balancing selection via the mixing of highly differentiated haplotypes. However, such an occurrence would affect most genes in the genome, whereas our evidence shows that the signatures of balancing selection are highly localized in TSP regions. For example, although $D$ is elevated in TSP regions, it is negatively skewed genome-wide. Our data do not support recent gene flow between ocean basins. As such, after 2 My of separation, neutral divergence appears to be driving Atlantic and Pacific populations to speciate in allopatry. A closer look at this hypothesis will require crossing individuals from both basins and surveying offspring fitness and viability. More salient, however, is the observation of shared haplotypes between oceans in our candidate genes for balancing selection. In light of such strong background divergence, this provides evidence that balancing selection on most of these genes is strong and that these polymorphisms have been maintained for long periods of time.
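For reference, the statistic discussed here follows the standard definition of Tajima's $D$ (the textbook formulation, not anything specific to our pipeline):

$$D \;=\; \frac{\hat{\theta}_{\pi} - \hat{\theta}_{W}}{\sqrt{\widehat{\mathrm{Var}}\!\left(\hat{\theta}_{\pi} - \hat{\theta}_{W}\right)}},$$

where $\hat{\theta}_{\pi}$ is the pairwise-diversity estimator of $\theta$ and $\hat{\theta}_{W}$ is Watterson's estimator. An excess of rare variants (as expected after a bottleneck followed by expansion) pushes $D$ negative, whereas an excess of intermediate-frequency variants, as expected around balanced polymorphisms, pushes it positive. This is why locally elevated $D$ within TSP regions stands out against a negatively skewed genome-wide background.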
**Materials and Methods**
**Barnacle Collections**
Barnacle samples were collected from Damariscotta (Maine, United States; ME), Jamestown (Rhode Island, United States; RI), Calvert Island (British Columbia, Canada; WCAN), Reykjavik (Iceland; ICE), Porthaen (Wales, United Kingdom; UK), and Norddal (Norway; NOR). Additional samples were collected in Bergen (Norway), Torshavn (Faroe Islands), and Tjärnö (Sweden). For all samples, species identities were confirmed using Sanger sequencing of the mtDNA COX I region (Bucklin et al. 2011). For the WCAN, RI, ME, ICE, UK, and NOR populations, we collected a single individual for DNA-seq and a group of 20–40 individuals for pool-seq (supplementary appendix 2, Supplementary Material online). RNA-seq was done on four individuals from Maine. DNA-seq was done on a single individual from the sister taxon *S. cariosus*. DNA/RNA was extracted using Qiagen DNeasy/RNeasy kits. All pools and single individuals were sequenced in their own lanes on an Illumina instrument by GENEWIZ LLC using a 2 × 150 paired-end configuration.
**Mapping Data Sets to the Genome**
Samples were mapped to a genome assembled de novo for the species (Sbal3.1; NCBI GenBank accession: VOPJ00000000; BioProject: PRJNA575748; BioSample: SAMN12406453; supplementary appendix 1, Supplementary Material online). The genome was assembled using a hybrid approach combining PacBio and Illumina reads with DBG2OLC (Ye et al. 2016) and Redundans (Pryszcz and Gabaldón 2016). Gene models were constructed using an ab initio method, AUGUSTUS (Stanke and Waack 2003), informed by evidence from the RNA-seq data. A gene feature file (GFF) is available as part of the Supplementary Material online.
The model used for gene prediction was trained on *Drosophila melanogaster*. Genes were annotated by pairwise BLAST against the *D. melanogaster* genome (Dmel; NCBI GenBank: GCA_000001215.4). All annotations are available as supplementary data set S5, Supplementary Material online. DNA reads from all populations were mapped to Sbal3.1 using bwa mem (Li 2013). RNA reads were mapped using HISAT2 (Kim et al. 2015). SNPs were called using the SAMtools pipeline (Li et al. 2009). Short-read phasing was done using HapCUT2 (Edge et al. 2017). LD in pools was estimated using LDx (Feder et al. 2012).
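For orientation, the read-processing steps above can be sketched as follows; file names are placeholders and exact flags vary across tool versions, so this is an illustrative outline rather than the project's production pipeline:

```python
import subprocess

# Placeholder file names -- not the project's actual paths.
REF = "Sbal3.1.fasta"
R1, R2 = "pool_R1.fastq.gz", "pool_R2.fastq.gz"

def run(cmd):
    """Run one pipeline step in the shell, failing loudly on error."""
    subprocess.run(cmd, shell=True, check=True)

run(f"bwa index {REF}")                                   # build the BWA index (once per reference)
run(f"bwa mem {REF} {R1} {R2} > pool.sam")                # map paired-end reads
run("samtools sort -o pool.sorted.bam pool.sam")          # coordinate-sort the alignments
run("samtools index pool.sorted.bam")                     # index for random access
# Variant calling via the samtools/bcftools route (exact flags vary by version):
run(f"bcftools mpileup -f {REF} pool.sorted.bam | bcftools call -mv -Oz -o pool.vcf.gz")
```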
**Genome Analyses**
Estimates of $\pi$ and $D$ were computed using the PoPoolation suite (Kofler, Orozco-terWengel, et al. 2011). Allele frequencies and $F_{ST}$ were estimated using the PoPoolation2 suite (Kofler, Pandey, et al. 2011). Demographic reconstructions were done using MSMC (Schiffels and Durbin 2014). The $f_3$ statistics were estimated using TreeMix (Pickrell and Pritchard 2012). Bayesian molecular clock analyses were done in BEAST2 (Bouckaert et al. 2014). ABBA/BABA statistics were calculated in Dsuite (Malinsky et al. 2020). Phylogenetic inferences were done in IQ-TREE (Chernomor et al. 2016). GO enrichment analysis was done using GOrilla (Eden et al. 2009), with GO terms inferred from our *Drosophila* annotation. Enrichment was assessed by comparing two gene lists: the first composed of the genes of interest (i.e., the gene targets), and the second of all genes annotated in Sbal3.1 (i.e., the gene universe). A detailed description of our analyses can be found in the supplementary methods section, Supplementary Material online, as well as on GitHub: https://github.com/Jcbnunez/BarnacleEcoGenomics.
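At its core, the two-list comparison reduces to an upper-tail hypergeometric test per GO term; the sketch below uses invented counts purely for illustration (GOrilla's ranked-list statistic is more elaborate):

```python
from scipy.stats import hypergeom

# Hypothetical counts -- for illustration only.
N = 7800   # genes in the universe (all annotated genes)
K = 120    # universe genes annotated with the GO term
n = 150    # genes in the target list (e.g., positive CPD trees)
k = 12     # target-list genes carrying the term

# P(X >= k) under sampling without replacement: upper-tail hypergeometric test.
p_value = hypergeom.sf(k - 1, N, K, n)
fold_enrichment = (k / n) / (K / N)
print(f"fold enrichment = {fold_enrichment:.1f}, P = {p_value:.2e}")
```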
**Supplementary Material**
Supplementary data are available at Molecular Biology and Evolution online.
**Acknowledgments**
All the authors would like to acknowledge the Centre for Marine Evolutionary Biology (CeMEB) at the University of Gothenburg, which organized the Marine Evolution 2018 meeting in which most of the authors met and started this collaborative work. In addition, they acknowledge S. Ramachandran, E. Huerta-Sanchez, D. Sax, D.R. Gaddes, and R.E.F. Gordon for their support and helpful insights, E. Sanford for providing the sample of *Semibalanus cariosus*, M.D. Rand and family for collecting barnacles in Norddal, Norway, and C. Harley for providing the samples from British Columbia. We thank the Natural Environment Research Council, the European Research Council, the Swedish Research Councils VR and Formas (Linnaeus Grant to CeMEB), and SciLife Laboratory. This research was conducted using computational resources and services at the Center for Computation and Visualization, Brown University. J.C.B.N., K.B.N., and S.R. were supported by Graduate Research Fellowships (GRFP) from the National Science Foundation (NSF). J.C.B.N. received additional support from the NSF-Graduate Research Opportunities Worldwide (GROW) fellowship, as well as a fellowship from the Swedish Royal Academy of Sciences (*Kungliga Vetenskapsakademien*; KVA). D.A.F. was supported by a Brown University Karen T. Romer Undergraduate Teaching and Research Award (UTRA). A.D.S. was supported by the NSF (Grant No. OCE-1829835) and a Fulbright Spain Graduate Studies Scholarship. This work was supported by the NSF Integrative Graduate Education and Research Traineeship Grant (IGERT: DGE-0966060) and the National Institutes of Health (NIH: 2R01GM067862) to D.M.R., a Carl Trygger Foundation grant (CTS 11:14) to M.A.R., the Swedish Research Council (*Vetenskapsrådet*; Grant Nos. 2017-04559 to A.B., 2017-03798 to K.J.), the Melzer Research Fund to H.G., and the Bushnell Graduate Research and Education Fund (EEB Doctoral Dissertation Enhancement Grant) to J.C.B.N.
**Data Deposition**
Data used in this study are available in the National Center for Biotechnology Information (NCBI), https://www.ncbi.nlm.nih.gov (last accessed September 13, 2020). Raw reads were deposited under submission ID: SUB6188969. SRA accessions are as follows: DNA-seq data sets: SRR10011789, SRR10011802, SRR10011804, SRR10011805, SRR10011807–SRR10011810, SRR10011812–SRR10011814, SRR10011819, and SRR10011825. PacBio data set: SRR10011818; and RNA-seq data sets: SRR10011820–SRR10011823. mtDNA sequences for the COX I genes can be accessed via the following GenBank accession numbers: MG925538–MG925662, MG928182–MG928233, and MT329074–MT329592. Whole mtDNAs were deposited under accession numbers MG010647, MG010648, MG010649, MT528636, and MT528637. The barnacle genome (Sbal3.1) is available at NCBI (accession number VOPJ00000000). A GitHub repository with code as well as with the supplementary data sets S1–S5 can be found at https://github.com/Jcbnunez/BarnacleEcoGenomics (last accessed September 13, 2020).
**References**
Achaz G. 2008. Testing for neutrality in samples with sequencing errors. *Genetics* 179(3):1409–1424.
Achaz G. 2009. Frequency spectrum neutrality tests: one for all and all for one. *Genetics* 183(1):249–258.
Anand S, Mangano E, Barizzone N, Bordoni R, Sorosina M, Clarelli F, Corrado L, Martinelli Boneschi F, D’Alfonso S, De Bellis G. 2016. Next generation sequencing of pooled samples: guideline for variants’ filtering. *Sci Rep.* 6(1):33735.
Anderson EC, Skaug HJ, Barshis DJ. 2014. Next-generation sequencing for molecular ecology: a caveat regarding pooled samples. *Mol Ecol.* 23(3):502–512.
Bakker EG, Stahl EA, Toomajian C, Nordborg M, Kreitman M, Bergelson J. 2006. Distribution of genetic variation within and among local populations of *Arabidopsis thaliana* over its species range. *Mol Ecol.* 15(5):1405–1418.
Bergland AO, Behrman EL, O’Brien KR, Schmidt PS, Petrov DA. 2014. Genomic evidence of rapid and stable adaptive oscillations over seasonal time scales in *Drosophila*. *PLoS Genet.* 10(11):e1004775.
Bertness MD. 1989. Intraspecific competition and facilitation in a northern acorn barnacle population. *Ecology* 70(1):257–268.
Bertness MD, Gaines SD, Bermudez D, Sanford E. 1991. Extreme spatial variation in the growth and reproductive output of the acorn barnacle *Semibalanus balanoides*. *Mar Ecol Prog Ser.* 75:91–100.
Bertness MD, Gaines SD, Stephens EC, Yund PO. 1992. Components of recruitment in populations of the acorn barnacle *Semibalanus balanoides* (Linnaeus). *J Exp Mar Biol Ecol.* 156(2):199–215.
Botero CA, Weissing FJ, Wright J, Rubenstein DR. 2015. Evolutionary tipping points in the capacity to adapt to environmental change. *Proc Natl Acad Sci U S A.* 112(1):184–189.
Bouckaert R, Heled J, Kuhnert D, Vaughan T, Wu CH, Xie D, Suchard MA, Rambaut A, Drummond AJ. 2014. BEAST 2: a software platform for Bayesian evolutionary analysis. *PLoS Comput Biol.* 10(4):e1003537.
Brochmann C, Gabrielsen TM, Nordal I, Landvik JV, Elven R. 2003. Glacial survival or tabula rasa? The history of North Atlantic biota revisited. *Taxon* 52(3):417.
Brown AF, Kann LW, Rand DM. 2001. Gene flow versus local adaptation in the northern acorn barnacle, *Semibalanus balanoides*: insights from mitochondrial DNA variation. *Evolution* 55(10):1972–1979.
Bucklin A, Steinke D, Blanco-Bercial L. 2011. DNA barcoding of marine metazoa. *Annu Rev Mar Sci.* 3(1):471–508.
Buhl E, Bradlaugh A, Ogueta M, Chen RF, Stanewsky R, Hodge JJ. 2016. Quasimodo mediates daily and acute light effects on *Drosophila* clock neuron excitability. *Proc Natl Acad Sci U S A.* 113(47):13486–13491.
Chernomor O, von Haeseler A, Minh BQ. 2016. Terrace aware data structure for phylogenomic inference from supermatrices. *Syst Biol.* 65(6):997–1008.
Clark AG. 1997. Neutral behavior of shared polymorphism. *Proc Natl Acad Sci U S A.* 94(15):7730–7734.
Drummond AJ, Nicholls GK, Rodrigo AG, Solomon W. 2002. Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data. *Genetics* 161(3):1307–1320.
Dufresne F, Bourget E, Bernatchez L. 2002. Differential patterns of spatial divergence in microsatellite and allozyme alleles: further evidence for locus-specific selection in the acorn barnacle, *Semibalanus balanoides*. *Mol Ecol.* 11(1):113–123.
Eanes WF. 1999. Analysis of selection on enzyme polymorphisms. *Annu Rev Ecol Syst.* 30(1):301–326.
Eden E, Navon R, Steinfeld I, Lipson D, Yakhini Z. 2009. GOrilla: a tool for discovery and visualization of enriched GO terms in ranked gene lists. *BMC Bioinformatics* 10(1):48.
Edge P, Bafna V, Bansal V. 2017. HapCUT2: robust and accurate haplotype assembly for diverse sequencing technologies. *Genome Res.* 27(5):801–812.
Faria R, Johnstone KA, Butlin RK, Westram AM. 2019. Evolving inversions. *Trends Ecol Evol.* 34(3):239–248.
Feder AF, Petrov DA, Bergland AO. 2012. LDx: estimation of linkage disequilibrium from high-throughput pooled resequencing data. *PLoS One* 7(11):e48588.
Fijarczyk A, Babik W. 2015. Detecting balancing selection in genomes: limits and prospects. *Mol Ecol.* 24(14):3529–3545.
Flight PA, O'Brien MA, Schmidt PS, Rand DM. 2012. Genetic structure and the North American postglacial expansion of the barnacle, *Semibalanus balanoides*. *J Hered.* 103(2):153–165.
Flight PA, Rand DM. 2012. Genetic variation in the acorn barnacle from allozymes to population genomics. *Integr Comp Biol.* 52(3):418–429.
Flowerdew MW. 1983. Electrophoretic investigation of populations of the cirripede *Balanus balanoides* (L) around the North Atlantic seaboard. *Crustaceana* 45(3):260–278.
Gautier M, Foucaud J, Gharbi K, Cezard T, Galan M, Loiseau A, Thomson M, Pudlo P, Kerdelhué C, Estoup A. 2013. Estimation of population allele frequencies from next-generation sequencing data: pool-versus individual-based genotyping. *Mol Ecol.* 22(14):3766–3779.
Gillespie J. 1973. Polymorphism in random environments. *Theor Popul Biol.* 4(2):193–195.
Green RE, Krause J, Briggs AW, Maricic T, Stenzel U, Kircher M, Patterson N, Li H, Zhai W, Fritz MHY, et al. 2010. A draft sequence of the Neandertal genome. *Science* 328(5979):710–722.
Hartl DL, Clark AG. 1997. Principles of population genetics. Sunderland (MA): Sinauer Associates.
Hedrick PW. 2006. Genetic polymorphism in heterogeneous environments: the age of genomics. *Annu Rev Ecol Evol Syst.* 37(1):67–93.
Hedrick PW, Ginevan ME, Ewing EP. 1976. Genetic polymorphism in heterogeneous environments. *Annu Rev Ecol Syst.* 7(1):1–32.
Helmuth B, Harley CD, Halpin PM, O'Donnell M, Hofmann GE, Blanchette CA. 2002. Climate change and latitudinal patterns of intertidal thermal stress. *Science* 298(5595):1015–1017.
Herrera S, Watanabe H, Shank TM. 2015. Evolutionary and biogeographical patterns of barnacles from deep-sea hydrothermal vents. *Mol Ecol.* 24(3):673–689.
Hodge JJ, Stanewsky R. 2008. Function of the Shaw potassium channel within the *Drosophila* circadian clock. *PLoS One.* 3(5):e2274.
Johannesson K, Ring AK, Johannesson KB, Renberg E, Jonsson PR, Havenhand JN. 2018. Oceanographic barriers to gene flow promote genetic subdivision of the tunicate *Ciona intestinalis* in a North Sea archipelago. *Mar Biol.* 165(8):126.
Jones SJ, Southward AJ, Wethey DS. 2012. Climate change and historical biogeography of the barnacle *Semibalanus balanoides*. *Global Ecol Biogeogr.* 21(7):716–724.
Kim D, Langmead B, Salzberg SL. 2015. HISAT: a fast spliced aligner with low memory requirements. *Nat Methods.* 12(4):357–360.
Kimura M, Crow J. 1964. The number of alleles that can be maintained in a finite population. *Genetics* 49:725–738.
King JL. 1967. Continuously distributed factors affecting fitness. *Genetics* 55(3):483–492.
Kofler R, Orozco-terWengel P, De Maio N, Pandey RV, Nolte V, Futschik A, Kosiol C, Schlötterer C. 2011. PoPoolation: a toolbox for population genetic analysis of next generation sequencing data from pooled individuals. *PLoS One* 6(1):e15925.
Kofler R, Pandey RV, Schlötterer C. 2011. PoPoolation2: identifying differentiation between populations using sequencing of pooled DNA samples (Pool-Seq). *Bioinformatics* 27(24):3435–3436.
Lee Y, Lee Y, Lee J, Bang S, Hyun S, Kang J, Hong ST, Bae E, Kaang BK, Kim J. 2005. Pyrexia is a new thermal transient receptor potential channel endowing tolerance to high temperatures in *Drosophila melanogaster*. *Nat Genet.* 37(3):305–310.
Levene H. 1953. Genetic equilibrium when more than one ecological niche is available. *Am Nat.* 87(836):331–333.
Lewontin RC, Hubby JL. 1966. A molecular approach to the study of genic heterozygosity in natural populations. II. Amount of variation and degree of heterozygosity in natural populations of *Drosophila pseudoobscura*. *Genetics* 54:595–609.
Li H. 2013. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. *arXiv* abs/1303.3997.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R; 1000 Genome Project Data Processing Subgroup. 2009. The Sequence Alignment/Map format and SAMtools. *Bioinformatics* 25(16):2078–2079.
Machado HE, Bergland AO, Taylor R, Tilk S, Behrman E, Dyer K, Fabian DC, Flatt T, González J, Karasov TL. 2019. Broad geographic sampling reveals predictable and pervasive seasonal adaptation in *Drosophila*. bioRxiv (unpublished data). Available from: https://www.biorxiv.org/content/biorxiv/early/2019/10/11/337543.full.pdf (last accessed December 12, 2019).
Mackay TFC, Richards S, Stone EA, Barbadilla A, Ayroles JF, Zhu D, Casillas S, Han Y, Magwire MM, Cridland JM, et al. 2012. The *Drosophila melanogaster* Genetic Reference Panel. *Nature* 482(7384):173–178.
Maggs CA, Castilho R, Foltz D, Henzler C, Jolly MT, Kelly J, Olsen J, Perez KE, Stam W, Väinölä R, et al. 2008. Evaluating signatures of glacial refugia for North Atlantic benthic marine taxa. *Ecology* 89(Suppl 11):S108–S122.
Malinsky M, Matschiner M, Svardal H. 2020. Dsuite: fast D-statistics and related admixture evidence from VCF files. bioRxiv (unpublished data). Available from: https://www.biorxiv.org/content/10.1101/634472v2 (last accessed May 1, 2020).
Messer PW, Petrov DA. 2018. Population genomics of rapid adaptation by soft selective sweeps. *Trends Ecol Evol.* 28(11):659–669.
Metz EC, Palumbi SR. 1996. Positive selection and sequence rearrangements generate extensive polymorphism in the gamete recognition protein bindin. *Mol Biol Evol.* 13(2):397–406.
Milkman R. 1967. Heterosis as a major cause of heterozygosity in nature. *Genetics* 55(3):493–495.
Nunez JCB, Ehleringer RG, Ferrario DA, Rand DM. 2018. Population genomics and biogeography of the northern acorn barnacle (*Semibalanus balanoides*) using pooled sequencing approaches. In: Oleksiak MF, Rajora OP, editors. *Population genomics: marine organisms.* Cham (Switzerland): Springer. p. 139–168.
Nunez JCB, Flight PA, Neil KB, Rong S, Eriksson LA, Ferranti DA, Rosenblad MA, Blomberg A, Rand DM. 2020. Footprints of natural selection at the mannose-6-phosphate isomerase locus in barnacles. *Proc Natl Acad Sci USA.* 117(10):5376–5385.
Patton H, Hubbard A, Andreassen K, Auriac A, Whitehouse PL, Stroeven AP, Shackleton C, Winsborrow M, Heyman J, Hall AM. 2017. Deglaciation of the Eurasian ice sheet complex. *Quat Sci Rev.* 169:148–172.
Perez-Losada M, Harp M, Hoeg JT, Achituv Y, Jones D, Watanabe H, Crandall KA. 2008. The tempo and mode of barnacle evolution. *Mol Phylogenet Evol.* 46(1):328–346.
Pickrell JK, Pritchard JK. 2012. Inference of population splits and mixtures from genome-wide allele frequency data. *PLoS Genet.* 8(11):e1002967.
Pineda J, Starczak V, Stueckle TA. 2006. Timing of successful settlement: demonstration of a recruitment window in the barnacle *Semibalanus balanoides*. *Mar Ecol Prog Ser.* 320:233–237.
Pryszcz LP, Gabaldón T. 2016. Redundans: an assembly pipeline for highly heterogeneous genomes. *Nucleic Acids Res.* 44(12):e113.
Rand DM, Spacht PS, Sackton TB, Schmidt PS. 2002. Ecological genetics of Mpi and Gpi polymorphisms in the acorn barnacle and the spatial scale of neutral and non-neutral variation. *Integr Comp Biol.* 42(4):829–836.
Reich D, Thangaraj K, Patterson N, Price AL, Singh L. 2009. Reconstructing Indian population history. *Nature* 461(7263):899–904.
Ruddiman WF, McIntyre A. 1981. The North Atlantic Ocean during the last deglaciation. *Palaeogeogr Palaeoclimatol Palaeoecol.* 35:145–214.
Sanford E, Menge BA. 2001. Spatial and temporal variation in barnacle growth in a coastal upwelling system. *Mar Ecol Prog Ser.* 209:143–157.
Schiffels S, Durbin R. 2014. Inferring human population size and separation history from multiple genome sequences. *Nat Genet.* 46(8):919–925.
Schlötterer C, Tobler R, Kofler R, Nolte V. 2014. Sequencing pools of individuals: mining genome-wide polymorphism data without big funding. *Nat Rev Genet.* 15(11):749–763.
Schmidt PS, Bertness MD, Rand DM. 2000. Environmental heterogeneity and balancing selection in the acorn barnacle *Semibalanus balanoides*. *Proc R Soc Lond B.* 267(1441):379–384.
Schmidt PS, Rand DM. 1999. Intertidal microhabitat and selection at Mpi: interlocus contrasts in the northern acorn barnacle, *Semibalanus balanoides*. *Evolution* 53(1):135–146.
Schmidt PS, Rand DM. 2001. Adaptive maintenance of genetic polymorphism in an intertidal barnacle: habitat- and life-stage-specific survival of Mpi genotypes. *Evolution* 55(7):1336–1346.
Schmidt PS, Serrão EA, Pearson GA, Riginos C, Rawson PD, Hilbish TJ, Brawley SH, Trussell GC, Carrington E, Wethey DS, et al. 2008. Ecological genetics in the North Atlantic: environmental gradients and adaptation at specific loci. *Ecology* 89(Suppl 11):S91–S107.
Stanke M, Waack S. 2003. Gene prediction with a hidden Markov model and a new intron submodel. *Bioinformatics* 19(Suppl 2):i215–i225.
Sundell K, Wrange AL, Jonsson PR, Blomberg A. 2019. Osmoregulation in barnacles: an evolutionary perspective of potential mechanisms and future research directions. *Front Physiol.* 10:877.
Sved JA, Reed TE, Bodmer WF. 1967. The number of balanced polymorphisms that can be maintained in a natural population. *Genetics* 55:469–481.
Tracey WD, Wilson RI, Laurent G, Benzer S. 2003. painless, a *Drosophila* gene essential for nociception. *Cell* 113(2):261–273.
Uyenoyama MK. 2005. Evolution under tight linkage to mating type. *New Phytol* 165(1):63–70.
Väinölä R. 2003. Repeated trans-Arctic invasions in littoral bivalves: molecular biogeography of the *Macoma balthica* complex. *Mar Biol.* 143:935–944.
Veliz D, Bourget E, Bernatchez L. 2004. Regional variation in the spatial scale of selection at MPI* and GPI* in the acorn barnacle *Semibalanus balanoides* (Crustacea). *J Evol Biol.* 17(5):953–966.
Veliz D, Duchesne P, Bourget E, Bernatchez L. 2006. Genetic evidence for kin aggregation in the intertidal acorn barnacle (*Semibalanus balanoides*). *Mol Ecol.* 15(13):4193–4202.
Vermeij GJ. 1991. Anatomy of an invasion: the trans-Arctic interchange. *Paleobiology* 17(3):281–307.
Wares JP, Cunningham CW. 2001. Phylogeography and historical ecology of the North Atlantic intertidal. *Evolution* 55(12):2455–2469.
Watt WB, Dean AM. 2000. Molecular-functional studies of adaptive genetic variation in prokaryotes and eukaryotes. *Annu Rev Genet.* 34(1):593–622.
Williams G. 1966. Adaptation and natural selection: a critique of some current evolutionary thought. Princeton (NJ): Princeton University Press.
Wittmann MJ, Bergland AO, Feldman MW, Schmidt PS, Petrov DA. 2017. Seasonally fluctuating selection can maintain polymorphism at many loci via segregation lift. *Proc Natl Acad Sci U S A.* 114(46):E9932–E9941.
Wray GA. 2007. The evolutionary significance of cis-regulatory mutations. *Nat Rev Genet.* 8(3):206–216.
Wu Q, Han TS, Chen X, Chen JF, Zou YP, Li ZW, Xu YC, Guo YL. 2017. Long-term balancing selection contributes to adaptation in *Arabidopsis* and its relatives. *Genome Biol.* 18(1):217.
Xu SY, Cang CL, Liu XF, Peng YQ, Ye YZ, Zhao ZQ, Guo AK. 2006. Thermal nociception in adult *Drosophila*: behavioral characterization and the role of the painless gene. *Genes Brain Behav.* 5(8):602–613.
Ye C, Hill CM, Wu S, Ruan J, Ma ZS. 2016. DBG2OLC: efficient assembly of large genomes using long erroneous reads of the third generation sequencing technologies. *Sci Rep.* 6:31900.
JUDGMENTS - DOUBLE JEOPARDY - RES JUDICATA - EFFECT OF PRIOR CONVICTION OR ACQUITTAL ON SUBSEQUENT SUIT FOR STATUTORY PENALTY OR FORFEITURE
Edward W. Rothe S.Ed.
University of Michigan Law School
Recommended Citation
Edward W. Rothe S.Ed., JUDGMENTS - DOUBLE JEOPARDY - RES JUDICATA - EFFECT OF PRIOR CONVICTION OR ACQUITTAL ON SUBSEQUENT SUIT FOR STATUTORY PENALTY OR FORFEITURE, 48 Mich. L. Rev. 1137. Available at: https://repository.law.umich.edu/mlr/vol48/iss8/7
Judgments—Double Jeopardy—Res Judicata—Effect of Prior Conviction or Acquittal on Subsequent Suit for Statutory Penalty or Forfeiture—The case of *United States v. One De Soto Sedan* has again focused attention on some of the perplexing problems raised by the statutory imposition of both criminal and civil sanctions for the same wrongful act. The court held that an acquittal in a criminal prosecution for possessing liquor on which no federal tax had been paid was a bar to a civil in rem proceeding to forfeit claimant's car as having been used in the removal, deposit and concealment of the same liquor with intent to defraud the United States of taxes. Since the two proceedings involved the same parties and substantially the same issues, the authority of *Coffey v. United States* was controlling. The *Coffey* case had decided that even though the forfeiture suit is
---
1 (D.C. N.C. 1949) 85 F. Supp. 245.
2 116 U.S. 436, 6 S.Ct. 437 (1886).
a proceeding in rem and civil in form, it is so far criminal in its essential nature that the government cannot sue to enforce a statutory forfeiture after defendant has been acquitted in a prior criminal proceeding to punish his acts as a crime, when the forfeiture is based upon those same acts.
If the prior acquittal is given the effect of a bar to the forfeiture suit, it must be because the doctrine of former jeopardy or the principles of res judicata can be relied on in the forfeiture suit. It is the purpose of this discussion to examine whether and how far res judicata or former jeopardy principles can properly be applied as between a criminal prosecution and a suit to enforce a statutory penalty or forfeiture for the same act or acts. There is a technical distinction between a statutory penalty and a statutory forfeiture, the former being the imposition of a duty to pay money as punishment (though not a fine), while the latter is the process by which, as punishment, one loses his right to property. The two terms will be used interchangeably in this comment except where otherwise indicated. This comment will deal, primarily, with federal law, since that is where the problem assumes its greatest importance and has been most fully treated.
I. The Operation of Double Jeopardy
The double jeopardy clause found in most state constitutions as well as in the Federal Constitution protects an accused from two punishments for the same offense. In the Federal Constitution (Fifth Amendment) the clause is worded: "nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb." By a liberal interpretation, the protection extends to all indictable offenses, including misdemeanors.\(^3\) But, historically, except where a penalty is incidental to conviction for a crime, statutory penalties imposed as a result of a wrongful act are almost always enforced in a proceeding civil in form.\(^4\) For example, the government, suing in its fiscal capacity in an action of debt or libel,\(^5\) need not prove its allegations beyond a reasonable doubt,\(^6\) and may have a verdict directed in its favor;\(^7\) while
\(^3\) 1 Bishop, Criminal Law, 9th ed., §990 (1923); 24 Minn. L. Rev. 522 (1940).
\(^4\) The Palmyra, 12 Wheat. (25 U.S.) 1 (1827); 1 Bishop, Criminal Law, 9th ed., §32 (1923); 51 Harv. L. Rev. 1092 (1938); 40 Yale L. J. 1319 (1931); 25 C.J., Penalties, §§80, 81, 82 (1921); 37 C.J.S., Forfeitures, §2 (1943). See historical analysis in Stout v. State, 36 Okla. 744, 130 P. 553 (1913).
\(^5\) Pettis v. Dixon, Kirby, (Conn.) 179 (1786); The Palmyra, 12 Wheat. (25 U.S.) 1 (1827).
\(^6\) Two Ford Coupé Autos v. United States, (C.C.A. 5th, 1931) 53 F. (2d) 187.
\(^7\) Hepner v. United States, 213 U.S. 103, 29 S.Ct. 474 (1909).
the defendant has no right of confrontation.\textsuperscript{8} Accordingly, many courts have classified penalty suits as civil in form and remedial in nature, and since the double jeopardy provision applies only in the case of two punitive actions instituted by indictment, the plea of former jeopardy is not available to the defendant in the penalty suit who has been acquitted in a prior criminal prosecution.\textsuperscript{9}
On the other hand, courts desirous of according double jeopardy protection to the defendant have followed the test laid down in \textit{Huntington v. Attrill}\textsuperscript{10} and have inquired whether the penalty is “...in its essential character and effect, a punishment of an offense against the public, or a grant of a civil right to a private person.” Perceiving that the primary purpose of the penalty suit is punitive rather than remedial, these courts have labeled it “quasi-criminal”; and regardless of the fact that the suit is civil in form, it is held to be so far criminal that defendant may not be compelled to testify against himself,\textsuperscript{11} is entitled to a day in court with a jury trial (where none would otherwise be required),\textsuperscript{12} and may enter a plea of double jeopardy.\textsuperscript{13}
It is apparent that, because the penalty or forfeiture suit partakes of both traditionally civil and criminal characteristics,\textsuperscript{14} the problem is one of classifying the proceeding as civil or criminal for double jeopardy purposes. It is equally apparent that courts can plausibly classify the proceeding as either civil or criminal, depending on whether the court favors a policy of promoting effective law enforcement or a policy of protecting the harassed defendant. If the
\textsuperscript{8} United States v. Zucker, 161 U.S. 475, 16 S.Ct. 641 (1896).
\textsuperscript{9} 1 Bishop, Criminal Law, 9th ed., §§32 and 1067 (1923); 11 L.R.A. (n.s.) 667 (1908); State v. Roach, 83 Kan. 606, 112 P. 150 (1910); Helvering v. Mitchell, 303 U.S. 391, 58 S.Ct. 630 (1938) (penalty of 50% of income tax deficiency due to fraud with intent to evade taxes held remedial—to protect the revenue and reimburse the government for expenses of investigation and any loss due to taxpayer’s fraud); United States ex rel. Marcus v. Hess, 317 U.S. 537, 63 S.Ct. 379 (1943) (purpose of a $2,000 penalty plus double damages for fraudulent and collusive bidding on government contracts held to be remedial, not punitive—simply to reimburse government for money lost through fraud); United States v. Physic, (2d Cir. 1949) 175 F. (2d) 338 (proceeding to forfeit claimant’s auto for use in transporting heroin on which no tax had been paid held remedial—purpose being to reimburse government for lost taxes).
\textsuperscript{10} 146 U.S. 657 at 683, 13 S.Ct. 224 (1892).
\textsuperscript{11} Boyd v. United States, 116 U.S. 616, 6 S.Ct. 524 (1886); Lees v. United States, 150 U.S. 476, 14 S.Ct. 163 (1893).
\textsuperscript{12} Lipke v. Lederer, 259 U.S. 557, 42 S.Ct. 549 (1922).
\textsuperscript{13} Chouteau v. United States, 102 U.S. 603, 26 L.Ed. 246 (1880) (suit for double tax on liquor unlawfully removed from a distillery held criminal in substance though civil in form); United States v. La Franca, 282 U.S. 568, 51 S.Ct. 278 (1931) (suit to recover double tax plus fixed sum for non-payment of retail liquor dealer’s taxes held criminal in nature); McKee v. United States, (C.C.Mo. 1887) 4 Dill. 128 (suit to recover double the taxes lost through defendant’s fraud held criminal in nature).
\textsuperscript{14} See 51 Harv. L. Rev. 1092 (1938).
objective guide to classification is, as is so often asserted, whether the purpose of the proceeding is remedial or punitive, it is submitted that penalty and forfeiture suits should be labeled criminal in their essential nature. As Justice Frankfurter pointed out in his concurring opinion in *United States ex rel. Marcus v. Hess*,\(^{15}\) if the penalty were truly remedial the defendant should be allowed to show that the value of the property to be forfeited or sum to be paid will exceed any reasonable compensation figure.\(^{16}\) Since this is not permitted, the proceeding must be punitive in nature and therefore criminal.
Even if the penalty suit is labeled criminal, there remains the further problem of whether the same offense is involved as was prosecuted in the antecedent criminal proceeding. This resolves itself into a question of how far the legislature may go in carving out two or more offenses from the same acts or transaction. It often happens that the criminal punishment and the penalty are imposed by two entirely different statutes which are in no logical way connected with each other. Perhaps the wording is different or different elements are prescribed. In many instances the statutory pattern resembles a patchwork quilt. At any rate, it is clear that the double jeopardy clause in some way limits the power of the legislature to split criminal causes of action. The test most often applied to determine whether two statutes in effect attempt to punish the same offense is the "same evidence" test. If the same evidence will sustain a conviction under either statute, the offenses are identical; if not, the mere fact that they arise out of the same
\(^{15}\) 317 U.S. 537, 63 S.Ct. 379 (1943).
\(^{16}\) In many cases where the purpose of the government's suit was clearly remedial the courts have refused to classify the proceeding as criminal for double jeopardy purposes even though the acts sued for would in fact constitute a crime. See Stone v. United States, 167 U.S. 178, 17 S.Ct. 778 (1897) (suit to recover the reasonable value of timber cut on United States' land); United States v. Schneider, (C.C. Ore. 1888) 35 F. 107 (suit to recover wholesale liquor dealer's tax); Violette v. Walsh, (C.C.A. 9th, 1922) 282 F. 582 (suit to recover tax on manufacture of unlawfully transported liquor); Ferroni v. United States, (C.C.A. 7th, 1931) 53 F. (2d) 1013 (similar facts); United States v. Glidden Co., (C.C.A. 6th, 1941) 119 F. (2d) 235 (suit on bond given to insure proper use of denatured alcohol for industrial purposes); Murphy v. United States, 272 U.S. 630, 47 S.Ct. 218 (1926), noted in 13 Va. L. Rev. 410 (1927) (suit to enjoin maintenance of a nuisance, viz., a place where liquor was manufactured); United States v. Donaldson-Schultz Co., (C.C.A. 4th, 1906) 148 F. 581 (suit to enjoin obstruction of a navigable stream); United States v. U.S. Gypsum Co., (D.C. D.C. 1943) 51 F. Supp. 613 (suit to enjoin monopolistic practices forbidden by the Sherman Anti-Trust Act). Cf. United States v. Salen, (D.C. N.Y. 1917) 244 F. 296 (suit to collect duties lost through defendant's alleged violation of the customs law held barred by defendant's prior acquittal in a criminal prosecution for violation of customs law based on the same acts); State v. Graffenreid, 226 Ala. 169, 146 S. 531 (1933) (impeachment of a state's attorney for moral turpitude held a criminal proceeding for double jeopardy purposes even though the usual theory is that a suit to disbar an attorney or revoke a physician's license is a civil suit to protect the public from practitioners of bad character. Cf. State v. Lewis, 164 Wis. 363, 159 N.W. 746 (1916).
transaction will not be controlling where Congress has prescribed two offenses.\textsuperscript{17} Accordingly, in \textit{Albrecht v. United States}\textsuperscript{18} it was held that a prosecution for selling liquor was not barred by a previous conviction for possessing the same liquor, for one may possess without selling (and vice versa) and the offenses were made distinct by Congress.\textsuperscript{19}
However, it should be noted that some courts have favored the "same transaction" test of whether the offenses are the same. For example, it has been held that if the possession now charged was only that possession at the time of sale which is necessarily incident thereto, there is but one offense. The state must elect to proceed either for "possession" or "sale" and conviction of sale bars a prosecution for possession.\textsuperscript{20}
Having classified the penalty proceeding as criminal in nature, and having decided that the offense charged is the same as that of which there has been a prior conviction or acquittal, it must still be asked whether the penalty suit is a second attempt to punish that offense. It was early held that it was competent for the legislature to subject any particular offender both to a criminal prosecution (for fine and/or imprisonment) and to a civil suit (to recover a penalty or effect a forfeiture). The theory was that this is but one punishment enforceable in two proceedings—the criminal and civil sanctions being but part and parcel of the same punishment.\textsuperscript{21}
The "comprehensive punishment" theory works out so long as the penalty suit is considered civil in nature; but when it is labeled criminal the double jeopardy clause may operate to limit the power of Congress to impose cumulative punishments. In effect there are two "prosecu-
\textsuperscript{17} Morgan v. Devine, 237 U.S. 632, 35 S.Ct. 712 (1915); 2 Freeman, Judgments, 5th ed., 1370 (1925); 1 Bishop, Criminal Law, 9th ed., 776 (1923); comment to §5, A.L.I., Administration of the Criminal Law (Proposed Final Draft, 1935) (containing citations of many cases to support a comprehensive analysis of the whole "same offense" problem); 31 Col. L. Rev. 291 (1931); 32 Mich. L. Rev. 512 (1934); 22 Minn. L. Rev. 522, 546 ff. (1938).
\textsuperscript{18} 273 U.S. 1, 47 S.Ct. 250 (1927).
\textsuperscript{19} Accord: Bynum v. State, 28 Ala. App. 439, 186 S. 588 (1939) (unlawful transportation and unlawful possession of liquor held distinct offenses).
\textsuperscript{20} Newton v. Commonwealth, 198 Ky. 707, 249 S.W. 1017 (1923). Accord: Savage v. State, 18 Ala. App. 299, 92 S. 19 (1921); Phillips v. State, 109 Tex. Cr. 523, 4 S.W. (2d) 1056 (1928); Port Gardner Investment Co. v. United States, 272 U.S. 564, 47 S.Ct. 165 (1926); Commercial Credit Co. v. United States, 276 U.S. 226, 48 S.Ct. 232 (1928).
\textsuperscript{21} People v. Stevens, 13 Wend. (N.Y.) 341 (1835); In re Leszynsky, (C.C. N.Y., 1879) 16 Blatchf. 9, 15 F. Cas. No. 8279; United States v. Mt. Clemens Beverage Co., (D.C. Mich. 1927) 23 F. (2d) 885. See criticism of this view in Stout v. State, 36 Okla. 744, 130 P. 553 (1913).
tions" to punish one offense.\textsuperscript{22} Justice Frankfurter recognized this problem in his concurring opinion in \textit{United States ex rel. Marcus v. Hess}.\textsuperscript{23} He declared that this vague scheme of classifying penalty suits as civil or criminal would not adequately safeguard those human rights for the protection of which the double jeopardy clause was inserted. However, even though the defendant was being twice punished for the same offense, this peculiar situation should not be considered within the scope of the double jeopardy clause. The legislature had simply prescribed, in advance, two sanctions for the same conduct enforceable in two separate proceedings. Plausibility is given to this theory by the traditional practice of enforcing penalties in civil proceedings, so that it is not illogical for Congress to shape its punishments according to existing procedures.
The view that there can be no double jeopardy since two proceedings are required to enforce but one punishment has been somewhat modified by several cases in the federal courts. In \textit{United States v. Chouteau} and its companion case, \textit{United States v. Ulrici},\textsuperscript{24} it was held that a suit to recover double the normal tax on liquor unlawfully removed from a distillery was barred by a compromise of the indictment or a conviction for removing the same liquor from the distillery without paying taxes. This was so even though the statute specifically provided for fine, imprisonment and double tax for such acts.\textsuperscript{25} Again, in \textit{United States v. McKee},\textsuperscript{26} it was held that double jeopardy prohibited a civil suit to recover double the taxes lost through defendant's fraud because he had previously been convicted of conspiracy to defraud the government of taxes. Implicit in these holdings are the assumptions that the penalty suit is criminal in nature, the offenses are the same, and the penalty suit is an attempt to impose a second punishment for that offense. Although the validity of one or more of these assumptions may be seriously questioned, the result is not illogical for a court eager to protect an accused by a liberal interpretation of the double jeopardy provision.
It is unfortunate that later courts faced such precedents as the \textit{McKee} and \textit{Chouteau} cases without an adequate understanding of their
\textsuperscript{22} "But an action to recover a penalty for an act declared to be a crime is, in its nature, a punitive proceeding, although it take the form of a civil action; and the word 'prosecution' is not inapt to describe such an action." \textit{United States v. La Franca}, 282 U.S. 568 at 575, 51 S.Ct. 278 (1931).
\textsuperscript{23} 317 U.S. 537, 63 S.Ct. 379 (1943).
\textsuperscript{24} 102 U.S. 603 and 612, 26 L. Ed. 246 (1880).
\textsuperscript{25} \textit{Accord}: \textit{United States v. La Franca}, 282 U.S. 568, 51 S.Ct. 278 (1931) (conviction for unlawful sale of liquor bars a suit to recover double the retail liquor dealer's taxes).
\textsuperscript{26} (C.C. Mo. 1887) 4 Dill. 128.
underlying theory and policy. For instead of merely challenging the validity of the assumptions in those cases, too many courts desirous of promoting effective law enforcement sought to work out tenuous distinctions and thereby further confused the law on this subject. For example, in *United States v. Three Copper Stills* the court held that a prior conviction for removal and concealment of non-tax-paid liquor was no bar to a suit to forfeit the same liquor for not having revenue stamps on the casks. The reasoning of the court was that the liquor itself is the offender in an in rem suit and so the owner is not placed in jeopardy at all. By relying on this neat little technicality the court was able to distinguish the *McKee* case, when it would have been perfectly simple to hold that the forfeiture proceeding was merely a necessary mode of enforcing part of a single comprehensive punishment. There is ample authority to sustain the proposition that the owner is the real party interested in a forfeiture suit and is therefore placed in jeopardy as if there were an indictment against him personally.
Another device used to avoid the application of double jeopardy to a forfeiture suit is to find that the parties are different. For example, where the husband was acquitted in the criminal proceeding, his wife, the record owner of the car, could not plead the acquittal in bar.
It has been suggested that if the double jeopardy clause prohibits the imposition of comprehensive punishment for the same offense, a statute which attempts to do so is unconstitutional. In the first place, it should be observed that most courts have accepted the
27 (D.C. Ky. 1890) 47 F. 495.
28 *Accord:* United States v. One Machine for Corking Bottles, etc., (D.C. Wash. 1920) 267 F. 501; Various Items of Personal Property v. United States, 282 U.S. 577, 51 S.Ct. 282 (1931) (conviction for conspiracy to manufacture liquor unlawfully no bar to suit to forfeit still for use with intent to defraud government of taxes). Cf. United States v. La Franca, 282 U.S. 568, 51 S.Ct. 278 (1931) (double jeopardy bars a suit for penalty taxes after conviction for same acts. Opinion written by same justice who delivered opinion in Various Items etc. v. United States), noted in 40 Yale L. J. 1319 (1931), 29 Mich. L. Rev. 930 (1931).
29 Boyd v. United States, 116 U.S. 616, 6 S.Ct. 524 (1886); Coffey v. United States, 116 U.S. 436, 6 S.Ct. 437 (1886); United States v. One Dodge Sedan, (C.C.A. 3rd, 1940) 113 F. (2d) 552 (expressly disapproving of Coffey v. United States in other respects); State v. Intoxicating Liquor, 72 Vt. 253, 47 A. 779 (1900). Cf. Stout v. State, 36 Okla. 744, 130 P. 553 (1913), apparently rejecting the two-suits-one-punishment theory in favor of the in rem-no jeopardy theory after a careful historical analysis of the double jeopardy protection.
30 United States v. One Dodge Sedan, (C.C.A. 3rd, 1940) 113 F. (2d) 552. *Accord:* United States v. One Pontiac Sedan, (C.C.A. 6th, 1939) 105 F. (2d) 149. See also United States v. Manufacturing Apparatus, Oleo etc. Co., (D.C. Colo. 1916) 240 F. 235 (acquittal of officer-shareholder no bar to suit to forfeit corporate property). *Contra:* United States v. One Distillery, (D.C. Cal. 1890) 43 F. 846.
31 Stout v. State, 36 Okla. 744 at 766, 130 P. 553 (1913); Murphy v. United States, 272 U.S. 630, 47 S.Ct. 218 (1926).
two-suits—one-punishment theory, which means that double jeopardy should have no part to play and no question of constitutionality is raised. But even if the theory is rejected, the courts have apparently met the problem by either forcing the government to elect between inconsistent procedures or holding the second proceeding barred by the acquittal or conviction in the first, thereby avoiding the constitutional issue.\textsuperscript{32}
It should be noted that there are cases apparently involving double jeopardy issues which will be treated below as res judicata cases. It is submitted that no discussion of double jeopardy is logical once the two-suits—one-punishment theory is accepted, but it is possible that the same results can be reached by applying the broader doctrine of res judicata.
II. The Operation of Res Judicata
Even when the penalty or forfeiture proceeding is classified as criminal in nature and the offense held to be the same as that involved in a prior criminal proceeding, if the court recognizes the two-suits—one-comprehensive-punishment theory, a plea of bar on double jeopardy grounds is of no avail. It remains to be seen whether res judicata can apply, particularly to aid the defendant who has been acquitted in the criminal prosecution.
Res judicata is mentioned in these penalty suits much less often than double jeopardy. Perhaps the main reason is that they are, for historical reasons, considered civil proceedings and therefore the quantum of proof necessary to sustain the government's burden will be less than in the criminal prosecution. Although the parties are the same and the issues substantially identical, an acquittal may be the result of a failure to convince the jury beyond a reasonable doubt. Such an acquittal cannot bar a civil suit where the government need display only a fair preponderance of the evidence.\textsuperscript{33}
\textsuperscript{32} United States v. One Distillery, (D.C. Cal. 1890) 43 F. 846; United States v. Torres, (D.C. Md. 1923) 291 F. 138; Port Gardner Investment Co. v. United States, 272 U.S. 564, 47 S.Ct. 165 (1926); National Surety Co. v. United States, (C.C.A. 9th, 1927) 17 F. (2d) 369; Commercial Credit Co. v. United States, 276 U.S. 226, 48 S.Ct. 232 (1928); State v. Graffenreid, 226 Ala. 169, 146 S. 531 (1933).
\textsuperscript{33} 2 Freeman, \textit{Judgments}, 5th ed., §656 (1925); 17 Corn. L.Q. 493 (1932); Stone v. United States, 167 U.S. 178 at 188, 17 S.Ct. 778 (1897); Murphy v. United States, 272 U.S. 630, 47 S.Ct. 218 (1926), noted in 13 Va. L. Rev. 410 (1927); Helvering v. Mitchell, 303 U.S. 391, 58 S.Ct. 630 (1938), noted in 22 Minn. L. Rev. 1054 (1938), 37 Mich. L. Rev. 647 (1939), 25 Va. L. Rev. 839 (1939); United States v. Schneider, (C.C. Ore. 1888) 35 F. 107; United States v. Donaldson-Schultz Co., (C.C.A. 4th, 1906) 148 F. 581; United States v. U.S. Gypsum Co., (D.C. D.C. 1943) 51 F. Supp. 613; State v. Roach, 83 Kan. 606 at 611, 112 P. 150 (1910).
Since the opinion of Justice Holmes in *United States v. Oppenheimer*\textsuperscript{34} it is not to be doubted that res judicata applies with full force and effect between criminal prosecutions as well as civil proceedings. So long as the penalty suit is held to be civil in essence as well as form, the difference in burdens of proof appears to raise an insurmountable barrier to the application of res judicata between the criminal and penalty proceedings. However, once the penalty suit is denominated criminal in nature, there is a real possibility of res judicata coming into play.\textsuperscript{35} At least a prior conviction should be conclusive on the defendant in the penalty or forfeiture suit, since the government has already proved its case beyond a reasonable doubt.\textsuperscript{36}
Perhaps the key to the role of res judicata as between a criminal proceeding and a penalty suit involving the same parties and the same offense is the case of *Coffey v. United States*.\textsuperscript{37} The defendant had been acquitted of attempting to defraud the government of liquor taxes. The statutory punishment for such an offense was fine, imprisonment and forfeiture of any distilling equipment used in the defrauding; and forfeiture could have been decreed as an incident of conviction. Defendant's prior acquittal was held a bar to a subsequent civil suit to forfeit his still. The court did not ignore the fact that the forfeiture suit was in rem and civil in form so that the burden of proof was different from that required in the criminal proceeding. "Nevertheless, the fact or act has been put in issue and determined against the United States; and all that is imposed by the statute, as a consequence of guilt, is a punishment therefor. There could be no new trial of the criminal prosecution after the acquittal in it; and a subsequent trial of the civil suit amounts to substantially the same thing. . . ."\textsuperscript{38} Thus the court apparently conceived of the civil suit as punitive in nature, to complete the punishment imposed by the statute. But the prior acquittal had involved a finding that the facts which were the basis of both the criminal and civil suits did not exist. "The facts cannot be again litigated between them [the same parties], as the basis of any
34 242 U.S. 85, 37 S.Ct. 68 (1916).
35 See United States v. Seattle Brewing and Malting Co., (D.C. Wash. 1905) 135 F. 597, holding that the government, having been defeated in the criminal suit cannot sue to enforce the provisions of a penal statute by forfeiture. However, the government could sue to recover any revenue lost by defendant's acts since that suit would be purely remedial and not punitive.
36 2 Freeman, Judgments, 5th ed., §657 (1925); State v. Intoxicating Liquor, 72 Vt. 253, 47 A. 779 (1900).
37 116 U.S. 436, 6 S.Ct. 437 (1886).
38 Id. at 443.
statutory punishment denounced as a consequence of the existence of the facts."\textsuperscript{39}
The language used by the Court in the Coffey case clearly indicates that it was thinking in terms of res judicata; and reliance was placed on Gelston v. Hoyt,\textsuperscript{40} a leading case on the subject of res judicata. However, the Court also mentions McKee v. United States, discussed supra, which, we have seen, refused to recognize that the penalty suit was merely to enforce part of a comprehensive punishment. It therefore becomes obvious that the decision is either a product of confused thinking or of a subtle blending of double jeopardy and res judicata theories. It is submitted that the latter is the true interpretation and that the case was correctly decided, although later courts and writers have attempted to classify the case as either a res judicata\textsuperscript{41} or a double jeopardy decision.\textsuperscript{42} The Court which decided the Coffey case was unusually sensitive to the policies underlying res judicata and double jeopardy, viz., to end litigation and to protect an accused from the danger of repeated punishment.\textsuperscript{43} And it is believed that the Coffey case was merely an attempt to solve the problems raised by the peculiar nature of statutory penalties and forfeitures in the light of those policies.
It is possible, as some courts have done, to limit the Coffey case to its peculiar facts. Since the forfeiture could have been decreed as a result of a conviction, the second suit to enforce forfeiture is clearly barred by acquittal on double jeopardy grounds.\textsuperscript{44} But it would seem that the case has much broader implications, and it has been recog-
39 Id. at 444.
40 3 Wheat. (16 U.S.) 246 (1818).
41 See United States v. Three Copper Stills, (D.C. Ky. 1890) 47 F. 495; dissent of Stephens, J. in United States v. U.S. Gypsum Co., (D.C. D.C. 1943) 51 F. Supp. 613 at 617; Von Moschzisker, "Res Judicata," 38 Yale L.J. 299 at 325, note 89 (1929); McLaren, "The Doctrine of Res Judicata as Applied to the Trial of Criminal Cases," 10 Wash. L. Rev. 198 (1935); 31 Col. L. Rev. 291 (1931).
42 See Stone v. United States, 167 U.S. 178, 17 S.Ct. 778 (1897); Helvering v. Mitchell, 303 U.S. 391, 58 S.Ct. 630 (1938); State v. Roach, 83 Kan. 606, 112 P. 150 (1910); 2 Freeman, Judgments, 5th ed., p. 1386 (1925); 47 Harv. L. Rev. 1438 (1934); 25 Va. L. Rev. 839 (1939).
43 See opinion of the same court in Boyd v. United States, 116 U.S. 616, 6 S.Ct. 524 (1886), an unusually liberal interpretation of the operation of the Bill of Rights in penalty suits.
44 See United States v. Donaldson-Schultz Co., (C.C.A. 4th, 1906) 148 F. 581; Various Items of Personal Property v. United States, 282 U.S. 577, 51 S.Ct. 282 (1931); State v. Roach, 83 Kan. 606, 112 P. 150 (1910); State v. Meek, 112 Iowa 338, 84 N.W. 3 (1900).
nized as controlling even where forfeiture was not made an incident of conviction by the statute.\textsuperscript{45}
Courts have been astute in devising ways to circumvent what they conceived to be the holding in the \textit{Coffey} case. For example, it has been held that there could be no bar where a different intent was necessary to support the criminal indictment than need be proved to sustain the civil charge;\textsuperscript{46} where the defendant in the criminal case and the claimant in the forfeiture suit are different parties;\textsuperscript{47} or where there was a prior conviction rather than an acquittal, as was involved in the \textit{Coffey} case.\textsuperscript{48}
Despite this whittling-away, the \textit{Coffey} case is still respectable authority in the federal courts.\textsuperscript{49} An interesting application of the doctrine of the case is seen in \textit{Chantangco v. Abaroa}.\textsuperscript{50} A Philippine statute provided in one section that a person is liable in a civil suit for any damages caused by his own negligence not punishable by law, and in another section that a person who is criminally liable for an act should also be liable in a civil suit for damages. It was held that defendant's acquittal in a criminal prosecution by the state for maliciously and unlawfully burning plaintiff's warehouse barred plaintiff's suit for damages on the authority of the \textit{Coffey} case. The court conceived that the right of action for damages was conferred only in case of criminal liability where the acts were denominated criminal. The compensation sought in the civil suit was, by statute, made part of the punishment attached to the offense of which defendant had been acquitted. It is submitted that the same interpretation should be given our penal statutes, so that the fine, imprisonment and penalty are considered as integral parts of a comprehensive scheme of punishment, though enforced in two suits of different character. There should be no right to enforce part of that punishment without a prior
\textsuperscript{45} United States v. One De Soto Sedan, (D.C. N.C. 1949) 85 F. Supp. 245; Sierra v. United States, (C.C.A. 1st, 1916) 233 F. 37; United States v. Salen, (D.C. N.Y. 1917) 244 F. 296.
\textsuperscript{46} Stone v. United States, 167 U.S. 178, 17 S.Ct. 778 (1897).
\textsuperscript{47} See note 30, supra.
\textsuperscript{48} United States v. Three Copper Stills, (D.C. Ky. 1890) 47 F. 495.
\textsuperscript{49} State courts have been reluctant to accept what they conceived to be the rule of the case. See State v. Roach, 83 Kan. 606, 112 P. 150 (1910); cases in 11 L.R.A. (n.s.) 667 (1908). However, see State v. Intoxicating Liquor, 72 Vt. 253, 47 A. 779 (1900) holding that an acquittal of keeping liquor with unlawful intent bars a suit to forfeit the liquor. The mere fact that one suit is criminal, the other civil does not prevent the application of res judicata.
\textsuperscript{50} 218 U.S. 476, 31 S.Ct. 34 (1910).
conviction (in which case the judgment would operate as an estoppel in the civil penalty or forfeiture suit). This, of course, assumes that the penalty suit is criminal in nature and that the same offense is involved in each proceeding.
Another interesting case applying the rule of the Coffey case is Chin Kee v. United States.\(^{51}\) When the government sought to deport him, the defendant produced the record of a prior habeas corpus hearing in which the court found him to be a native of the United States. Pending defendant's appeal from an adverse ruling in the deportation case, the government prosecuted defendant for forging the record, but he was acquitted. The court held this acquittal a bar to the deportation proceedings on the authority of the Coffey case. This decision highlights the res judicata nature of the Coffey case and indicates its possible broad application.
**Conclusion**
Cases involving penalties and forfeitures provide an excellent battleground for two conflicting policies. One declares that we must have effective law enforcement machinery; the other attempts to protect the accused from continued harassment and to settle once and for all those fact issues which have once been decided between the parties. The second policy has been split into two formalized doctrines, double jeopardy and res judicata, and the mechanics of neither have been sufficiently developed to meet the forces behind the first policy in this peculiar field of statutory penalties and forfeitures.
Recognizing the hybrid nature of penalties and forfeitures, consisting of both civil and criminal elements, the ideal would be to develop a hybrid mechanism to protect defendants, which itself would contain double jeopardy and res judicata elements. In fact, it would seem that the cases on this subject can be reconciled only if we view them as products of the two conflicting policies and as attempts by the courts to work out a satisfactory compromise without satisfactory tools.
The conflict of policies can be resolved early in a case by classifying the penalty as civil in nature, as well as in form, which is just what many courts have done, choosing the course of least resistance. But if the penalty is labeled criminal in nature because of its punitive purpose, double jeopardy applies to prevent a second punishment for the same offense, i.e., where the same evidence would sustain either the "criminal" or the "civil" charge. But even though the offense is
\(^{51}\) (D.C. Tex. 1912) 196 F. 74.
the same, if the fine and/or imprisonment and the penalty or forfeiture are viewed as merely two parts of a single punishment, necessarily enforced in different proceedings, then double jeopardy cannot apply. Principles of res judicata must then be invoked to bar a penalty suit after a prior acquittal and bind defendant after a prior conviction. It is submitted that res judicata, because of its broad nature, is capable of being stretched to meet the situation despite the technical barrier of the difference between the burden of proof in the criminal and the penalty suit (civil in form, but criminal in nature). In this way a compromise between the two fundamentally conflicting policies can be worked out.
Edward W. Rothe, S.Ed.
K. PIYACHOMKWAN, A. JARERAT, C. DULSAMPHAN,
C.G. OATES, K. SRIROTH
EFFECT OF PROCESSING ON CASSAVA STARCH QUALITY:
1. DRYING
Abstract
The long-term outlook for cassava starch is uncertain, despite the economic advantage afforded to this product by the recent decline in cassava price. The problems stem from a restricted portfolio of functional properties coupled with a final product of variable quality. The quality of extracted cassava starch depends on many factors, especially processing. One key problem area is the drying of the dewatered cake. In this study, it was shown that the properties of dried starch differed from those of its non-dried counterpart (cake). After drying, swelling power and solubility decreased; these changes were in line with those exhibited by heat-moisture treated starch prepared by incubating 25% moistened starch at 100°C for 16 hr. Dried starch had a higher peak temperature than its original cake but a lower pasting temperature, which contrasted with the effect of heat-moisture treatment. Dried starch from moist cake had a broader endothermic peak, indicated by a larger gelatinization temperature range and a lower peak height index, similar to heat-moisture treated starch. Despite apparent changes in functional properties during drying of cassava starch, the cause of the change is not entirely known. Generally, the changes reflect a hybrid of heat-moisture treatment and hydrothermal effects.
Introduction
Starch, unmodified as well as modified, has many properties that collectively contribute to its usefulness in a wide range of food and non-food products. The world market for starch destined for industrial use is continuously on the increase. This trend occurs despite the restricted range of crops from which starch is extracted on a commercial basis. The most important crops are potato, corn, wheat and tapioca. Corn is the main source of starch; of the total world starch production, about 83% is derived
K. Piyachomkwan, A. Jarerat, Cassava and Starch Technology Research Unit, National Center for Genetic Engineering and Biotechnology, Bangkok, Thailand; C. Dulsamphan, K. Sriroth, Department of Biotechnology, Faculty of Agro-Industry, Kasetsart University, Bangkok, Thailand; C.G. Oates, K. Sriroth, Kasetsart Agricultural and Agro-Industrial Product Improvement Institute, Kasetsart University, Bangkok, Thailand.
from corn, 7% from wheat, 6% from potato and 4% from cassava [6]. The dominant position of cornstarch is a reflection of the cost advantages of this crop. However, given the ready availability of cheap cornstarch, products are often adapted to facilitate incorporation of this starch in their formulation. The range of functional properties provided by cornstarch is on the increase as genetically modified corns are developed with specific functionality, and advances are made in starch modification technology. Cornstarch technology is therefore at a state of technological sophistication whereby the starch is tailored to the needs of the product rather than the product to the starch. The cornstarch industry has also responded to the need of the user industries for a product of high and consistent quality.
The cassava starch industry, in contrast, has not invested in variety development or modification for improved starch functionality. Globally, this industry is still struggling to deal with problems of starch quality variability. Given the chemical composition of cassava root, starch from this source should be more pure than cornstarch. Unfortunately, this is not necessarily the case. Further, the lack of by-products is an impediment to further income generation by the industry.
The portfolio of functional properties (quality) of cassava starch is highly variable between batches. Starch quality is influenced by many factors, starting with the process of starch synthesis during plant and root development through to inconsistencies in the starch extraction process. In terms of starch properties, there is a substantial genotype-by-environment (G × E) effect. The main environmental effects are mediated by the duration of plant growth and by rainfall immediately before harvest. Environmental effects are expressed by differences in the structural and the physicochemical properties of the starch granules deposited in roots [1, 12, 13, 16, 17]. Fluctuation in soil temperature also causes an alteration in starch properties [2, 7]. During the manufacturing process, mechanical grinding of fresh roots can lead to damage of starch granules and subsequent changes in water interactions and enzyme susceptibility. Extraction processes employing sulphur dioxide also lead to alteration in granule stability [18].
Modification of the granular structure and properties of starch can occur, accidentally or intentionally, during processing. In the starch extraction process the final stage is potentially problematic in terms of altering starch quality. The combination of high temperature and moisture can precipitate structural changes in the architecture of starch granules, known as “hydrothermal” treatment [5]. Starch granules in excess water, when subjected to sufficient heating, swell irreversibly, becoming water-soluble. This process is associated with a loss of granule integrity and birefringence, a process known as gelatinization. Two further thermal treatments, heating an aqueous suspension of starch granules at a temperature below that at which gelatinization occurs (“annealing” treatment) or heating moistened starch (water content less than 30%) at a higher temperature than that at which gelatinization occurs (“heat-moisture” treatment), do not result in complete loss of starch structure (starch gelatinization). Annealed and heat-moisture treated starch remains as discrete granules that are water-insoluble. Yet, modification of the granule structure and associated properties is evident. Annealed starch is characterized by an elevated gelatinization temperature and a reduced gelatinization temperature range [19]. Heat-moisture treatment also alters both structural and physicochemical properties of starch granules [4, 9, 11].
Extraction of cassava starch involves a dewatering stage consisting of a horizontal centrifugal basket running at a low speed of 800 to 900 rpm. Discharged starch cake is of high moisture content (35–40%; [16, 17]). Final moisture reduction of the moistened cake occurs in a flash dryer. The temperature of the flash dryer fluctuates, often in the range 160 to 180°C. Given the profound influence of heat and moisture on the starch granule structure, inconsistencies at the drying stage could be responsible for some of the quality variability in the final product. Strategies for eradicating this variability could involve either improvements in the dewatering process, such that the final cake has a lower moisture content, or improvements to the heating system.
This study is part of a larger project that is investigating the influence of processing on starch quality. This paper reports on an investigation to probe the effect of commercial drying on cassava starch properties. Quality of starch cake, before and after factory drying and at different levels of cake moisture, was investigated. Comparison was also made with heat-moisture treated starch.
**Materials and methods**
**Drying process**
A cassava starch factory situated close to a cassava-producing region in the Northeastern part of Thailand was chosen for the study. The factory selected had a production capacity of 200 tons cassava starch per day. No sulphur dioxide was used in the extraction process. Eight sets of locally-made dewatering centrifuges were installed to reduce starch cake moisture. Each centrifuge had a discharge capacity of 239.0±23 kg cake/cycle (10 minutes). The feeding rate of the cake to the dryer was 8 tons dry solid/hr. Starch cake was dried in a flash dryer at a temperature of 170°C. Samples were collected in pairs: cake before drying and its corresponding starch after drying. The time interval between sampling cake in the feeder and dried starch from the cyclone was determined by the dryer manufacturer to be about 10 seconds after feeding.
**Heat-moisture treatment in laboratory**
Cassava starch was extracted, in water, from fresh cassava roots and dried at 50°C. After sieving through a 90-µm screen, moisture was adjusted to 25% moisture content and samples were equilibrated overnight. One hundred grams of moistened
starch, sealed in bottles, was subjected to 100°C in a hot air oven for 16 hours [5]. After treatment, samples were unsealed, dried at 50°C and kept in a cool place.
**Analytical methods**
Moisture content of samples was determined by drying at 105°C to constant weight [3]. Starch content was determined by a polarimetry method [3]. Paste viscosity properties were investigated by a Rapid Visco Analyzer (RVA 4, Newport Scientific, Australia) according to Sriroth et al. [18]. Thermal analysis was determined by Differential Scanning Calorimeter (Perkin Elmer DSC 7, Norwalk, CT). The peak height index (PHI) is reported as the ratio of enthalpy ($\Delta H$) and the difference between peak and onset temperature ($T_p - T_o$) [10]. Swelling power and solubility at 85°C followed the method of Schoch [14]. Degree of hydrolysis of samples was measured using $\alpha$-amylase and amyloglucosidase, following the method of Wang *et al.* [20]. Reducing sugar was analyzed using the Somogyi-Nelson method [15] and total sugars by the method of Dubois *et al.* [8].
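Since several quantities reported below are simple derived values, a minimal Python sketch (ours, not part of the original methods) may clarify how they are obtained; the example inputs are the native-starch entries of Table 1 below.

```python
# Minimal sketch (ours): derived DSC/RVA quantities as defined above.
# Example inputs are the native-starch entries of Table 1.

def peak_height_index(enthalpy, t_peak, t_onset):
    """PHI = dH / (Tp - To), after Krueger et al. [10]."""
    return enthalpy / (t_peak - t_onset)

def breakdown(peak, trough):
    """Breakdown = peak viscosity - trough viscosity (RVU)."""
    return peak - trough

def setback(final, trough):
    """Setback = final viscosity - trough viscosity (RVU)."""
    return final - trough

print(round(peak_height_index(15.00, 70.95, 65.85), 2))  # 2.94, as in Table 1
print(breakdown(368, 135))  # 233; Table 1 reports 234 (rounding of raw viscosities)
print(setback(205, 135))    # 70;  Table 1 reports 71
```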
All data were statistically analyzed at the 95% confidence level using a Completely Randomized Design (STATGRAPHICS Plus Version 3.0, USA).
**Results and discussion**
Heat-moisture treatment is believed to cause changes to the physical order within starch granules. These changes do not visually affect the morphology of the granule but influence starch properties. Heat-moisture treated samples, compared to native starch, have a higher gelatinization temperature and a lower peak viscosity but a higher cold paste viscosity. Solubility and swelling power are lower [5, 9]. Alteration in the physical properties of cassava starch occurs when starch is moistened (25% moisture content) and kept under controlled heating conditions above the gelatinization temperature (>66°C). After subjecting cassava starch to heat-moisture treatment, solubility and swelling power were lower. This is similar to the response of wheat and potato starches to similar treatment. Gelatinization endotherms of heat-moisture treated cassava starch are broader; this is because the final peak temperature is elevated. Despite extension of the final peak temperature, the enthalpy of gelatinization was lower, and hence so was the PHI (Table 1). A change in the RVA paste viscosity profile of treated cassava starch is also evident. On heat-moisture treatment, starch paste viscosity was significantly decreased; peak viscosities of untreated and treated starch were 368 and 304 RVU, respectively. Yet, paste stability during heating was increased, indicated by a reduction in starch paste breakdown from 234 RVU for untreated starch to only 95 RVU for treated starch. Cold paste viscosity of treated starch was improved; final viscosity of untreated and treated starch was 205 and 330 RVU, respectively (Table 1). Changes in paste viscosity of heat-moisture treated cassava starch were similar to those reported in a previous study by Abraham (1993). The susceptibility of heat-moisture treated starch to enzymatic hydrolysis was also lower (Table 1).
**Table 1**
Change in starch property by heat-moisture treatment*.
| Property** | Native starch | Heat-moisture treated starch |
|------------|---------------|------------------------------|
| Swelling power at 85°C | 26.33 | 21.99 |
| % Solubility at 85°C | 48.71<sup>a</sup> | 20.79<sup>b</sup> |
| Gelatinization | | |
| - Onset temperature (°C) | 65.85<sup>b</sup> | 72.18<sup>a</sup> |
| - Peak temperature (°C) | 70.95<sup>b</sup> | 78.63<sup>a</sup> |
| - Gelatinization temperature range (°C) | 10.19 | 12.91 |
| - Peak height index (PHI) | 2.94<sup>a</sup> | 1.48<sup>b</sup> |
| - Enthalpy (J/g) | 15.00<sup>a</sup> | 9.52<sup>b</sup> |
| Paste viscosity | | |
| - Pasting temperature (°C) | 72.90<sup>b</sup> | 81.33<sup>a</sup> |
| - Peak viscosity (RVU) | 368<sup>a</sup> | 304<sup>b</sup> |
| - Trough viscosity (RVU) | 135<sup>b</sup> | 209<sup>a</sup> |
| - Final viscosity (RVU) | 205<sup>b</sup> | 330<sup>a</sup> |
| - Breakdown (RVU) | 234<sup>a</sup> | 95<sup>b</sup> |
| - Setback (RVU) | 71<sup>b</sup> | 121<sup>a</sup> |
| Degree of hydrolysis (%) | 41.65<sup>a</sup> | 33.63<sup>b</sup> |
*Moistened starch (25% moisture content) was kept completely sealed at 100°C for 16 hr.
**Values in each row with different letters are significantly different at p < 0.05.
**Table 2**
Swelling power* and %solubility*, at 85°C, of cassava starch obtained from cakes with different moisture contents after flash drying in cassava starch factory.
| Moisture content of cake** (%) | Swelling power, cake | Swelling power, starch | %Solubility, cake | %Solubility, starch |
|--------------------------------|----------------------|------------------------|-------------------|---------------------|
| 30.1-33.0 | 58.08<sup>a</sup> | 44.75<sup>b</sup> | 31.47<sup>a</sup> | 28.04<sup>b</sup> |
| 33.1-36.0 | 60.67<sup>a</sup> | 44.79<sup>b</sup> | 26.82<sup>a</sup> | 23.59<sup>b</sup> |
| 36.1-39.0 | 63.93<sup>a</sup> | 45.03<sup>b</sup> | 25.92<sup>a</sup> | 21.21<sup>b</sup> |
| 39.1-44.0 | 65.30<sup>a</sup> | 46.12<sup>b</sup> | 27.67<sup>a</sup> | 24.23<sup>b</sup> |
*Values in each row with different letters are significantly different at p < 0.05.
**n = 31.
In a cassava starch factory, hydrothermal induced changes may take place between the point at which moistened starch exits the dewatering centrifuge and the flash
dryer. In the present study, 73 sample pairs (cake and dried starch) were investigated for signs of heat-moisture treatment that may have occurred during the drying process. Care was taken to ensure that samples were collected only when the dryer temperature was around $172 \pm 2.0^\circ C$. The moisture of the cake varied from 30 to 44% and could be categorized into 4 levels: low moisture cake (30.1 to 33.0%), medium moisture cake (33.1 to 36.0%), high moisture cake (36.1 to 39.0%) and very high moisture cake (39.1 to 44.0%). Moisture of the dried starch samples was $10.90 \pm 0.96\%$, and starch content was $97.97 \pm 0.82\%$ for both cake and dried starch. Changes in starch properties due to possible hydrothermal effects were evident (Tables 2 to 4); these changes were expressed for certain starch properties and were dependent on cake moisture content. The changes were similar to those seen in heat-moisture treated starch produced in the laboratory. Dried starch from the cakes of different moisture content had significantly reduced swelling power (Table 2, Figure 1). Peak viscosity of dried starch was also significantly different compared to its original cake (except for the low moisture cake, Table 3). Surprisingly, the viscosity change of dried starch from the factory was inconsistent with that of the heat-moisture treated starch previously observed in the laboratory.
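The four-level categorization above is a simple binning rule; the following sketch (ours, with invented sample values and bin edges taken from the text) illustrates it.

```python
# Hypothetical sketch (ours) of the four-level cake-moisture categorization
# described above; bin edges come from the text, sample values are invented.
import bisect

EDGES = [33.0, 36.0, 39.0]          # upper edges of the first three classes (%)
LABELS = ["low (30.1-33.0%)", "medium (33.1-36.0%)",
          "high (36.1-39.0%)", "very high (39.1-44.0%)"]

def moisture_class(moisture_pct):
    if not 30.0 < moisture_pct <= 44.0:
        raise ValueError("cake moisture outside the observed 30-44% range")
    # bisect_left places values equal to an edge into the class below it
    return LABELS[bisect.bisect_left(EDGES, moisture_pct)]

for m in (31.2, 35.0, 38.6, 41.9):  # invented example values
    print(m, "->", moisture_class(m))
```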

**Fig. 1.** Swelling power of starch cakes with different moisture contents and their equivalent dried starch samples collected from cassava starch factories.
Heat treatment of moistened starch cake in the factory produced dried starch with increased peak and cold paste viscosity; the changes are not as clear as those found for swelling power (Figure 2), but are still significant. In contrast to the laboratory
results, dried starch from all moisture cakes exhibited a significant reduction in pasting temperature (Table 3). Changes in gelatinization of the cake and its equivalent starch were also evident. In accordance with pasting temperature, heat treatment resulted in a lower gelatinization temperature of dried starch relative to its original cake (Figure 3). Dried starch collected from the factory also had lower enthalpy than its original cake (Figure 4). The endothermic peak of gelatinization of dried starch was broader but of lower peak height than that of its original cake; the PHI of dried starch was thus lower than that of the cake (Table 4). Presumably, heat treatment during drying induces structural changes in starch granules thus affecting their gelatinization process.
**Table 3**
Paste viscosity* of cassava starch obtained from cakes with different moisture contents after flash drying in cassava starch factory.
| Moisture content of cake** (%) | Peak viscosity (RVU), cake | Peak viscosity (RVU), starch | Final viscosity (RVU), cake | Final viscosity (RVU), starch | Breakdown*** (RVU), cake | Breakdown*** (RVU), starch | Pasting temperature (°C), cake | Pasting temperature (°C), starch |
|--------------------------------|----------------------------|------------------------------|-----------------------------|-------------------------------|--------------------------|----------------------------|--------------------------------|----------------------------------|
| 30.1-33.0 | 394 | 393 | 217 | 223 | 240 | 239 | 69.60<sup>a</sup> | 68.71<sup>b</sup> |
| 33.1-36.0 | 390<sup>b</sup> | 397<sup>a</sup> | 227 | 231 | 240 | 243 | 68.74<sup>a</sup> | 68.17<sup>b</sup> |
| 36.1-39.0 | 387<sup>b</sup> | 400<sup>a</sup> | 220<sup>b</sup> | 229<sup>a</sup> | 242 | 247 | 68.68<sup>a</sup> | 68.05<sup>b</sup> |
| 39.1-44.0 | 375<sup>b</sup> | 393<sup>a</sup> | 215 | 221 | 234 | 242 | 68.69<sup>a</sup> | 67.97<sup>b</sup> |
*Values in each row with different letters are significantly different at p < 0.05.
**n = 73.
***Breakdown = Peak viscosity – Trough viscosity
**Table 4**
Thermal analysis* of cassava starch obtained from cakes with different moisture contents after flash drying in cassava starch factories.
| Moisture content of cake** (%) | Onset temperature (°C), cake | Onset temperature (°C), starch | Temperature range*** (°C), cake | Temperature range*** (°C), starch | Enthalpy (J/g), cake | Enthalpy (J/g), starch | Peak height index***, cake | Peak height index***, starch |
|--------------------------------|------------------------------|--------------------------------|----------------------------------|------------------------------------|----------------------|------------------------|----------------------------|------------------------------|
| 30.1-33.0 | 61.00 | 60.39 | 11.03 | 12.19 | 14.58 | 12.21 | 2.66 | 2.01 |
| 33.1-36.0 | 60.97 | 60.29 | 10.24 | 10.67 | 13.42 | 12.66 | 2.70<sup>a</sup> | 2.39<sup>b</sup> |
| 36.1-39.0 | 60.92<sup>a</sup> | 60.43<sup>b</sup> | 9.24 | 10.42 | 14.30<sup>a</sup> | 12.45<sup>b</sup> | 3.13<sup>a</sup> | 2.40<sup>b</sup> |
| 39.1-44.0 | 60.72 | 60.48 | 9.87 | 9.45 | 13.91 | 12.15 | 2.84 | 2.59 |
*Values in each row with different letters are significantly different at p < 0.05.
**n = 21.
*** Temperature range is reported as the difference between the final and onset temperature; peak height index (PHI) is reported as the ratio of enthalpy (ΔH) and the difference between peak and onset temperature (T<sub>p</sub>-T<sub>o</sub>).
Fig. 2. Peak viscosity (RVU), as determined by a Rapid Visco Analyzer, of starch cakes with different moisture contents and their equivalent dried starch samples collected from cassava starch factories.
Fig. 3. Onset temperature (°C), as determined by Differential Scanning Calorimeter, of starch cakes with different moisture contents and their equivalent dried starch samples collected from cassava starch factories.
Fig. 4. Enthalpy (J/g), as determined by Differential Scanning Calorimeter, of starch cakes with different moisture contents and their equivalent dried starch samples collected from cassava starch factories.
Conclusion
Drying is a critical step in the starch extraction process and may account for inconsistency in final product quality. In addition to the physical process of drying, when starch cake with 30-44% moisture content is subjected to heat treatment, changes in some of the functional properties occur. Yet, the apparent direction and magnitude of these changes during drying of cassava starch are not in agreement with the effects of heat-moisture treatment. It is suggested that cassava starch dried under factory conditions may undergo some form of hydrothermal treatment, which leads to alteration in the functional properties of the starch.
REFERENCES
[1] Asaoka M., Blanshard J.M.V., Rickard J.E.: J Sci. Food Agri., 59, 1992, 53.
[2] Asaoka M., Blanshard J.M.V., Rickard J.E.: Starch/Stärke, 43, 1991, 455.
[3] Association of Official Analytical Chemists (AOAC): Official Method of Analysis. 16th ed. The Association of Official Analytical Chemists, Virginia, 1995.
[4] Le Bail P., Bizot H., Buléon A.: Carbohydrate Polymers, 21, 1993, 99.
[5] Collado L. S., Corke H.: Food Chemistry, 65, 1999, 339.
[6] DeBaere H.: Starch/Stärke, 51, 1999, 189.
[7] Defloor I., Swennen R., Bokanga M., Delcour J. A.: J Sci. Food Agri., 76, 1998, 233.
[8] Dubois M., Gilles K. A., Hamilton J. K., Rebers P. A., Smith F.: Anal. Chem., 28, 1956, 350.
[9] Hoover R., Manuel H.: Food Research International, 29, 1996, 731.
[10] Krueger B. R., Knutson C. A., Inglett G. E., Walker C. E.: Journal of Food Science, 52(3), 1987, 715.
[11] Maruta I., Kurahashi Y., Takano R., Hayashi K., Yoshino Z., Komaki T., Hara S.: Starch/Stärke, 46, 1994, 177.
[12] Moorthy S.N., Ramanujam T.: Starch/Stärke, 38, 1986, 58.
[13] Santisopasri V., Kurotjanawong K., Chotineeranat S., Piyachomkwan K., Sriroth K., Oates C. G.: Industrial Crops and Products, 2000, Submitted.
[14] Schoch T.J.: Swelling power and solubility of granular starches. In R.L. Whistler, R.J. Smith, and J.N. BeMiller (Eds.). Method in Carbohydrate Chemistry Vol. 4. Academic Press, New York, 1964, pp. 106.
[15] Somogyi M.: J Biol. Chem., 195, 1952, 19.
[16] Sriroth K., Walapaitit S., Chollakup R., Chotineeranat S., Piyachomkwan K., Oates C. G.: Starch/Stärke, 51, 1999, 383.
[17] Sriroth K., Santisopasri V., Petchalanuwat C., Kurotjanawong K., Piyachomkwan K., Oates C. G.: Carbohydrate Polymers, 38, 1999, 161.
[18] Sriroth K., Walapaitit S., Piyachomkwan K., Oates C. G.: Starch/Stärke, 50, 1998, 466.
[19] Wang W. J., Powell A. D., Oates C. G.: Carbohydrate Polymers, 33, 1997, 195.
[20] Wang W.J., Powell A. D., Oates C. G.: Carbohydrate Polymers, 25, 1995, 91.
**Effect of processing on cassava starch quality: 1. Drying**
**Summary**
Owing to the fall in the price of cassava starch, the long-term outlook for this starch is uncertain despite its many advantages. This stems from the limited number of significant functional properties of this starch and from their unfavourable changes during storage.
The quality of extracted cassava starch depends on many factors, above all on the way it is isolated. A key problem is the drying of the dewatered starch cake. The present study showed that the properties of dried and non-dried starch (cake) differed from each other. After drying, swelling power and solubility decreased. These changes were in line with those occurring when starch containing 25% moisture was held at 100°C for 16 hours. In contrast to the heat-moisture treatment of moistened starch, the dried starch had a higher peak temperature and, at the same time, a lower pasting temperature than the starch obtained from the cake. The starch from the cake exhibited a broader endothermic peak, indicating a wider gelatinization temperature range; at the same time this peak was lower, in which respect this starch resembled the product of heat-moisture treatment. The cause of the observed changes is not known.
On the role of the electric dipole moment in the diffraction of biomolecules at nanomechanical gratings
Christian Knobloch\textsuperscript{1}, Benjamin A. Stickler\textsuperscript{2}, Christian Brand\textsuperscript{1}, Michele Sclafani\textsuperscript{1}, Yigal Lilach\textsuperscript{3}, Thomas Juffmann\textsuperscript{1,***}, Ori Cheshnovsky\textsuperscript{3}, Klaus Hornberger\textsuperscript{2}, and Markus Arndt\textsuperscript{1,*}
Received 8 March 2016, accepted 27 June 2016
Published online 7 September 2016
We investigate effects of a permanent electric dipole moment on matter-wave diffraction at nanomechanical gratings. Specifically, the diffraction patterns of hypericin at ultra-thin carbonaceous diffraction masks are compared with those of a polar and a non-polar porphyrin derivative of similar mass and de Broglie wavelength. We present a theoretical analysis of the diffraction of a rotating dipole highlighting that small local electric charges in the material mask can strongly reduce the interference visibility. We discuss the relevance of this finding for single grating diffraction and multi-grating interferometry with biomolecules.
1 Introduction
Nanomechanical gratings [1] have played a major role in matter-wave interferometry since the first diffraction experiments with atoms. They were used to study a variety of atomic [2–5] and molecular systems [6], as well as weakly bound van der Waals clusters [2, 7]. Ring-shaped periodic structures in thin membranes also proved useful as Fresnel zone plates [8, 9]. Such experiments with a single diffraction element were soon complemented by closed interferometers [10–12] used to investigate the influence of decoherence [13–15], inertial forces [16], and the interaction with external fields [17, 18].
Material gratings are often regarded as universal since the periodic beam depletion is seemingly independent of the internal structure of the particle. This generality is, however, impaired by attractive interactions between the grating and the diffracted molecule, such as the Casimir-Polder interaction. The influence of van der Waals forces can be reduced by using gratings with a thickness of a few nanometers, or even masks made of single-layer graphene [19].
Yet, electric charges on the grating may still lead to phase averaging of the de Broglie wave. Earlier studies showed that electron waves above a metal may experience decoherence [20–23]. A broadening of the diffraction orders was observed for slow electrons diffracted at material gratings [24, 25]. For polar molecules, a dipole-charge interaction has also been seen to reduce the quantum fringe contrast in earlier interferometric deflection experiments [26].
In the present article we investigate effects of a permanent electric dipole moment on matter-wave diffraction at material gratings. This is of particular importance for quantum interference experiments with biomolecules, most of which come with a permanent electric dipole moment due to the functional groups required for molecular recognition. We find that the presence of residual charges on the grating mask can give rise to surprisingly effective contrast reduction caused by phase averaging due to the rotating dipole.
We start by demonstrating quantum diffraction of hypericin at a grating made of amorphous carbon. Hypericin is a naturally occurring antiviral, antidepressant and anti-inflammatory substance [27], discussed as a photosensitizer in photodynamic tumor therapy [28, 29]. We compare the molecular diffraction pattern of hypericin to that of a polar and a non-polar tetraphenylporphyrin derivative in order to assess the influence of its dipole moment. We then provide a model for the diffraction of polar particles at thin, charged gratings, accounting for the thermal rotation of the molecular dipole in the electrostatic field of these surface charges. Finally, we discuss...
A continuous desorption laser beam (421 nm, 62 mW) is focused to a spot with a waist of $1.6 \pm 0.1 \mu m$ at the inner surface of the coated window. The small size of the source ensures that the evaporated molecules exhibit a transverse coherence of around 5 $\mu m$ at the position of the grating, 1555 mm downstream. The molecular beam is collimated to 10 $\mu rad$ by two slits before it passes the nanomechanical grating.
Two different gratings were used: A first grating with a periodicity of 100 nm was milled into a 21 nm thick amorphous carbon membrane (TEMwindows) [30]. A second grating was milled into an 8 nm thick SiO$_2$ matrix with a periodicity of 160 nm (TedPella).
After the diffraction process the molecules propagate through vacuum before they are absorbed on a quartz plate 585 mm behind the grating. They are observed in laser-induced fluorescence. In different runs, each molecular species is optically excited by a laser that matches its absorption spectrum. TPP and MeOTPP are excited at 661 nm, while radiation at 532 nm is used for hypericin. The resulting fluorescence is collected by a 20-fold microscope objective and imaged onto a UV-enhanced CCD camera (LOT ORIEL iXon 885-KCS-VP).
3 Experimental results
In Fig. 3 a we show the molecular pattern obtained for hypericin diffracted at the carbon grating. Three dominant diffraction peaks can be observed which have an intensity maximum at a molecular velocity around 250 m/s. The height dependence in the separation of the diffraction orders is due to the fact that slower molecules fall deeper in the gravitational field of the earth before they hit the detector. Comparing this diffraction pattern to the one of TPP (Fig. 3 b), one observes that the signal of hypericin is diffusely broadened: the individual diffraction orders are wider than for TPP. This is illustrated by the intensity distributions at different velocities accompanying the diffraction images in Fig. 3.
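A back-of-envelope calculation (ours, not from the paper) makes this velocity dependence concrete; it uses the distances and TPP mass quoted in the text, while treating the whole flight as free fall and using the far-field order spacing $\lambda L_2/d$ are simplifying assumptions.

```python
# Back-of-envelope sketch (ours): de Broglie wavelength, first-order peak
# separation, and gravitational fall as functions of molecular velocity.
h, g, amu = 6.626e-34, 9.81, 1.6605e-27

m = 614.8 * amu          # TPP mass quoted in the text
d = 100e-9               # period of the carbon grating
L1, L2 = 1.555, 0.585    # source-grating and grating-screen distances (m)

for v in (150.0, 250.0, 350.0):
    lam = h / (m * v)                  # de Broglie wavelength
    sep = lam * L2 / d                 # first-order separation at the screen
    t = (L1 + L2) / v                  # total time of flight
    drop = 0.5 * g * t**2              # gravitational fall before detection
    print(f"v = {v:3.0f} m/s: lambda = {lam * 1e12:.1f} pm, "
          f"order separation = {sep * 1e6:.1f} um, fall = {drop * 1e3:.2f} mm")
```

Slower molecules thus land noticeably lower and show wider order separations, which is the fanned-out pattern described above.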
For TPP the width of the individual diffraction orders increases slightly with velocity while for hypericin the opposite trend is observed. Since TPP and hypericin are chemically rather different, we repeat the experiment with MeOTPP. The molecules TPP and MeOTPP are structurally very similar. The main difference, apart from a small mass difference (TPP: $m = 614.8$ amu, MeOTPP: $m = 672.8$ amu) is that MeOTPP has a permanent electric dipole moment of about 1.8 $D$ due to the benzoic acid methyl ester side group [31], as evaluated numerically. As shown in Fig. 3 c, we observe the broadening of the diffraction peaks also for MeOTPP. For the fast...
velocity class the full width at half maximum (FWHM) of the diffraction peaks is $9.7 \pm 0.1 \, \mu m$ for TPP and $14.5 \pm 0.2 \, \mu m$ for MeOTPP. As stated before, the diffraction orders of slow TPP molecules get narrower, which can be explained by their higher transversal coherence. For MeOTPP, however, their width increases from $14.5 \pm 0.2 \, \mu m$ to $20.0 \pm 0.3 \, \mu m$.
In order to test whether the visibility reduction is related to the interaction with the grating, we repeated the diffraction experiments for TPP and MeOTPP with a grating made of silicon dioxide ($\text{SiO}_2$) instead of amorphous carbon. This is motivated by their difference in the electric conductivity. The respective diffraction patterns and intensity distributions are shown in Fig. 4. The diffraction orders are spaced more closely, because the period of this grating is 160 nm compared to 100 nm for the carbon grating. Nevertheless, one can clearly distinguish the diffraction orders for TPP. For MeOTPP individual diffraction orders can no longer be identified and the observed signal resembles the envelope of the single slit diffraction pattern.
Note that the $\text{SiO}_2$ grating is much thinner than the carbon gratings (8 nm compared to 20 nm) and has a larger geometrical slit width (82 nm) compared to the carbon mask (55 nm). This suggests that van der Waals forces are not responsible for the observed contrast reduction. On the other hand, we observe a stronger population of higher order diffraction peaks in the case of $\text{SiO}_2$, which indicates a more pronounced particle-wall interaction in the case of the dielectric mask.
While our observations suggest that the visibility reduction is due to dipole-grating interactions, other dipole-related mechanisms are also conceivable that give rise to a beam broadening. One possible reason for a contrast reduction is a potential increase in the effective source size. However, both the measured laser focus and depleted molecular traces on the desorption window rule out this effect. Moreover, molecules with a permanent dipole moment might also diffuse more rapidly on the detector plate than non-polar particles. This was tested by depositing molecules through a narrow slit in front of the detector plate. The width of the transmitted...
we present a theory that evaluates the possible role of residual charges in the diffraction of polar molecules.
4 Theory
In order to determine the influence of grating charges on the interference pattern, we consider a polar symmetric-top molecule characterized by the three moments of inertia $I_1 = I_2 = I \neq I_3$ as well as an electric dipole moment $d_0$. We denote its center-of-mass (CM) position by $\mathbf{r}$ and its orientational degrees of freedom (DOFs) by $\Omega$. The molecule is thermally emitted with longitudinal velocity $v_z$ from a point-like source (located at $z=0$), diffracted from an infinitesimally thin, charged grating of transverse extension $\ell_x$ (at $z=L$) and finally detected at the screen (at $z=2L$). Denoting the grating direction as $x$, we can neglect the $y$-dependence since the extension of the grating in this direction exceeds the spatial coherence of the particle. Since the longitudinal kinetic energy exceeds the transverse kinetic energy and the average grating potential by orders of magnitude in a typical experiment, it can be described in the eikonal approximation [35]. Thus, the longitudinal CM coordinate $z = v_z t$ is effectively eliminated and the transverse state operator $\rho_{2L}$ at the detector is related to the transverse state operator $\rho_0$ at the source by successive application of the free time evolution from the source to the grating, the grating transformation $\hat{t}$ and the free time evolution to the detection screen, respectively. Tracing out the orientational DOFs then yields the interference pattern.
The grating transformation operator $\hat{t}$ maps the state $\rho_L$ in front of the grating onto the state $\rho'_L$ directly after the grating, $\rho_L \mapsto \rho'_L = \hat{t} \rho_L \hat{t}^\dagger$. The influence of the grating mask can be modeled with the help of the aperture function $|t(x)| \in \{0, 1\}$, which is unity within the grating slits and zero elsewhere [36]. On the other hand, the phase imprinted onto the molecular state while it is traversing the grating depends on the interaction potential between the grating and the molecule. For simplicity we assume that the dominant interaction is due to homogeneously distributed surface charges on the grating. They create an electric field which reaches out along the $z$-direction. Thus, the interaction potential is
$$V(x, z, \Omega) = -d_0 \mathbf{m}(\Omega) \cdot \mathbf{E}(x, z),$$
(1)
where $\mathbf{m}(\Omega)$ is the orientation of the molecular dipole moment and $\mathbf{E}(x, z)$ is the electric field due to the charges.
In a more realistic description of the interaction between molecule and grating, one would add the
dispersive Casimir-Polder interaction [34] to the potential (1), which is not included in the present model. The electrostatic field $E(x, z)$ can be calculated from Coulomb’s law applied to a thin grating with surface charge density $\sigma_0$, period $d$, opening fraction $f$ and width $\ell_x$ [37]. The resulting field is depicted in Fig. 5. For moderate charge densities the potential energy of the molecular dipole in the field is much lower than the mean rotational energy $k_B T$. Since the orientational DOFs are initially thermally distributed at a high temperature, the molecule rotates freely while traversing the field and we can utilize the free rotor approximation [38]. This means that the phase shift for each rotation state $|\ell m k\rangle$ of the molecule is obtained by integrating the expectation value of the potential energy in this rotation state along the straight eikonal trajectory [38]. For the case of a thin grating the explicit calculation shows that the phase shift takes the particularly simple form
$$\phi(x) = \Delta \phi \left[ \frac{x}{d} - \frac{f}{2} \, \text{tri} \left( \frac{x}{d} \right) \right],$$

(2)
where $\Delta \phi = (1 - f)d_0 \sigma_0 d / \hbar \varepsilon_0 v_z$ is the maximal phase difference between two neighboring slits. Here, we denote by $\text{tri}(x)$ a 1-periodic triangular wave with
$$\text{tri}(x) := \begin{cases}
2x/f & \text{for } 2|x|/f \leq 1, \\
[\text{sgn}(x) - 2x]/(1 - f) & \text{for } 2|x|/f > 1,
\end{cases}$$

(3)
for $x \in [-1/2, 1/2]$ and we take the center of the coordinate system to be the center of the grating. The resulting grating transformation operator can thus be written as
$$\hat{t} = |t(\hat{x})| \sum_{\ell=0}^{\infty} \sum_{m,k=-\ell}^{\ell} e^{i \phi(\hat{x}) Q_{\ell m k}} \otimes |\ell m k\rangle \langle \ell m k|,$$

(4)
where $Q_{\ell m k} = mk/\ell(\ell + 1)$ is the expectation value of $\cos \theta$ in the state $|\ell m k\rangle$ [39–41], with $\theta$ the angle between the dipole and the grating axis. Here we assumed for simplicity that the molecular dipole is aligned with the molecule’s symmetry axis. Extending the current model to the cases where the dipole is not aligned with the symmetry axis of the molecule is straightforward, although the resulting expressions will be more complicated [38].
The grating transformation (4) is the thermal average of angular momentum dependent grating transformations with angular momentum dependent phase shifts. The phase (2) consists of a linear and a periodic contribution, which compensate each other within each grating slit, so that the maximal phase difference between two neighboring slits is $\Delta \phi$. The periodic contribution $\text{tri}(x)$ to the eikonal phase (2) describes an alternating phase modulation of the matter-wave for each fixed expectation value $Q_{\ell m k}$, while the linear contribution describes a transverse momentum kick, experienced by the molecule due to its interaction with the charged grating. This momentum kick can shift the interference pattern either to the left or the right. In particular, for fixed $Q_{\ell m k}$ the transferred momentum is $(1 - f)d_0 \sigma_0 Q_{\ell m k}/v_z \varepsilon_0$ and its direction is determined by the quantum numbers $m$ and $k$. The average over the quantum numbers $\ell$, $m$ and $k$ then leads to phase averaging and thus the signal contrast is reduced. Relation (2) implies that the effect is relevant as soon as $\Delta \phi \simeq 2\pi$.
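The sketch below (our illustration, not the authors' code) implements the eikonal phase of Eq. (2) and estimates $\Delta\phi$; the surface charge density $\sigma_0$ is an assumed value, while the dipole moment, grating period, opening fraction and velocity are taken from the text.

```python
# Sketch (ours) of the eikonal phase of Eq. (2) and an order-of-magnitude
# estimate of delta_phi. sigma0 is an assumption; d0, d, f, vz are from the text.
import numpy as np

def tri(x, f):
    """1-periodic triangular wave of Eq. (3), via the principal cell [-1/2, 1/2)."""
    x = (np.asarray(x) + 0.5) % 1.0 - 0.5
    return np.where(2 * np.abs(x) / f <= 1, 2 * x / f, (np.sign(x) - 2 * x) / (1 - f))

def eikonal_phase(x, d, f, delta_phi):
    """phi(x) = delta_phi * [x/d - (f/2) tri(x/d)], Eq. (2)."""
    return delta_phi * (np.asarray(x) / d - 0.5 * f * tri(np.asarray(x) / d, f))

hbar, eps0 = 1.0546e-34, 8.854e-12
d0 = 1.8 * 3.33564e-30              # 1.8 D (MeOTPP, quoted above), in C m
sigma0 = 1e10 * 1.602e-19 / 1e-4    # assumed 1e10 elementary charges per cm^2, in C/m^2
d, f, vz = 100e-9, 0.55, 250.0      # carbon grating period, opening fraction; typical speed

delta_phi = (1 - f) * d0 * sigma0 * d / (hbar * eps0 * vz)
print(f"delta_phi = {delta_phi:.1f} rad = {delta_phi / (2 * np.pi):.1f} * 2pi")

step = eikonal_phase(d, d, f, delta_phi) - eikonal_phase(0.0, d, f, delta_phi)
print(f"phase advance per period: {step:.1f} rad (equals delta_phi)")
```

Even this assumed charge density, three orders of magnitude below the $10^{13}$ e/cm$^2$ reported for ion-irradiated membranes in Sect. 5, already exceeds the $\Delta \phi \simeq 2\pi$ threshold.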
In order to evaluate the interference pattern, one successively evaluates the free propagation from the source to the grating, the grating transformation and the flight from the grating to the screen. In the end, the orientational DOFs are traced out. Using that they are thermally distributed at a very high temperature, $T \gg \hbar^2 / \tilde{I} k_B$ with $\tilde{I}$ the dominant moment of inertia of the particle, we can replace the sum over the discrete values $Q_{\ell m k}$ by an integral over the continuous classical mean value $q = \langle \cos \theta \rangle \in [-1, 1]$ [38].
The thermal distribution $p_{\text{th}}(q)$ of $q$ gives the statistical weight of each phase shift $q \Delta \phi$. It is shown in the Appendix that
$$p_{\text{th}}(q) = \frac{1}{2} - \frac{1}{2} \sqrt{\frac{I}{I_3}} \left[ \frac{1}{\sqrt{1 - (1 - I/I_3)\, q^2}} + \ln \left( \frac{\left(1 + \sqrt{I/I_3}\right)|q|}{1 + \sqrt{1 - (1 - I/I_3)\, q^2}} \right) \right],$$

(5)
as depicted in Fig. 6. This distribution is independent of the temperature $T$ and it is a function of the ratio $I/I_3$ between the molecule’s two independent moments of inertia. For $I/I_3 \to 1/2$ (thin disc) the distribution is broad, while it approaches the $\delta$-distribution for $I/I_3 \to \infty$ (linear rotor).
The resulting interference pattern at the detection screen can then be written as
$$S(x) \propto \int_{-1}^{1} dq \, p_{\text{th}}(q) \left| \int_{-\infty}^{\infty} dx' \, |t(x')| \, e^{i \phi(x') q} \exp \left[ -i \, \frac{2\pi x'(x - x')}{d\, \Delta x} \right] \right|^2.$$
(6)
Here, $\Delta x = 2\pi \hbar L/Mv_z d$ is the distance between two neighboring diffraction peaks in the far field, i.e. for $\ell_x^2/\Delta x d \ll 1$. In the far field, the position space integration in Eq. (6) can be carried out explicitly and the signal for an $N$-slit grating is
$$S(x) \propto \text{sinc}^2 \left( \frac{\pi f x}{\Delta x} \right) \times \int_{-1}^{1} dq \, p_{th}(q) \frac{\sin^2 \left[ \pi N \left( x/\Delta x - q\Delta \phi/2\pi \right) \right]}{\sin^2 \left[ \pi (x/\Delta x - q\Delta \phi/2\pi) \right]}.$$
(7)
This expression shows that the charge-free $N$-slit interference pattern gets blurred by the distribution $p_{th}$. From this relation it is apparent that phase averaging can become important for $\Delta \phi \gtrsim 2\pi$.
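A direct numerical evaluation of Eq. (7) illustrates the predicted blurring. In the sketch below (ours, not the authors' code), the slit number $N$ and the moment-of-inertia ratio $I/I_3$ are illustrative assumptions, and positions are measured in units of $\Delta x$.

```python
# Sketch (ours) of the far-field signal of Eq. (7): the sinc^2 envelope times
# the N-slit factor blurred by p_th(q) of Eq. (5).
import numpy as np

def p_th(q, ratio):
    """Thermal distribution of q = <cos(theta)>, Eq. (5); ratio = I/I3.
    Vanishes at q = +-1 and has an integrable log divergence at q = 0."""
    root = np.sqrt(1.0 - (1.0 - ratio) * q**2)
    return 0.5 - 0.5 * np.sqrt(ratio) * (
        1.0 / root + np.log((1.0 + np.sqrt(ratio)) * np.abs(q) / (1.0 + root)))

def n_slit(u, N):
    """sin^2(pi N u) / sin^2(pi u), with its limit N^2 at integer u."""
    s2 = np.sin(np.pi * u) ** 2
    return np.where(s2 < 1e-24, float(N) ** 2,
                    np.sin(np.pi * N * u) ** 2 / np.where(s2 < 1e-24, 1.0, s2))

def signal(x, N, f, delta_phi, ratio, nq=2001):
    q = np.linspace(-1 + 1e-6, 1 - 1e-6, nq)
    q = q[np.abs(q) > 1e-6]                       # skip the q = 0 singularity
    u = x[:, None] - q[None, :] * delta_phi / (2 * np.pi)
    blurred = (p_th(q, ratio) * n_slit(u, N)).sum(axis=1) * (q[1] - q[0])
    return np.sinc(f * x) ** 2 * blurred          # np.sinc(t) = sin(pi t)/(pi t)

x = np.linspace(-3.0, 3.0, 1201)                  # screen position in units of Delta x
for dphi in (0.0, 2 * np.pi, 4 * np.pi):          # increasing surface charge density
    S = signal(x, N=10, f=0.55, delta_phi=dphi, ratio=1.0)
    print(f"delta_phi = {dphi / np.pi:.0f} pi: peak/mean = {S.max() / S.mean():.1f}")
```

The crude peak-to-mean ratio printed above drops markedly as $\Delta \phi$ grows, mirroring the contrast loss shown in Fig. 7.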
In Fig. 7 we show the theoretically expected interference pattern for a prolate symmetric top molecule for three different charge densities. One observes that the signal contrast is significantly reduced even for moderate charge densities.
5 Discussion
Our experiment demonstrates that polar molecules are more prone to dephasing at nanomechanical diffraction gratings than non-polar but similarly polarizable objects. The effect is more pronounced for slow particles. This is consistent with our model of charged gratings which predicts a velocity-dependent broadening of the diffraction orders. We therefore attribute the leading dephasing mechanism for polar molecules at our nanomechanical gratings to local residual charges. Our observation may also be relevant for earlier experiments with polar particles which did not discuss this effect explicitly [30, 42].
The charges are most likely deposited during the fabrication process. Charge densities up to $10^{13}$ e/cm$^2$ were found in silicon nitride irradiated with a gallium ion beam [33]. While the electric conductivity of amorphous carbon depends on a number of parameters like fabrication, thickness and temperature, its value exceeds the respective value for SiO$_2$ by several orders of magnitude. Hence, electric charges implanted in SiO$_2$ are less likely to be neutralized and should result in stronger electric fields than in amorphous carbon. Eliminating the influence of local charges is ambitious since even...
nanometer-sized metal layers do not suffice to shield the fields and nearby grating support structures may also carry charges.
6 Conclusion
Our experimental results indicate that the interaction between residual charges in a nanomechanical diffraction mask and polar molecules can cause a significant loss of visibility in quantum interference. The comparison between two different grating materials shows that this effect is more dramatic for gratings with higher surface charge density (SiO$_2$). The theoretically predicted phase-averaging due to the rotation of the molecular dipole is qualitatively consistent with the experimental results.
We find that the use of material gratings as coherent manipulation elements in quantum optics becomes increasingly challenging for particles of high polarity. This suggests that future interferometry with large polar biomolecules will require at least one optical grating. However, when working with incoherent molecular sources, at least one absorptive grating is usually required to prepare the required spatial coherence. Nanomechanical masks can still serve this purpose even though they imprint field-induced phase shifts. In Talbot-Lau near-field interferometry [12] phase shifts and attractive forces in the first of three gratings do not scramble the final interference contrast, they may even enhance it by narrowing the effective slit transmission function. Molecules with a very large dipole moment may be removed from the beam only if the local grating attraction becomes too strong.
Acknowledgments. We acknowledge support by the European Commission (304886), the European Research Council (320694) and the Austrian science funds (DK CoQuS W1210–3). CB acknowledges the financial support of the Alexander von Humboldt foundation through a Feodor Lynen fellowship. TJ acknowledges support by the Gordon and Betty Moore Foundation. We thank Stefan Scheel and Johannes Fiedler (Univ. Rostock) for fruitful discussions and Lisa Wörner for assistance in measurements.
Appendix
A Statistics of free rotations
In this appendix we derive the classical thermal distribution $p_{\text{th}}(q)$ of the dynamic mean value $q = \langle \cos \theta \rangle$ for symmetric top molecules as required for our purposes. This distribution determines the probability density of deflection angles of polar particles traversing an inhomogeneous field of moderate strength [44–46]. In order to calculate $p_{\text{th}}(q)$, we consider a free symmetric top, with the classical Hamilton function
$$H_{\text{rot}}(\Omega, p_\Omega) = \frac{1}{2I} \left( \frac{(p_\varphi - p_\psi \cos \theta)^2}{\sin^2 \theta} + p_\theta^2 \right) + \frac{p_\psi^2}{2I_3}, \quad (A.1)$$
where $\Omega = (\varphi, \theta, \psi)$ are the Euler angles in the z-y'-z'' convention, $p_\Omega = (p_\varphi, p_\theta, p_\psi)$ are the canonically conjugate momenta and $1/2 < I/I_3 < \infty$. The three conserved quantities of the free rotor dynamics are the rotational energy (A.1) and the two angular momenta $p_\varphi$ and $p_\psi$, and they define the classical rotation state of the body. The dynamic mean value $\langle \cos \theta \rangle$ can be calculated by separation of variables and subsequent integration:
$$\langle \cos \theta \rangle = \frac{1}{\tau_{\text{rot}}} \int_0^{\tau_{\text{rot}}} dt \cos \theta(t) = \frac{p_\varphi p_\psi}{2E_{\text{rot}}I + p_\psi^2 (1 - I/I_3)}, \quad (A.2)$$
where the rotational period $\tau_{\text{rot}}$ of $\theta$-rotations is given by [38]
$$\tau_{\text{rot}} = \frac{2\pi I}{\sqrt{2E_{\text{rot}}I + p_\psi^2 (1 - I/I_3)}}. \quad (A.3)$$
We note the close resemblance between the classical mean value (A.2) and the quantum mechanical expectation value $Q_{\ell m k} = mk/\ell(\ell + 1)$.
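Both (A.2) and (A.3) are easy to verify numerically: $\varphi$ and $\psi$ are cyclic, so the $\theta$-motion can be integrated on its own and the time average of $\cos \theta$ compared with the closed form. A minimal sketch in Python; the moments of inertia, momenta and initial condition are arbitrary illustrative values:

```python
# Numerical check of Eqs. (A.2) and (A.3): integrate Hamilton's equations for
# the theta-motion generated by (A.1) and time-average cos(theta).
# All state values are arbitrary illustrative choices (dimensionless units).
import numpy as np
from scipy.integrate import solve_ivp

I, I3 = 2.0, 1.2              # moments of inertia, 1/2 < I/I3 (assumed)
p_phi, p_psi = 0.8, 0.5       # conserved angular momenta (assumed)
theta0, p_theta0 = 1.1, 0.3   # initial condition for the theta-motion (assumed)

# Conserved rotational energy (A.1), evaluated on the initial state.
c0, s0 = np.cos(theta0), np.sin(theta0)
E = ((p_phi - p_psi*c0)**2 / s0**2 + p_theta0**2) / (2*I) + p_psi**2 / (2*I3)

def rhs(t, y):
    """Hamilton's equations for (theta, p_theta); phi and psi are cyclic."""
    theta, p_theta = y
    c, s = np.cos(theta), np.sin(theta)
    dp = -(p_phi - p_psi*c) * (p_psi*s**2 - (p_phi - p_psi*c)*c) / (I * s**3)
    return [p_theta / I, dp]

tau = 2*np.pi*I / np.sqrt(2*E*I + p_psi**2 * (1 - I/I3))        # Eq. (A.3)
sol = solve_ivp(rhs, (0, 50*tau), [theta0, p_theta0],
                t_eval=np.linspace(0, 50*tau, 50001), rtol=1e-10, atol=1e-12)

q_numeric = np.cos(sol.y[0][:-1]).mean()     # average over 50 full periods
q_formula = p_phi*p_psi / (2*E*I + p_psi**2 * (1 - I/I3))       # Eq. (A.2)
print(f"<cos theta>: numeric = {q_numeric:.6f}, Eq. (A.2) = {q_formula:.6f}")
```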
The thermal distribution $p_{\text{th}}(q)$ of the mean value $q = \langle \cos \theta \rangle$ is defined by
$$p_{\text{th}}(q) = \int d\Omega dp_\Omega \delta[q - \langle \cos \theta \rangle] p_{\text{th}}(\Omega, p_\Omega), \quad (A.4)$$
where $p_{\text{th}}(\Omega, p_\Omega)$ is the Boltzmann distribution of the rotational energy (A.1). The distribution (A.4) can be evaluated by following the derivation presented in [38], and a straightforward calculation yields the probability density (5). It is normalized, even and independent of the temperature $T$. Thus its first moment $\langle q \rangle$ vanishes and its second moment can be calculated to be
$$\langle q^2 \rangle = \frac{1}{3} \frac{1}{1 - I/I_3} \left( 1 - \sqrt{\frac{I}{I_3}} \, \frac{\arcsin \sqrt{1 - I/I_3}}{\sqrt{1 - I/I_3}} \right). \quad (A.5)$$
The second moment is strictly decreasing for increasing $I/I_3$; for spherical tops, $I/I_3 = 1$, it takes the value $\langle q^2 \rangle = 1/9$. In the linear rotor limit $I/I_3 \to \infty$ the second moment tends towards zero, the distribution (5) approaches the $\delta$-distribution, $p_{\text{th}}(q) \to \delta(q)$, and the mean value $\langle \cos \theta \rangle$ vanishes for all rotation states.
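The moments of $p_{\text{th}}(q)$ can likewise be checked by sampling the Boltzmann ensemble directly. At fixed $\theta$ the momenta are Gaussian, with $(p_\varphi - p_\psi \cos \theta)/\sin \theta$, $p_\theta$ and $p_\psi$ independent, and the orientation marginal is flat in $\cos \theta$, so samples of $q$ follow immediately from (A.2). A minimal Monte Carlo sketch in Python; $k_B T$ is arbitrary and drops out, as stated above, and $I_3$ is set to one:

```python
# Monte Carlo check of (A.4)/(A.5): sample the Boltzmann ensemble of the
# Hamiltonian (A.1) and average q and q^2 computed via Eq. (A.2).
import cmath
import numpy as np

rng = np.random.default_rng(1)

def q2_analytic(r):
    """Second moment (A.5) as a function of r = I/I_3 (r != 1); cmath.asin
    supplies the analytic continuation to r > 1 automatically."""
    z = cmath.sqrt(1 - r)
    return ((1 - cmath.sqrt(r) * cmath.asin(z) / z) / (3 * (1 - r))).real

def q_moments_mc(I, I3=1.0, kT=1.0, n=1_000_000):
    """Sample orientations and momenta from exp(-H/kT) and return <q>, <q^2>."""
    u   = rng.uniform(-1.0, 1.0, n)              # cos(theta): flat marginal
    s   = np.sqrt(1.0 - u**2)
    p_t = rng.normal(0.0, np.sqrt(I  * kT), n)   # p_theta
    p_p = rng.normal(0.0, np.sqrt(I3 * kT), n)   # p_psi
    x   = rng.normal(0.0, np.sqrt(I  * kT), n)   # (p_phi - p_psi cos)/sin
    p_f = p_p * u + s * x                        # p_phi
    E   = (x**2 + p_t**2) / (2*I) + p_p**2 / (2*I3)   # rotational energy (A.1)
    q   = p_f * p_p / (2*E*I + p_p**2 * (1 - I/I3))   # Eq. (A.2)
    return q.mean(), (q**2).mean()

for r in (0.6, 2.0, 10.0):                       # I/I_3, with I_3 = 1
    mq, mq2 = q_moments_mc(I=r)
    print(f"I/I3 = {r:4.1f}:  <q> = {mq:+.4f},  <q^2> MC = {mq2:.4f}"
          f"  vs (A.5) = {q2_analytic(r):.4f}")
```

Across the oblate ($I/I_3 < 1$) and prolate ($I/I_3 > 1$) regimes the sampled $\langle q^2 \rangle$ reproduces (A.5), and $\langle q \rangle$ vanishes within statistical error.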
**Key words.** Matter-wave interferometry, biomolecules, dephasing, hypericin, dipole moment.
**References**
[1] D. W. Keith, M. L. Schattenburg, H. I. Smith, and D. E. Pritchard, *Phys. Rev. Lett.* **61**, 1580 (1988).
[2] W. Schöllkopf and J. P. Toennies, *Science* **266**, 1345 (1994).
[3] J. D. Perreault, A. D. Cronin, and T. A. Savas, *Phys. Rev. A* **71**, 053612 (2005).
[4] R. E. Grisenti, W. Schöllkopf, J. P. Toennies, G. C. Hegerfeldt, and T. Köhler, *Phys. Rev. Lett.* **83**, 1755 (1999).
[5] V. P. A. Lonij, C. E. Klauss, W. F. Holmgren, and A. D. Cronin, *Phys. Rev. Lett.* **105**, 233202 (2010).
[6] M. Arndt, O. Nairz, J. Vos-Andreae, C. Keller, G. van der Zouw, and A. Zeilinger, *Nature* **401**, 680 (1999).
[7] R. E. Grisenti, W. Schöllkopf, J. P. Toennies, G. C. Hegerfeldt, T. Köhler, and M. Stoll, *Phys. Rev. Lett.* **85**, 2284 (2000).
[8] R. B. Doak, R. E. Grisenti, S. Rehbein, G. Schmahl, J. P. Toennies, and C. Wöll, *Phys. Rev. Lett.* **83**, 4229 (1999).
[9] T. Reisinger, S. Eder, M. M. Greve, H. I. Smith, and B. Holst, *Microelectron. Eng.* **87**, 1011 (2010).
[10] D. W. Keith, C. R. Ekstrom, Q. A. Turchette, and D. E. Pritchard, *Phys. Rev. Lett.* **66**, 2693 (1991).
[11] J. F. Clauser and S. Li, *Phys. Rev. A* **49**, R2213 (1994).
[12] B. Brezger, L. Hackermüller, S. Uttenthaler, J. Petschinka, M. Arndt, and A. Zeilinger, *Phys. Rev. Lett.* **88**, 100404 (2002).
[13] M. S. Chapman, T. D. Hammond, A. Lenef, J. Schmiedmayer, R. A. Rubenstein, E. Smith, and D. E. Pritchard, *Phys. Rev. Lett.* **75**, 3783 (1995).
[14] L. Hackermüller, K. Hornberger, B. Brezger, A. Zeilinger, and M. Arndt, *Nature* **427**, 711 (2004).
[15] K. Hornberger, S. Uttenthaler, B. Brezger, L. Hackermüller, M. Arndt, and A. Zeilinger, *Phys. Rev. Lett.* **90**, 160401 (2003).
[16] A. Lenef, T. D. Hammond, E. T. Smith, M. S. Chapman, R. A. Rubenstein, and D. E. Pritchard, *Phys. Rev. Lett.* **78**, 760 (1997).
[17] C. Ekstrom, J. Schmiedmayer, M. Chapman, T. Hammond, and D. Pritchard, *Phys. Rev. A* **51**, 3883 (1995).
[18] M. Berninger, A. Stefanov, S. Deachapunya, and M. Arndt, *Phys. Rev. A* **76**, 013607 (2007).
[19] C. Brand, M. Sclafani, C. Knobloch, Y. Lilach, T. Juffmann, J. Kotakoski, C. Mangler, A. Winter, A. Turchanin, J. Meyer, O. Cheshnovsky, and M. Arndt, *Nat. Nanotechnol.* **10**, 845 (2015).
[20] P. Sonnentag and F. Hasselbach, *Phys. Rev. Lett.* **98**, 200402 (2007).
[21] J. R. Anglin, J. P. Paz, and W. H. Zurek, *Phys. Rev. A* **55**, 4041 (1997).
[22] Y. Levinson, *J. Phys. A: Math. Gen.* **37**, 3003 (2004).
[23] P. Machnikowski, *Phys. Rev. B* **73**, 155109 (2006).
[24] G. Gronniger, B. Barwick, H. Batelaan, T. Savas, D. Pritchard, and A. Cronin, *Appl. Phys. Lett.* **87**, 124104 (2005).
[25] B. J. McMorran and A. D. Cronin, *New J. Phys.* **11**, 033021 (2009).
[26] S. Eibenberger, S. Gerlich, M. Arndt, J. Tüxen, and M. Mayor, *New J. Phys.* **13**, 043033 (2011).
[27] Z. Saddique, I. Naeem, and A. Maimoona, *J. Ethnopharmacol.* **131**, 511 (2010).
[28] P. Agostinis, A. Vantieghem, W. Merlevede, and P. A. M. de Witte, *Int. J. Biochem. Cell Biol.* **34**, 221 (2002).
[29] T. A. Theodossiou, J. S. Hothersall, P. A. de Witte, A. Pantos, and P. Agostinis, *Mol. Pharm.* **6**, 1775 (2009).
[30] T. Juffmann, A. Milic, M. Müllneritsch, P. Asenbaum, A. Tsukernik, J. Tüxen, M. Mayor, O. Cheshnovsky, and M. Arndt, *Nat. Nanotechnol.* **7**, 297 (2012).
[31] E. Saiz, J. P. Hummel, P. J. Flory, and M. Plavsic, *J. Phys. Chem.* **85**, 3211 (1981).
[32] K. Walter, B. A. Stickler, and K. Hornberger, *Phys. Rev. A* **93**, 063612 (2016).
[33] S. Yogev, J. Levin, M. Molotskii, A. Schwarzman, O. Avayu, and Y. Rosenwaks, *J. Appl. Phys.* **103**, 064107 (2008).
[34] C. Brand, J. Fiedler, T. Juffmann, M. Sclafani, C. Knobloch, S. Scheel, Y. Lilach, O. Cheshnovsky, and M. Arndt, *Ann. Phys. (Berlin)* **527**, 580 (2015).
[35] S. Nimmrichter and K. Hornberger, *Phys. Rev. A* **78**, 023612 (2008).
[36] M. Born and E. Wolf, Principles of Optics (Pergamon Press, Oxford, 1993).
[37] J. D. Jackson, Classical Electrodynamics (Wiley, New York, 1999).
[38] B. A. Stickler and K. Hornberger, *Phys. Rev. A* **92**, 023619 (2015).
[39] C. H. Townes and A. L. Schawlow, Microwave Spectroscopy (Dover, New York, 1975).
[40] A. R. Edmonds, Angular Momentum in Quantum Mechanics (Princeton University Press, Princeton, 1996).
[41] D. M. Brink and G. Satchler, Angular Momentum (Oxford Science Publications, Oxford, 2002).
[42] W. Schöllkopf, R. E. Grisenti, and J. P. Toennies, *Eur. Phys. J. D* **28**, 125 (2004).
[43] C. Etzlstorfer and H. Falk, *Chem. Month.* **129**, 855 (1998).
[44] E. Gershnabel and I. Sh. Averbukh, *Phys. Rev. Lett.* **104**, 153001 (2010).
[45] E. Gershnabel and I. Sh. Averbukh, *J. Chem. Phys.* **135**, 084307 (2011).
[46] E. Gershnabel and I. Sh. Averbukh, *J. Chem. Phys.* **134**, 054304 (2011).
THE ALUMNÆ BULLETIN
June, 1935
ISSUED BY THE ALUMNAE ASSOCIATION OF THE NEW YORK TRAINING SCHOOL FOR DEACONESSES
OFFICERS OF THE ASSOCIATION
Deaconess Kate Mayer .................................................. President
Grace Church, 802 Broadway, New York City
Miss Florence S. Platt .................................................. Vice-President
St. Paul’s Cathedral, Boston, Massachusetts
Miss Erma Scott .......................................................... Secretary
Church of the Advocate, 181st and Washington Avenue, New York City
Deaconess Edith Booth .................................................. Treasurer
St. Mark’s Mission, Dante, Virginia
MEMBERS OF THE EXECUTIVE COMMITTEE
Miss Marion Holmes
Miss Neville Landstreet
Miss Virginia Zimmerman
EDITOR OF THE BULLETIN
Deaconess Amy Thompson
STANDING COMMITTEES
Membership
Deaconess Dahlgren
Miss Lucille Moore
Scholarship
Miss Mary Frances Bemont, Chairman
COMMENCEMENT AT ST. FAITH'S
Commencement Day, May 16, 1935, was about as perfect as one could ever expect it to be. The weather, which had been whimsical for some time, became ideal. The procession from the house was led by a dignified marshal across the Close to the Cathedral and the School Chapel, St. Ansgarius, where the altar was made more beautiful by the pure calla lilies sent from Florida by a former student (Marian Perkins Blackford), who for the last two years has given this thoughtful and gracious remembrance.
The candidate for the Diaconate, Clara Searle (1924) was Set Apart by Bishop Manning, acting for the Bishop of Bethlehem. The Service, as always, was most impressive, and Bishop Manning's charge to the prospective Deaconess and the graduating class was forceful and inspiring. His interest in and affection for the School is a source of strength to us all.
This short account will revive many memories for those who have shared Commencement joys in the past, and they can picture the animated scene in the Refectory when the Junior Class assumed its first responsibility without Senior assistance. The Alumnae meeting after lunch was well attended, and the present energetic and competent President, Deaconess Mayer, called on the Missionary graduates for reports of their work. We listened with keen interest to Deaconess Riebe from China, Deaconess Fracker from Nevada, and Deaconess Dudley from the Virginia Missions, all giving vivid and stirring accounts. The Senior Class gave their immediate plans for the summer, which may interest the members of the Alumnae who were not present.
Mrs. Edith Eldridge Cooper, St. Clement's Church, New York.
Lucy T. Fletcher, Young People's Work, Diocese of Asheville.
Elizabeth Gibble not present on account of illness. Summer work therefore deferred.
Agnes E. Hickson, appointed Director of Religious Education, Missionary District of North Dakota.
Matilda L. Keyser, St. Anne's Mission, Alberene, Virginia.
Evelyn Marden, Diocesan Missions, Arcadia, Rhode Island.
Mary Pattee, Counselor, Holiday House, New Jersey.
Rhoda Williams, Recreational Director, Children's Home Summer Camp, Beacon, New York.
Eight of the ten members of the Junior Class will be taking the usual Hospital work at St. Luke's, and are facing their summer of nursing with mingled feelings of apprehension and pleasure. One member of the class, who is a graduate nurse, is going to a Church camp in that capacity, and another to our great regret was called home by the illness of her mother.
The new Deaconess, Clara Searle, received many good wishes and congratulations. It gave us happiness to have a graduate return for her Setting Apart. Each Deaconess Set Apart from this School connects it afresh with its Founder and past history. Into the past goes the year just finished—a happy one, and full of blessings.
June, 1935.
Romola Dahlgren, Deaconess
Jane Bliss Gillespy, Deaconess
LETTER FROM THE PRESIDENT OF THE ALUMNAE ASSOCIATION
Dear Members of the Alumnae Association:
With this thirty-fifth issue of our Bulletin, I want to send you my greetings. This last year has been an interesting experience for me as your President, and most stimulating in its contacts with our Executive Committee and fellow members, and the associations that we have had together at St. Faith's.
The News Letters, so ably edited by Deaconess Thompson, have kept us informed on vital topics of interest to the whole Association, and the Scholarship
Fund has shown not only an increase over what we had this time last year, but even more encouraging, a larger number of contributors.
In the coming year, I trust, we may realize a deeper and fuller fellowship, that will in all respects stand for the ideal and vision of a living Christianity.
The following poem, quoted from The Churchman, and written by Molly Anderson Haley, I am sending with a wish for happy, restful days of recreation.
Thy blessing, Lord, on all vacation days!
For weary ones who seek the quiet ways,
Fare forth beyond the thunder of the street,
The marvel of Emmaus Road repeat;
Thy comradeship so graciously bestow
Their hearts shall burn within them as they go.
Grant those who turn for healing to the sea
May find the faith that once by Galilee
Flamed brighter than the glowing fire of coals.
And when Thou hast refreshed their hungry souls,
Speak the old words again, beside the deep.
Bid all who love Thee, Master, feed Thy sheep!
Be Thou with those who bide where mountains rise,
Where yearning earth draws nearest to the skies!
Give them the peace, the courage that they ask:
New strength to face the waiting valley task,
New light to lead through shrouding valley haze!
Thy blessing, Lord, on all vacation days!
June, 1935.
Faithfully yours,
KATE S. MAYER, Deaconess.
The 65th regular meeting of the Alumnae Association of the New York Training School for Deaconesses was held at St. Faith's House on Commencement Day, Thursday, May sixteenth.
The meeting was opened with prayer by the President, Deaconess Mayer, after which the Warden of the School, Dr. Shepard, spoke briefly of the work of the School and of the comfort and encouragement given to the students by the Alumnae. He also referred to the School as a rallying place for workers of the Church who may come to it for conference and stimulation.
The minutes of the January meeting were read and accepted.
The Treasurer's report was read and ordered on file.
Deaconess Thompson asked for an expression of opinion regarding the desirability of continuing the News Letter another year. This led inevitably to the old question of the Bulletin and whether it might not well be abandoned in favor of the cheaper News Letter. There was some discussion pro and con, which was limited by the President saying that the Executive Committee favored the printing of at least one issue of the Bulletin a year supplemented by the News Letter.
A motion was made by Mrs. Cayley that there be a summer issue of the Bulletin, but at a cheaper rate of printing, if possible.
This motion was seconded and carried.
Miss Mary Frances Bemont was appointed Chairman of the newly formed Scholarship Committee.
Members were asked to fill out questionnaires to be used as guides by the Executive Committee in planning for the monthly meetings of the Alumnae at St. Faith's next winter.
Miss Edith Hopkins read a most interesting paper on the meetings held at St. Faith's last winter.
The chief inspiration of the meeting came with the accounts of work in the field given by three graduates of the School, Deaconess Riebe of China, Deaconess Fracker of Nevada, Deaconess Dudley of the Virginia Missions.
For lack of time, these accounts were all too short, but sounded a real challenge for prayer and gifts and joyful service.
Members of the graduating class introduced themselves and told of plans for summer work.
Deaconess Clara Searle, class of 1924, Set Apart at the Commencement Service, is to be on the staff of St. Faith's.
The meeting closed with prayer.
Respectfully submitted,
Erma G. Scott, Secretary.
TREASURER'S REPORT
January 1 to June 15, 1935
GENERAL FUND
Receipts—
Balance on hand, January 1, 1935 ........................................ $65.57
Dues received, January 1 to June 15, 1935 .......................... 193.00
Total ......................................................... $258.57
Disbursements—
Treasurer's Expenses .................................................. $6.54
Secretary's Expenses ................................................... 5.00
Editing News Letter .................................................... 3.60
Total disbursements .................................................. $15.14
Balance on hand, June 15, 1935 ........................................ $243.43
SCHOLARSHIP FUND
Receipts—
Balance on hand, January 1, 1935 ........................................ $29.68
Donations received—January 1 to June 15, 1935 ..................... 317.50
Interest ............................................................... 1.62
Total Receipts ......................................................... $348.80
Disbursements—
January 5, 1935, paid to New York Training School for Deaconesses to close 1934 Scholarship Fund .......................... $29.68
Balance on hand, June 15, 1935 ........................................ $319.12
REACTION TO CERTAIN RECENT MEETINGS AT ST. FAITH'S
The two evenings spent at St. Faith's as guest of the School gave me the feeling that such gatherings might prove exceedingly worthwhile. Many a woman is most attractive and at her ease in her own or official home. Thus when a group of women, as a corporate body, such as a School, turns hostess and welcomes her Alumnae, a warmth of feeling is engendered which any number of meetings on neutral ground might fail to bring about. Yet for a busy household like that of St. Faith's to devote an evening a month to this cause may not prove easy. On the other hand, such hospitality might result in thorough understanding and friendliness tending to foster that continuity in the history and tradition of the School which would be stimulating to all concerned. The experiment seems decidedly worth trying, and time would show whether enough of the alumnae would avail themselves of the open evening to make hostesses and guests feel that the movement was helpful and constructive.
"To form new friendships and to keep fine old friendships in repair" is no slight task, and requires time and thought. Yet is there anything much more worthwhile?
Dr. Huntington was fond of referring to the elect lady of the second Epistle of St. John, whom, if I recall rightly, he regarded as a Church, whose children were walking in truth. St. John says that he will not write with paper and ink, but trusts shortly to come to them and speak face to face, that our joy may be full. In these days of many problems, would it not be a fine thing if the school, which Dr. Huntington founded, might come to stand for conferences of Christian Friends in Council, such as these evenings at St. Faith's would make possible?
EDITH R. HOPKINS,
PERSONALS
1901—Deaconess Lillian M. Yeo, upon her graduation in 1901, was made Head of the House of Mercy in Washington, D. C. This is a Church institution, where unmarried mothers are accepted in a spirit of understanding, sheltered with their babies and educated. The Deaconess' widely commended administration of the home has continued unbroken for thirty-four years, and during all these years she has provided spiritual leadership and strength for the more than five hundred young women who have been resident at the House of Mercy. The work is supported by endowment and an annual Garden Party, which is one of the events of the Washington season.
1905—Mrs. Cameron McRae (Sallie Woodward) has sent out wedding invitations for her oldest daughter, Elizabeth, who is to marry Mr. Stephen Goddard, of Shanghai. Another daughter is a volunteer at Dante, Virginia, with Deaconess Maria Williams (1911). Her oldest son, Cameron, married two years ago and has just obtained the degree of M. D. Some will remember those famous medical courses given in East Twelfth Street by Sallie Woodward and Gertrude Welton.
* * *
1906—Deaconess Woodward had, for the second year, a Missions Study Class in Lent at the Church of St. Mary the Virgin, New York City. In January Deaconess Woodward paid two brief visits to St. Thomas, Virgin Islands, and had delightful meetings with Deaconess English and Deaconess Grace Smith. It seemed a happy place in which to work and live. Deaconess Woodward is to spend the summer in California.
* * *
1908—Deaconess Frances Affleck is having a delightful vacation in England, Scotland and on the Continent.
* * *
1908—Deaconess Armstrong, who has been ill all winter, has gone to Castleton, Vermont, for the summer, much improved in health, and hopes to be back at work in the Autumn.
* * *
1909—After eleven happy years as Head-worker at St. John’s House, Philadelphia, Deaconess Viola Young’s service at St. John’s House came to an end in June, when the house was closed on account of the lack of funds. Some idea of the work done at St. John’s may be gotten from the fact that since last June about thirty thousand children—colored and of foreign born parentage—have used the house through the Library, the week day Church School, the Girls’ Friendly and other activities, as well as the Church School on Sundays.
* * *
1913—Althea Bremer, head of St. Faith’s School for Girls, Yangchow, China, is sailing from Seattle July 20th, after regular furlough, on the President Jackson.
* * *
1915—Deaconess Gilliland, St. Faith’s House, Salina, Kansas, came East for the Wellesley Conference and to speak at Woman’s Auxiliary groups.
* * *
1921—Deaconess Fracker, of Nevada, has been in the East for vacation and speaking engagements, and on her return to Nevada will be on the staff at the Lake Tahoe Conference.
1922—Alice King Potter is returning from Laramie, Wyoming, for Parish work in Troy, New York. She is to be Director of Religious Education at St. Paul's Church, and to have charge of the Young People's Fellowship and Student work, under the Reverend A. Abbott Hastings, formerly Warden of St. Michael's, Ethete, Wyoming.
1925—Deaconess Margaret Bechtol is now in the States recovering from a serious operation. She will return to her work in Puerto Rico in a month or so.
1925—Virginia Zimmerman was an instructor at the newly organized Conference in the Diocese of Newark, and will be on the staff of the Long Island Conference, in her own Diocese.
1927—Since Virginia Cary has been at St. Anne's Mission, Alberene, Virginia, the work has grown greatly. The congregation has increased so that they are to have a new Church, which is being moved from a closed Mission across the mountains. Lillian Brown (1933) deserves much credit for her able part in building up this Mission.
1929—Virginia Bouldin writes—"It is almost impossible to send word of my work at Valle Crucis School. It is so much of the same thing. Finances are not usually very interesting, and that's what much of my time and thought go to, trying to stretch the dollars and keep the family comfortable and happy. During the second semester, I have had a class of small girls for Bible, also reading a group of African stories, and constructing a village as an outcome of these stories. I have also a class in Household Arts. Miss Pember (1929) is doing splendid work. In addition to her classwork, which is going well, she has a combined school and neighborhood Choir. She has also an Altar Guild training class in which the girls are much interested. We had new vestments for Palm Sunday, good music and a very lovely Service, the Altar decorated with palms, and the Choir carrying them in procession. The Easter Service was lovely, too. All of which means a great deal of time and work. So you see there are not many idle moments for Satan to nibble at."
1929—Deaconess Trask had the happiness this month of seeing the Chapel of the Transfiguration at Arcadia, Rhode Island, dedicated by Bishop Perry. This Chapel is a part of the Church Center, Holcomb House, which now among its year-around activities carries on the work begun by Deaconess Dahlgren and Deaconess Gillespy at Austin Priory.
1930—Deaconess Hutton, Pine Grove Hollow, was in a very serious automobile accident; as she is now in a cast, the extent of her injuries has not been determined. Cecilia Nelson (1927), who had been called home on account of the illness of her sister, returned immediately to Pine Grove Hollow.
1930—Deaconess Harriet English, on furlough from the Virgin Islands, is due in New York early in August.
* * *
1931—Deaconess Anne Tucker is at the State Industrial Farm at Goochland, Virginia, which is a correctional institution and Virginia’s latest link in her penal system. The work is to rehabilitate and reinstate into society underprivileged women, corrected as far as possible physically, mentally and morally. The Deaconess serves as relief officer, record clerk, acting assistant superintendent and chaplain. The work offers great spiritual possibilities, as the emphasis is on a truer, richer and greater conception of life, and on Our Master’s way as the Way.
* * *
1932—Deaconess Edith Booth, Dante, Virginia, has gone to the Conference at Concord, New Hampshire, as a representative of missionary work in the South.
* * *
1932—Esther Matz, who has been taking the place of Deaconess Todd in Moapa, Nevada, writes that the work at Boulder City has increased threefold. The Bishop has made four visitations since December.
* * *
1932—Katherine Jones is with the City Missions Society of New York City as Assistant Parole Officer, and is living at St. Barnabas House.
* * *
1932—Deaconess Lillian Crow, Hawthorne, Nevada, writes—"My work is most interesting and worthwhile. Think of teaching a little group of children, I found in an abandoned town, the wonderful stories of Our Lord for the first time! How they listened with bated breath, and then translated to the Mexican mother, who speaks no English. I stop now every week with them, and the storekeeper, a woman, is trying to get us the title to an abandoned cabin to use for a Chapel. Of course, it is hard, every step has to be won by supreme courage to persevere in face of apparent failure, but so many steps have been taken this year that I’ve been here. We have had to cut our salaries so the work could go on, but we seem to get along all right anyway. God blesses what we have."
* * *
1932—Deaconess Ormerod, Munising, Michigan, besides her regular work of two Parishes and three Missions is to be the Dean of Girls again this year at the Conference of the Diocese of Marquette.
* * *
1932—Eleanor Snyder, Canal Zone, writes, "The people of Panama have shown a splendid interest about the Home and seem always willing and glad to give when possible. During the children’s vacation, February to May, we hold intercessions and a gratifying number of the children attend, although
it is purposely not made compulsory. The Bishop came over a few weeks ago and confirmed two of our children, making eight confirmations altogether."
* * *
1933—Virginia C. Reed (Mrs. Walter) after August 1st will be in Santee, Nebraska, where her husband, the Reverend Walter Reed, is to be in charge of three Chapels, with an Indian catechist in each. This work, although in the State of Nebraska, is included in the Missionary District of South Dakota.
* * *
1934—Mary Hall, Madison, Wisconsin, is to be in New York this summer taking special courses in Religious Education in the Columbia Summer Session.
* * *
1934—Mary Frances Bemont, Grace Church, White Plains, is to be in charge of the Girls' Dormitory at the Valley Forge Conference.
* * *
1934—Deaconess Heath Dudley, Chula, Virginia, writes—"Come and rejoice with me. After six months of boarding, under crowded conditions, a tiny three-room house has been rented for the worker at Grub Hill. Those of you who have lived under uncomfortable conditions will truly rejoice with me that that is ended. It was, however, an experience of much value, since it gave me an insight into some of the problems of the local people that could never be gotten in any other way. No one ever came to see me when I boarded. In March, 339 people came to the Community House, for one purpose or another. In April 334 came, so apparently the house means something to the Community, other than merely the housing place of the deaconess."
DIRECTORY
1908—Affleck, Dss. Frances, 1711 South Grand Boulevard, St. Louis, Missouri.
1908—Armstrong, Dss. Anna R., 419 West 110th St., New York City.
1928-30(Spcl.)—Ashley, Miss Mary Janet, 19 Rowley St., Rochester, N. Y.
1903—Barlow, Dss. Mary, 3052 Kingsbridge Ave., New York City.
1933—Bateman, Dss. Margaret, 1147 15th St., N. W., Washington, D. C.
1915—Baxter, Mrs. Robert (Marion Frascello), 188-41 Keesville Ave., Hollis, New York.
1894—Beard, Dss. Theodora, 94 Fourth Ave., New York City.
1911—Bearse, Miss Mary, 207 East 16th St., New York City.
1925—Bechtol, Dss. Margaret, Box 68, Mayaguez, Puerto Rico.
1906-7—(Spcl.)—Bedell, Dss. Harriet, Everglades, Collier County, Florida.
1922—Beeny, Miss Clara, 22 Richmond St., New Bedford, Mass.
1918—Bellsmith, Mrs. H. W., Jr. (Ethel Bunce), Box 589, Islip, New York.
1934—Bemont, Miss Mary Frances, 33 Church St., White Plains, New York.
1934—Benson, Miss Elizabeth, St. Mark's Church, Mt. Kisco, New York.
1915—Bentley, Mrs. Cedric C. (Elsie Van Vechten), 2471 Glenwood Ave., Toledo, Ohio.
1915—Binns, Dss. Margaret, Nora, Dickinson County, Virginia.
1913-14—(Spcl.)—Blackford, Mrs. Ambler (Maria Perkins), P. O. Box 5007, South Jacksonville, Florida.
1922—Bloodgood, Mrs. F. G. (Jane Cleveland), 1102 Lincoln St., Madison, Wisconsin.
1906—Boorman, Dss. Elizabeth, 31 Prospect St., Hagerstown, Maryland.
1932—Booth, Dss. Edith, Dante, Dickinson County, Virginia.
1929—Bouldin, Miss Virginia, Valle Crucis School, Valle Crucis, North Carolina.
1933—Bowers, Miss Ethel, Blue Ridge Industrial School, Bris, Virginia.
1900—Boyd, Dss. Charlotte, 122 East 82nd St., New York City.
1923—Bradley, Dss. Agnes, Stamford Hospital, Stamford, Connecticut.
1913—Bremer, Miss Althea, American Church Mission, Yangchow, China.
1928-30—Brink, Mrs. S. E. (Edythe Jenkins), 920 East Drinker St., Dunmore, Pennsylvania.
1913—Brown, Miss Annie, Chickering House, Dedham, Massachusetts.
1920—Brown, Miss Elenora, St. Alban's School, Washington, D. C.
1933—Brown, Miss Lillian, St. Anne's Mission, Alberene, Virginia.
1923—Buchanan, Miss Evelyn, 325 Oliver Ave., Pittsburgh, Pennsylvania.
1912—Butts, Dss. Bertha, 35 South Franklin St., Wilkes-Barre, Pennsylvania.
1904—Carroll, Deaconess Mary, 3508 Lowell St., N. W., Washington, D. C.
1927—Cary, Miss Virginia, St. Anne's Mission, Alberene, Virginia.
1930—Cayley, Mrs. Murray (Arline Herting), 228 Elizabeth Ave., Elizabeth, New Jersey.
1927—Chapman, Miss Dennis Scott, American Church Mission, Kyoto, Japan.
1912—Chappell, Miss Edith, 30-43 36th St., Astoria, New York.
1912—(Spcl.)—Chappell, Dss. Elizabeth, 30-43 36th St., Astoria, New York.
1929—Clark, Miss Dorothy, 409 North Charles St., Baltimore, Maryland.
1913—Coe, Dss. Elizabeth, 26 Richards St., Worcester, Mass.
1935—Cooper, Mrs. Edith, 22 North Cowley Road, Riverside, Illinois.
1924—Cowan, Miss Florence, St. Andrew's Mission, Blue Ridge, Harper's Ferry, West Virginia.
1912-13—(Spcl.)—Craig, Miss Louise, 1123 Summit Place, Utica, New York.
1903—Creasey, Mrs. Sidney W. (Catherine Shaw), 328 Colson St., Gainesville, Florida.
1932—Crow, Dss. Lillian, Hawthorne, Nevada.
1914—Dahlgren, Dss. Romola, St. Faith's House, 419 West 110th St., N. Y. C.
1919—Denton, Miss Grace, Caribou, Maine.
1927—Dickson, Miss L. Elizabeth, Tenma Yama, no ne, Nara, Japan.
1922—Dieterly, Dss. Hilda, 1147 15th St., N. W., Washington, D. C.
1915—Diggs, Miss Eveline, Sagada, Mountain Province, Philippine Islands.
1927—Dowding, Dss. Dorothy, 26 West 84th St., New York City.
1911—Drake, Miss Aimee, 1221 Ashland Ave., Wilmette, Illinois.
1934—Dudley, Dss. Heath, Grub Hill, Chula, Amelia County, Virginia.
1916—Duffie, Dss. Dorothy, 1105 Quarrier St., Charlestown, West Virginia.
1930—Dugdale, Mrs. Arthur (Elizabeth Cabell), Ashland, Virginia.
1919—Durston, Mrs. Gilbert (Eleanor Dearing), 2754 Armand Place, St. Louis, Missouri.
1922—Eastwood, Miss Edna, Room 305, 150 Fifth Avenue, New York City.
1930—English, Dss. Harriet, St. Thomas, Virgin Islands.
1913—Flagg, Miss Helen, 91-13 216th St., Queens Village, Long Island.
1935—Fletcher, Miss Lucy, 295 Cumberland Ave., Asheville, North Carolina.
1921—Fracker, Dss. Elizabeth, St. Barnabas Mission, Wells, Nevada.
1910—Fuller, Dss. Helen, 211 South Ashland Boulevard, Chicago, Illinois.
1900—Garvin, Dss. Bertha, 802 Broadway, New York City.
1913—Gillespy, Dss. Jane, St. Faith's House, 419 W. 110th St., New York City.
1915—Gilliland, Dss. Anna, St. Faith's House, 714 North 9th St., Salina, Kansas.
1927—Gledhill, Mrs. Charles (Dorothy Williams), 75 Oakdene Ave., Grantwood, New Jersey.
1927—Gray, Miss Lucy, Box 333, Swannanoa, North Carolina.
1909—Griebel, Dss. Apauline, 13 Trumbull St., New Haven, Connecticut.
1928—Griswold, Miss Priscilla, 343 West 122nd St., New York City.
1934—Hall, Miss Mary I., 1101 University Ave., Madison, Wisconsin.
1930—Harris, Miss Gertrude, 858 West End Avenue, New York City.
1899—Hartshorne, Mrs. Charles (Sarah Stewart), 703 West University Parkway, Baltimore, Maryland.
1923—Harvey, Miss Avis, 1820 Scenic Ave., Berkeley, California.
1914—Hemphill, Dss. Rachel, 86 Sherwood Place, Greenwich, Connecticut.
1924—Hibbard, Miss Margery, 341 East 87th St., New York City.
1935—Hickson, Miss Agnes, Care Bishop Bartlett, 206 8th St., Fargo, North Dakota.
1912—Hiestand, Miss Estelle, 213 West 69th St., New York City.
1928—Hillman, Miss Sophie, South Amboy, New Jersey.
1907—Hobart, Dss. Mabel, 45 Monroe St., Brooklyn, New York.
1911—Holmes, Miss Marion, 225 West 99th St., New York City.
1895—Hopkins, Miss Edith, 130 East 57th Street, New York City.
1930—(Spcl.)—Hutton, Dss. Mary Sandys, St. George's Mission, Pine Grove Hollow, Stanley, Virginia.
1931—Hutton, Mrs. S. Janney (Nancy Chamberlain), Salisbury School, Salisbury, Connecticut.
1902—Hyde, Dss. Harriet, Box 84, Middle Haddam, Connecticut.
1923—Jareaux, Miss Barbara, 203 North Wabash Ave., Chicago, Illinois.
1924—Jackson, Miss Gladys, 56 Third St., Garden City, Long Island.
1932—Jones, Miss Katherine, St. Barnabas House, 304 Mulberry St., New York City.
1919—Kent, Miss Lucy, 140-34 Franklin Ave., Flushing, New York.
1935—Keyser, Miss Matilda, Dover, Ohio.
1915-16—(Spcl.)—King, Miss Jennie, 208 College Ave., Elmira, New York.
1894—Knapp, Dss. Susan, 9 St. Paul's University, Ikebukuro, Tokyo, Japan.
1907—Knepper, Dss. Laura, 211 S. Ashland Boulevard, Chicago, Illinois.
1934—Landstreet, Miss Neville, Church Mission of Help, 27 West 25th St., New York City.
1917—Languedoc, Miss Emily, 21 Church St., Amsterdam, New York.
1910-11—(Spcl.)—Lewis, Mrs. Russel (Harleston Gesner), Kingsport, Nova Scotia.
1931—L'Heureux, Mrs. Sara, 223 East 17th St., New York City.
1902—Lloyd, Dss. Margaret, 28 Warrenton St., Boston, Mass.
1908—Lovell, Dss. Anne, 8 State St., Worcester, Mass.
1896—Lyon, Dss. Josephine, 80 Broadway, New Haven, Connecticut.
1927—McElvaine, Miss Helen, 618 S. Crawford St., Fort Scott, Kansas.
1916—McNulty, Dss. Susanne, St. Johnland, King's Park, Long Island.
1905—McRae, Mrs. Cameron (Sallie Woodward), American Church Mission, Shanghai, China.
1933—Maltby, Miss June, 70 East 3rd St., Corning, New York.
1916-17—Mansfield, Miss Mabel, Dante, Dickinson County, Virginia.
1935—Marden, Miss Evelyn, 45 Friendship St., Newport, Rhode Island.
1908—Massey, Dss. Charlotte, Balbalasang, via Zubuagan, Kalinga, Philippine Islands.
1932—Matz, Miss Esther, Moapa, Nevada.
1932—Mayer, Dss. Kate, 802 Broadway, New York City.
1930—Melville, Mrs. Freda, Christ Church Cathedral, Hartford, Connecticut.
1919—Memory, Mrs. Charles (Elizabeth Dailey), 456 Maplewood Avenue, Maplewood, New Jersey.
1914-15—(Spcl.)—Mills, Dss. Eliza., 419 West 110th St., New York City.
1922—Mockridge, Miss Elisabeth, 132 South 22nd St., Philadelphia, Pennsylvania.
1909—Moffett, Miss Mary, 88 Morningside Drive, New York City.
1923—Moore, Miss Lucille, Holy Trinity Church, Seaman Ave., New York City.
1932—Moore, Miss Winifred, 1017 Maffett St., Muskegon Heights, Michigan.
1921—Morrish, Mrs. F. D. (Olivia Gazzam), 200 Edgewood Drive, West Palm Beach, Florida.
1904—Moulson, Miss Laura, 76 Dartmouth St., Rochester, Pennsylvania.
1913-14—Munroe, Dr. Rose, 212 Beacon St., Boston, Mass.
1927—Nelson, Miss Cecilia, Pine Grove Hollow, Stanley, Virginia.
1927—Nevin, Miss Eleanor, 413 Seeley Rd., Syracuse, New York.
1925—Newton, Mrs. Horace, (Letitia Gest), 304 First St., Defiance, Ohio.
1908—Nicholas, Dss. Mabel, 125 DeKalb Ave., Brooklyn, New York.
1903—Nosler, Dss. Myrtle, 627 14th Ave., North, Seattle, Washington.
1932—Ormerod, Dss. Isabel, Munising, Michigan.
1902—Paine, Dss. Theodora, 265 Elmira St., Troy, Pennsylvania.
1924—Parker, Miss Eleanor, 30 Glen Road, Brookline, Massachusetts.
1921—Parsons, Dss. Ruth, 211 South Ashland Boulevard, Chicago, Illinois.
1935—Pattee, Miss Mary, 26 Hartford St., Newton Highlands, Mass.
1906—Patterson, Dss. Katrina, 248 Madison Road, Scarsdale, New York.
1895—Patterson, Dss. Mary, 12803 Gregory St., Blue Island, Illinois.
1916—Peatross, Mrs. L. Ashby (Dorothy Norton), 12 East Genesee St., Wellsville, New York.
1929—Pember, Miss Ruth, Delmar, New York.
1905—Phelps, Dss. Katherine, Box 23, Paso Robles, California.
1911—Pier, Miss Ella, Maple Hill, Upper Red Hook, New York.
1921-22—Pitcher, Dss. Caroline, 204 Ira Ave., San Antonio, Texas.
1921—Platt, Miss Florence, Cathedral of Saint Paul, Boston, Massachusetts.
1918—Podmore, Mrs. H. V. (Nina Ledbetter), 50 Bates St., Honolulu, Hawaii.
1922—Potter, Miss Alice King, Martha Memorial House, Troy, New York.
1897—Potter, Dss. Mary, 565 West Montecito Ave., Sierra Madre, Calif.
1932—Pray, Miss Martha, 330 South 13th St., Philadelphia, Penna.
1907—Radford, Dss. Bertha, 119 Harrison Ave., Lynchburg, Virginia.
1932—Ramsay, Dss. Lydia, 419 West 110th St., New York City.
1916—Ranger, Miss Margery, 215 East 73rd St., New York City.
1902—Ranson, Dss. Anna, Isoyama, Fukuda Mura, Fukushima Ken, Japan.
1933—Reed, Mrs. Virginia, Santee Mission, P. O. Star Route, Niobrara, Nebraska.
1917-18—(Spcl.)—Rich, Miss Louise, Old Synod Hall, 112th and Amsterdam Ave., New York City.
1934—Richardson, Miss Elisabeth, Church of the Ascension, West New Brighton, Staten Island, New York.
1931—Robinson, Miss Catherine, Christ Church, St. Paul and Chase St., Baltimore, Maryland.
1928—Robinson, Miss Olive, 116 First Ave., Alpena, Michigan.
1904—Routledge, Dss. Margaret, 542 South Boyle Ave., Los Angeles, California.
1912-13—(Spcl.)—Saunier, Miss Rylla, 75 High St., Ipswich, Massachusetts.
1912—Schodts, Dss. Louise, 30-43 36th St., Astoria, New York.
1928—Scott, Miss Erma, Church of the Advocate, 181st and Washington Ave., Bronx, New York City.
1934—Scott, Miss Ethel, St. Mary's Convalescent Home, 405 West 34th St., New York City.
1924—Searle, Dss. Clara, 419 West 110th St., New York City.
1911—Shepard, Dss. Mary, 134 Fourth East St., Salt Lake City, Utah.
1925—Sime, Dss. Eleanor, Nassau County Sanatorium, Farmingdale, Long Island.
1896—Smith, Dss. Edith, 61 Franklin St., Morristown, New Jersey.
1926—Smith, Deaconess Eleanor, St. Andrew's Cathedral, Honolulu, Hawaii.
1922—Smith, Mrs. Hollis (Anne Piper), American Church Mission, Shanghai, China.
1919-20—(Spcl.)—Smith, Mrs. Soren (Mary Bailey), Delsea Drive, R. F. D. Vineland, New Jersey.
1932—Snyder, Miss Eleanor, Box 285, Ancon, Canal Zone, Panama.
1914—Sprague, Miss Mabel, 32 Pennington Ave., Passaic, New Jersey.
1906—Stephenson, Dss. Julia, 24 George St., Cohoes, New York.
1905-06—(Spcl.)—Stewart, Miss Dora, 32 Hubbard St., Cambridge, Mass.
1933—Tarbox, Dss. Alys, 1147 15th St., N. W., Washington, D. C.
1933—Taylor, Miss Dorothy, 141 South Avenue, Syracuse, New York.
1923—Thomas, Mrs. F. W. (Helen Jarvis), Weaversville Road, Asheville, North Carolina.
1914—Thompson, Dss. Amy, 419 West 110th St., New York City.
1929—Trask, Dss. Elizabeth, Arcadia, Hope Valley, R. F. D., Rhode Island.
1931—Tucker, Dss. Anne, State Industrial Farm, Goochland, Virginia.
1925—Turley, Miss Marie, St. John's Church, Youngstown, Ohio.
1933—Viele, Miss Laetitia, Faraway Farm, Dale, New York.
1929—Waddington, Mrs. Sidney (Alys MacIntosh), St. Francis Mission, Upi, Cotabato, Philippine Islands.
1928—Weakley, Mrs. Everett (Mary Vanner), Pine Grove Hollow, Stanley, Virginia.
1909—West, Dss. Mary C., 70 Maple Ave., Morristown, New Jersey.
1925—Williams, Mrs. Charles (Phyllis Dickinson), 41 Lincoln Road, Albany, New York.
1911—Williams, Dss. Maria, Dante, Dickinson County, Virginia.
1934—Williams, Miss Rhoda, 15 High St., Beverly Farms, Mass.
1933—Wilson, Miss Janet, 310 East Erie St., Milwaukee, Wisconsin.
1899—Withers, Dss. Helen, Christ Church Hospital, Philadelphia, Pennsylvania.
1927—Woodruff, Miss Mabel, 18 West 25th St., New York City.
1906—Woodward, Dss. Clarine, 2525 Morris Ave., Fordham, New York City.
1924—Worster, Mrs. Matthew (Nancy Ambler), 343 Fairmount Ave., Jersey City, New Jersey.
1901—Yeo, Dss. Lillian, House of Mercy, Klingle Road, Washington, D. C.
1919—Young, Miss Anne, Grace Church, Hicks St., Brooklyn, New York.
1909—Young, Dss. Viola, 118 Midland Ave., Montclair, New Jersey.
1925—Zimmerman, Miss Virginia, 170 Remsen St., Brooklyn, New York.
When Schizophrenia Helps
There are forms of schizophrenic experience that can be positively and creatively constructive. Karl Menninger, in 1959, put it this way: "Some patients have a mental illness and then get well and then they get better! I mean they get better than they ever were... This is an extraordinary and little-realized truth."
A handful of psychiatrists have recognized the validity of this observation—Harry Stack Sullivan, John Perry, R. D. Laing and others. But most psychiatrists find it hard to regard the bizarre disorganization of schizophrenia as anything but ominous, and they see the crazy disturbances as behaviors to be done away with as quickly as possible. When this cannot be done, they prescribe huge doses of antipsychotic drugs.
But there is mounting evidence that some of the most profound schizophrenic disorganizations are preludes to impressive reorganization and personality growth—not so much breakdown as breakthrough. Kazimierz Dabrowski has called it "positive disintegration." It appears to be a natural reaction to severe stress, a spontaneous process into which persons may enter when their usual problem-solving techniques fail to solve such basic life crises as occupational or sexual inadequacy. If this natural process is interrupted by well-intended psychotherapy or by antipsychotic medication, the effect may be to detour the patient away from the acute schizophrenic episode, away from a process as natural and benign as fever. The effect can be disastrous—it can rob him of his natural problem-solving potential.
Make or Break. Anton Boisen was one of the first to recognize the potentially beneficial aspects of psychosis. Boisen was a psychologist and chaplain who went through several brief schizophrenic periods himself. Acute schizophrenic reactions, he wrote, are "not in themselves evils but problem-solving experiences. They are attempts at reorganization in which the entire personality, to its bottom-most depths, is aroused and its forces marshaled to meet the danger of personal failure and isolation.... The acute disturbances tend either to make or break. They may send the patient to the back wards, there to remain as a hopeless wreck, or they may send him back to the community in better shape than he had been for years."
As Boisen indicates, while some patients are likely to recover—even benefit—from their psychotic experiences, others may be severely disturbed for the rest of their lives. There has been extensive research in recent years concerning which patients are which; usually this has involved collecting a quantity of data about many schizophrenic patients, waiting to see which ones get better, then rechecking the data to see if the improved patients were in any way systematically different from the unimproved patients.
One of the most common findings is that the patient who improved had a sudden onset of symptoms; he typically went from a moderately effective lifestyle to severe psychosis in a period of perhaps a few days or weeks. Further, there was typically a precipitating event, some life-crisis that immediately preceded the break. On the other hand the schizophrenic who has been developing his symptoms over a period of years, gradually becoming more withdrawn and out of touch with reality, is more likely to remain in a disturbed condition for many years.
Death. There are other typical characteristics of the "problem-solving schizophrenic." A reaction to personal failure or guilt often starts with high anxiety as the patient searches for any possible way to repair his self-esteem. With increasing emotional turmoil, he takes a highly subjective orientation to the problem and becomes preoccupied, socially isolated and withdrawn. He feels despair and hopelessness. As Sullivan has noted, he may finally think "that he is dead, that this is the state after death; that he awaits resurrection or the salvation of his soul. Ancient myths of redemption and rebirth seem to appear." Ideas of death-rebirth, world catastrophe and cosmic importance are common.
"In many non-Western cultures the psychoticlike transitional ordeal is accepted—there is no social stigma for the initiate. In our culture, however, the schizophrenic must make his fantastic voyage alone, ashamed, in the hands of hospital-ward personnel whose purpose is to interrupt his schizophrenic trip."

The patient may regress to childish behavior. He may go so far as to simulate the womb by wrapping himself in wet sheets. He may become extremely withdrawn—not eating or drinking, not talking, not blowing his nose, staying in bed all day, perhaps with eyes and mouth tightly closed. He might rock back and forth with strange, rhythmic movements. Occasionally he may pass from his catatonic stupor into violent, random excitement. In this state he may hurt himself or others, but only by accident. He is not mad at anyone else. In fact, persistent outright aggression toward others is a bad sign. It is as if such a patient has aborted his schizophrenic trip, has taken the easy way out by blaming his troubles on someone else. Harry Stack Sullivan has vividly described the implications:
"This is an ominous development in that the schizophrenic state is taking on a paranoid coloring. If the suffering of the patient is markedly diminished thereby, we shall observe the evolution of a paranoid schizophrenic state. These conditions are of relatively much less favorable outcome. They tend to permanent distortions of the interpersonal relations...
"A paranoid systematization is, therefore, markedly beneficial to the peace of mind of the person chiefly concerned, and its achievement in the course of a schizophrenic disorder is so great an improvement in security that it is seldom relinquished.... It is for this reason that the paranoid development in a schizophrenic state has to be regarded as of bad omen."
Interference. Phenothiazine drugs—especially chlorpromazine—have made it possible to control the most difficult, craziest patients. But in certain individuals these drugs may interfere with recovery. In a recent study, Drs. Michael Goldstein, Lewis Judd and their colleagues at U.C.L.A. tested schizophrenic patients who had shown reasonably good psychological adjustments before they were hospitalized. The acute nonparanoid schizophrenic patients treated with chlorpromazine actually showed increases in thought disorder over a three-week period, while a similar group of patients, on placebos, showed decreases in thought disorder during the same period. This relationship did not hold in patients with the paranoid type of schizophrenic reaction.
Tranquilizers seem to reduce regressed and agitated schizophrenic behavior, and most psychiatrists take this as evidence of improvement. Unfortunately, regressed and disorganized behavior may be essential parts of schizophrenia's problem-solving process.
Several research studies have shown that chlorpromazine reduces the clarity of ordinary experience, and it disrupts a person's abilities to see alternatives and solve problems. It is no wonder then that in schizophrenic reactions that are essentially problem-solving processes, the use of chlorpromazine can make the psychosis worse.
Light. This type of schizophrenic reaction bears an interesting relationship to the phenomenon of suicide. Suicide is also a radical response to a life-crisis situation. The suicidal person, unable to die the ritual death that the acute nonparanoid schizophrenic does, actually removes himself completely from this entrapment.
There is fascinating research that relates suicide to the autokinesis test in which one sits in a darkened room and looks at a small spot of stationary light several yards away. After a few minutes in darkness, most persons report that the light is moving erratically. One explanation of this effect is that in darkness, in the absence of external references, we respond more to internal cues. Our eyes normally have a slight vibrating movement that we never notice, but in the darkness we are aware of the movement and conclude that the spot of light is doing it. Harold Voth and his colleagues found that persons who later commit or attempt suicide tend to see the light as stationary. In part this is because they are unable to respond to inner cues—their attention is primarily outside, on the external world. Conflict is not experienced as occurring within oneself but rather outside—between oneself and others. Such individuals find it very difficult to escape into fantasy where they might consider alternative solutions. This reduces the options available for mastering personal distress.
The important point here is that while certain patients with nonparanoid schizophrenia see more autokinetic movement than normal persons do, paranoid schizophrenics are similar to suicidal groups in that they report relatively little movement. As we noted, it is the paranoid schizophrenic who has aborted the natural schizophrenic experience by directing his attention outward.
Trips. Research has indicated several similarities between the schizophrenic trip and the psychedelic-drug trip, with LSD for example. First of all such tranquilizers as chlorpromazine can make a
bad trip worse, possibly in the same way that they interrupt the schizophrenic process. The development of paranoid ideas in a person under LSD is also ominous; they take him away from the ideal subjective orientation to the drug experience. We have also found that persons on either kind of journey have a more undifferentiated perceptual orientation than normal persons. For example, they respond to distracting stimuli, which causes them to perform poorly on reaction-time tasks and on complex perceptual tasks.
Further, acute schizophrenics and persons under the influence of psychedelic drugs are highly sensitive to stimuli. Sights and sounds are experienced as brilliant, intense, alive, rich, compelling. This acute sensitivity of schizophrenia has gone unnoticed until recently because it is very hard to test. Schizophrenics do not respond well to complex directions; they are flooded by so many stimuli, and so easily distracted by minor sights and noises, that on many sensitivity tests ("press this button when you see the light") they appear unable to perceive stimuli as well as normal persons can.
Only in recent studies have we learned that certain schizophrenics can detect lights and sounds that are too weak for normal persons to sense. We are beginning to accumulate evidence that supports the acute schizophrenic's description of his overaroused world. He is overwhelmed by stimulation. He has difficulty in focusing attention for very long. While he is expressing an idea, a whole series of complicating ideas may come to his mind. He may be blocked in the act of speaking, or may give up the struggle and go mute.
Apparently the mechanism that filters out nonessential stimuli for the rest of us—the humming of the refrigerator, the rustling of the leaves—has ceased to function in the acute schizophrenic. In this distressed individual, who is groping for any possible answer to a life-crisis dilemma, heightened awareness may allow him to see alternative perspectives for making sense out of the life-crisis situation.
Acute schizophrenic reactions, like fever, may be benign responses to the deeper trials of life, trials that the patient may never solve if the therapist encourages the paranoid escape or drugs him into a permanent state of psychic helplessness.
It may be that one day acute schizophrenics of certain types will not go to hospitals but will go instead to asylums or sanctuaries to grapple with their otherwise unsolvable life-crisis problems. One hopes that in this kind of environment the schizophrenic patient who emerges "weller than before" will be more the rule than the exception.
"In the dawning of the Age of Aquarius," says Julian Silverman (page 62), "the task for the behavioral scientist is to construct a definition of man which more fully appreciates his irrational nature. Dichotomies such as mind-body or well-sick have outlived their usefulness."
Silverman, who is 37, received a Ph.D. in psychology in 1962 from the University of Michigan. As research specialist with the California Department of Mental Hygiene, he has helped develop a new research project at Agnews State Hospital in San Jose, California. He is working on neurophysiological laboratory techniques for identifying schizophrenic reactions that are likely to be integrating and beneficial. Patients are supported in their regressive psychotic states, and half of them receive no anti-psychotic medication.
Silverman, who is Research Director of the Esalen Institute, has published many research and theoretical papers that deal with altered states of consciousness and with the physiological aspects of subjective experience. Silverman wrote his article while integrating his past and present research into his forthcoming book, The Value of Psychotic Experience, to be published by Science and Behavior Books, Inc.
Input (Continued from page 4.)
divided only on how much each of the two contributes in particular cases. The insight "both the genetic background and the environment in which those genes grow must be considered jointly" (italics by the authors) seems to be an argument of yesterday. The authors seem to be responding to Seneca's (1-64 A.D.) opinion that all useful behavior in animals is innate, rather than to contemporary behaviorists. 3) There is an inconsistency. On page 66 we read "rat-reared animals were capable of fighting." Page 67 then tells us "when mice are reared by rat-mothers the species-specific aggressive-behavior pattern is eliminated" (incidentally, there is not one but a number of behavior patterns). The last paragraph finally says "we may definitely conclude that species-specific behavior patterns... can be modified dramatically by appropriate social experience in early life."
I still would like to know what the authors considered to be present, what to be modified, and what to be eliminated. Does rat-rearing merely raise the threshold for fighting? Do rat-reared mice fight rats (change of addressee of aggression due to early experience)? Clear answers to these questions are the crux of the matter.
Dr. D. Müller-Schwarze
Assoc. Prof., Animal Behavior
Utah State University
Logan, Utah
Richelle's "Biological Clocks" [May] is most interesting, but an appreciation of some of the anthropological work by Paul Bohannon, Edmund R. Leach, and R. I. Pocock might have been useful.
I suppose it would have been either too facile or too trendy to have substituted the "clocked rat" illustration on Page 34 for an equally crucified figure of an urban human (take your choice, male or female)... but more unfortunately, to the point.
Grant McCall
Institute of Social Anthropology
Oxford, England
Where's P-T?
When in heaven's name will the magazine be sent to my new address? What is the matter with you people, or should I ask? I know darn well what the matter is!... Of course since you are now getting your higher rates, those who subscribed at the beginning can go jump in the lake I suppose. It is the same insanity everywhere, the greed for money overcomes any and all morality, and your brothers in the field have created this immorality. Morality does not only apply to sex, dumb dumbs, it also applies to your dealings with people in general, in all areas of life, or did this escape you?
Lily M. Leduc
New York City, N. Y.
ESP
Recently I had a dream in which I was searching for my young son. To my relief, I found him splashing around with some other children at the beach.
At the breakfast table, my husband, who had been awake longer than I had, said, "I had an odd dream—I dreamed that J. S. said that Jimmy had drowned." It was as if I had "rescued" our son. In waking life, neither of us is psychic. And not all our dreams correspond so closely.
I hope Ullman and Krippner ["ESP in the Night," June] run follow-up studies of husbands and wives; the data should be interesting.
Susan Forthman
Whales
I realized your magazine was taking up my (our) cause, and you gave me such a psychological lift that I'll be invigorated for a long time to keep up the "blue whale" campaign [June].
Tony Mallin
Chicago, Ill.
The recordings of whale songs are wonderful and terrible, fantastic and frightening—and they evoked a feeling I cannot describe effectively. I felt a great empathy for these magnificent creatures. I now know power and space and depth and something very close to doom in a new way.
Lillie Robinson
Virginia Beach, Va.
Over-Eager Volunteers
It was with a good deal of interest that I read your June article "When He Lends a Helping Hand, Bite It," by Ralph L. Rosnow. Regarding the paragraph on experiments on the social psychology of volunteers, I wondered if anyone had thought of testing them in a field completely unrelated to the immediate research in question, and consequently not so threatening. An eye examination is a case in point: I was really startled at my own "over-achieving" subconscious reaction during a very thorough testing of very normal eyes. It was a relaxed situation, but my constant reaction to a normal question was, "What is the right answer? What does he really want me to say? Is this a dumb response?" A constant reminder was necessary that only eyes were being tested: no rightness or wrongness involved, no ego strength needed, just a test of eyes.
(Mrs. David) Katharine Foreman
Flint, Mich.
Helen I. Safa
The Achieving Society. David McClelland. Van Nostrand, 1961, $8.75; Free Press, 1967, $2.95.
The Culture of Poverty. Oscar Lewis in *Scientific American*, Vol. 215, No. 4, pp. 19-25, October 1966.
The Experience of Change in Puerto Rico. Arnold S. Feldman and John M. Kendrick in *Howard Law Journal*, Vol. 15, No. 1, pp. 28-46, Fall 1968.
The Female-Based Household in Public Housing: A Case Study in Puerto Rico. Helen I. Safa in *Human Organization*, Vol. 24, No. 2, Summer 1965.
From Shanty Town to Public Housing: A Comparison of Family Structure in Two Urban Neighborhoods in San Juan, Puerto Rico. Helen I. Safa in *Caribbean Studies*, Vol. 4, No. 1, April 1964.
Puerto Rican Adaptations to the Urban Milieu. Helen I. Safa in *The City and Race*, Vol. V of the *Urban Affairs Annual Review*, Peter Orleans and Russell Ellis, eds. Sage Publications, in press, 1971.
The Social Isolation of the Urban Poor: Life in a Puerto Rican Shanty Town. Helen I. Safa in *Among the People: Encounters with the Urban Poor*, Irwin Deutscher and Elizabeth Thompson, eds. Basic, 1968, $10.00.
La Vida: A Puerto Rican Family in the Culture of Poverty. Oscar Lewis. Random, 1965, $10.00; paper, $2.95.
Daniel P. Moynihan
The Behavioral and Social Sciences: Outlook and Needs. A report by the National Academy of Sciences and the Social Science Research Council. Prentice-Hall, 1969, $7.95.
Beyond the Melting Pot. Daniel P. Moynihan and Nathan Glazer. M.I.T. Press, Second Edition, 1970, $10.00, paper, $1.95.
Knowledge into Action: Improving the Nation's Use of the Social Sciences. A report of the Special Commission on the Social Sciences of the National Science Board. National Science Foundation, 1969, available from the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402, $7.50.
Maximum Feasible Misunderstanding: Community Action in the War on Poverty. Daniel P. Moynihan. Free Press, 1969, $5.95; 1970, paper, $2.45.
The Professionalization of Reform. Daniel P. Moynihan in *The Public Interest*, Issue 1, pp. 6-16, Fall 1965.
Catherine Caldwell
The Concept of Equality of Educational Opportunity. James S. Coleman in *Harvard Educational Review*, Vol. 38, No. 1, pp. 7-22, Winter 1968.
Equal Schools or Equal Students. James S. Coleman in *The Public Interest*, Issue 4, pp. 70-75, Summer 1966.
Equality of Educational Opportunity. James S. Coleman et al. U.S. Government Printing Office. Available from Superintendent of Documents, Document No. F5S.238-38001, 1966, paper, $4.25.
Racial Isolation in Public Schools. A report of the U.S. Commission on Civil Rights. U.S. Government Printing Office. Available from Superintendent of Documents, Document No. CR1.2:Sch6/12/v.1.2 (Vol. 1 is the report, Vol. 2 the appendix), 1967, $1.00 each.
Sources of Resistance to the Coleman Report. Daniel P. Moynihan in *Harvard Educational Review*, Vol. 38, No. 1, pp. 23-36, Winter 1968.
George W. Albee
Careers in Mental Health. George W. Albee in *Encyclopedia of Mental Health*, Vol. I, Albert Deutsch and Helen Fishman, eds. Watts, 1968.
Conceptual Models and Manpower Requirements in Psychology. George W. Albee in *American Psychologist*, Vol. 23, No. 5, pp. 517-520, 1968. Also in *Community Psychology and Community Mental Health: Introductory Readings*. P. E. Cook, ed. Holden-Day, 1970, tentative price $8.50.
Notes Toward a Position Paper Opposing Psychodiagnosis. George W. Albee in *New Approaches to Personality Classification*. A. R. Maher, ed. Columbia University Press, 1970, $12.50.
The Relation of Conceptual Models of Disturbed Behavior to Institutional and Manpower Requirements. George W. Albee in *Manpower for Mental Health*. F. N. Arnhold, E. A. Rubinstein and J. C. Spelsman, eds. Aldine, 1969, $6.95.
Ralph L. Beals
The Behavioral Sciences and the Federal Government. National Academy of Sciences/National Research Council, Publication 1680, 1968, $3.00.
Cross Cultural Research and Government Policy. Ralph L. Beals in *Bulletin of the Atomic Scientists*, Vol. 23, pp. 18-24, October 1967.
International Research Problems in Anthropology. Ralph L. Beals in *Current Anthropology*, Vol. 8, pp. 470-475, December 1967.
Politics of Social Research. Ralph L. Beals. Aldine, 1969, $6.95.
The Rise and Fall of Project Camelot. Irving Louis Horowitz, ed. M.I.T. Press, 1967, $12.50, paper, $3.95.
The Sociologist as Partisan: Sociology and the Welfare State. Alvin W. Gouldner in *American Sociologist*, Vol. 3, No. 2, pp. 103-116, May 1968.
David Premack
The Acquisition of Language in Infant and Child. Martin D. S. Braine in *The Learning of Language*, Carroll E. Reed, ed. Appleton-Century, in press 1970, tentative price $9.00.
Aspects of the Theory of Syntax. Noam Chomsky. M.I.T. Press, 1965, $7.50; 1969, paper, $2.95.
Behavior of Nonhuman Primates, Vol. III. Alan Schrier and Fred Stollnitz, eds. Academic Press, in press 1971.
Biological Foundations of Language. Eric Lenneberg. Wiley, 1967, $10.00.
A Functional Analysis of Language. David Premack in *Journal of the Experimental Analysis of Behavior*, Vol. 13, No. 4, 1970.
The Genesis of Language. Frank Smith and George A. Miller, eds. M.I.T. Press, 1966, $10.00.
An Inquiry Into Meaning and Truth. Bertrand Russell. Humanities, 1966, $5.50; Penguin, 1963, paper, $1.25.
Psycholinguistics. Roger Brown. Free Press, 1970, $8.95.
Julian Silverman
Conceptions of Modern Psychiatry. Harry S. Sullivan. Norton, 1963, $7.50, paper, $2.95.
Exploration of the Inner World; A Study of Mental Disorder and Religious Experience. Anton Boisen. Harper, 1936.
A Paradigm for the Study of Altered States of Consciousness. Julian Silverman in *British Journal of Psychiatry*, Vol. 114, pp. 1201-1218, 1968.
Politics of Experience. Ronald D. Laing. Pantheon, 1967, $3.95; Ballantine, paper, $.95.
Positive Disintegration. Kazimierz Dabrowski, Jason Aronson, ed. Little, Brown, 1964, $5.00, paper, $1.95.
Psychophysiological and Behavioral Effects of Phenothiazine Administration in Acute Schizophrenics as a Function of Premorbid Status. Michael J. Goldstein, Lewis L. Judd, Elliot H. Rodnick and Anthony LaPolla in *Journal of Psychiatric Research*, Vol. 6, pp. 271-287, May 1969.
Reconstitutive Process in the Psychopathology of the Self. John W. Perry in *Annals of the New York Academy of Sciences*, Vol. 96, pp. 853-876, 1962.
Shamans and Acute Schizophrenia. Julian Silverman in *American Anthropologist*, Vol. 69, pp. 21-31, February 1967.
Suicidal Solution as a Function of Ego-Closeness-Ego-Distance. Harold M. Voth, Albert C. Voth and Robert Canfield in *Archives of General Psychiatry*, Vol. 21, pp. 636-645, 1969.
ZONING
Senior Independent Living Center
Ref. 783-749
C-33C-04
October 19, 2004
Atlantic Senior Development, LLC
P. O. Box 54
Somerset, VA 22972-0054
Re: Conditional Rezoning Case C-33C-04
Dear Sir:
The Board of Supervisors at its meeting on October 12, 2004, granted your request to conditionally rezone property from B-3C Business District (Conditional), R-5 General Residence District and C-1 Conservation District to R-5C General Residence District (Conditional), on part of Parcel 783-748-5077, described as follows:
To find the true point and place of beginning for parcels 7A and 7B, commence at a ½" rod set in the west line of U.S. Route 1 (Brook Road), which point is 356.24' north of the northern end of Wilmer Avenue; thence leaving the west line of said road along a curve to the left with a radius of 45', an arc length of 76.38', a chord bearing of N63°22'3"W and a chord distance of 67.54' to a 5/8" rod found; thence along a curve to the left having a radius of 320.29', an arc length of 99.89', a chord bearing of S59°4'14"W and a chord distance of 99.48' to a ½" rod set; thence S50°8'10"W a distance of 67.67' to a ½" rod set; thence along a curve to the left having a radius of 1,848.68', an arc length of 134.89', a chord bearing of S48°2'45"W and a chord distance of 134.86' to a ½" rod set; thence along a curve to the right having a radius of 275', an arc length of 194.61', a chord bearing of S66°15'4"W and a chord distance of 190.57' to a ½" rod set; thence along a curve to the right having a radius of 275', an arc length of 106.60', a chord bearing of N82°22'16"W and a chord distance of 105.93' to a point; thence N71°16'00"W 97.76' to a point; thence N71°16'00"W 16.23' to a point; thence along a curve to the left having a radius of 255.33', an arc length of 65.06' and a chord bearing of N78°33'59"W and a chord distance of 64.88' to a point; thence N85°51'58"W 186.33' to a point; thence N2°15'12"E 1096.29' to a point; thence along a curve to the left having a radius of 190', an arc length of 11.23', a chord bearing of N00°33'35"E and a chord distance of 11.23' to a point; thence N01°08'03"W 150.80' to a point; thence along a curve to the right having a radius of 210', an arc length of 12.42', a chord bearing of N00°33'35"E and a chord distance of 12.41' to a point; thence
N2°15'12"E 546.96' to a point; thence N87°44'48"W 69.78' to a point; thence N30°57'44"W 85.50' to a point; thence N58°45'51"W 65.26' to the true point and place of beginning.
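As a side note on the arithmetic of the curve calls above: for a circular arc of radius R and arc length L, the chord is c = 2R · sin(L / (2R)). A short Python check (values copied from the description; the script and its function name are illustrative only and form no part of the legal record) confirms the recited chord distances:

```python
import math

def chord(radius_ft: float, arc_ft: float) -> float:
    """Chord subtended by a circular arc: c = 2R * sin(L / (2R))."""
    return 2.0 * radius_ft * math.sin(arc_ft / (2.0 * radius_ft))

# (radius, arc length, recited chord) triples from the description above
curves = [(45.0, 76.38, 67.54), (320.29, 99.89, 99.48),
          (1848.68, 134.89, 134.86), (275.0, 194.61, 190.57)]
for r, arc, recited in curves:
    print(f"R = {r:8.2f}'  arc = {arc:7.2f}'  computed chord = "
          f"{chord(r, arc):7.2f}'  (recited: {recited}')")
```

Each computed chord agrees with the recited distance to within a hundredth of a foot.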
**Tract One – Parcel 7A**
Commencing at the true point and place of beginning as referenced above, thence N30°17'13"E 144.82' to a point; thence N06°26'08"E 402.83' to a point; thence N28°11'54"E 230.01' to a point; thence S82°19'07"E 208.48' to a point; thence S07°40'35"W 211.92' to a point; thence S82°19'23"E 318.96' to a point; thence S07°40'37"W 205.67' to a point; thence S82°19'23"E 127.67' to a point; thence S07°40'37"W 60' to a point; thence S52°21'06"W 115.46' to a point; thence S78°53'46"W 557.92' to a point; thence S02°15'12"W 99.22' to a point; thence N87°44'48"W 69.78' to a point; thence N30°57'44"W 85.50' to a point; thence N58°45'51"W 65.26' to the true point and place of beginning.
**Tract Two – Parcel 7B**
Commencing at the true point and place of beginning as referenced above, thence N58°45'51"W 206.29' to a point; thence N13°25'25"E 612'± to a point; thence 107'± along the center line of a creek known as Upham Brook; thence N9°15'10"E 290'± to a point; thence S67°43'50"E 39.38' to a point; thence S2°15'12"W 80'± to a point; thence along the center line of a creek known as Upham Brook as it meanders, approximated by a tie line going N86°36'23"E 241.25' to a point; thence S70°00'00"E 268.47' to a point; thence S62°55'55"E 606.24' to a point; which ends the tie line; thence S7°40'37"W 110'± to a point; thence N82°19'23"W 207.29' to a point; thence S07°40'37"W 180' to a point; thence N82°19'23"W 526.29' to a point; thence N07°40'35"E 211.92' to a point; thence N82°19'07"W 208.48' to a point; thence S28°11'54"W 230.01' to a point; thence S06°26'08"W 402.83' to a point; thence S30°17'13"W 144.82' to the true point and place of beginning.
The Board of Supervisors accepted the following proffered conditions, dated October 12, 2004, which further regulate the above described property in addition to all applicable provisions of Chapter 24, Code of Henrico (Zoning Ordinance):
1. **Use and Density.** The property shall be used for an age-restricted elderly living facility which shall contain no more than 240 residential living units. No other principal uses allowed in the R-5 use category shall be permitted.
2. **Age Restriction.** In accordance with the regulations promulgated by the United States Department of Housing and Urban Development ("HUD"), all residential living units will be restricted to "housing for older persons" 62 years of age and above, as provided in 42 U.S.C. §3607 and 24 C.F.R. 100.303.
3. **Enforcement of Age Restrictions**. In order to enforce the age restrictions, the developer shall:
a. Record a Declaration of Restrictions in accordance with HUD regulations to restrict the use of the property on which the building(s) are constructed to housing for older persons 62 years of age and above, as provided in 42 U.S.C. §3607 and 24 C.F.R. 100.303.
b. Impose in any residential lease the following restrictions (which may be in these words or words of like purport):
*All permanent residents of any individual unit must be at least 62 years of age. No person under the age of 62 may reside in any unit. No person under the age of 62 may stay with a resident for more than seven (7) consecutive days or, in the aggregate, for more than thirty (30) days in any consecutive twelve (12) month period. Lessee(s) acknowledge that violation of this provision is a material default and constitutes grounds for immediate termination of the lease.*
4. **Buildings: Phased Development**. The development will consist of a maximum of two buildings. Each building will be four (4) stories tall and will contain a maximum of 120 residential living units. The developer shall have the right to develop the property in two phases as indicated on the conceptual plan attached hereto as Exhibit A, dated May 12, 2004 (see case file).
5. **Amenities**. Each building shall offer the following amenities or provide the following areas for use by its residents:
a. **Leisure; Exercise and General Purpose Rooms**. Each building will contain multiple areas where residents can engage in various activities including reading, watching television and using computers. Additionally, each building will have exercise rooms or areas where residents can use fitness equipment or participate in exercise programs established for the residents.
b. **Laundry and Sitting Areas**. Each building will contain laundry facilities and a separate sitting area on at least three floors.
c. **Pull cords**. Each individual unit will have an emergency “pull cord” which will be connected to the front desk or other monitoring system.
d. Storage rooms and individual storage lockers will be available to the residents at an additional charge.
e. Each building shall be wired to provide an electrical connection to permit an emergency generator to be used for running emergency lighting and one elevator.
6. **Conceptual Site Plan.** The Property shall be developed substantially similar to the conceptual plan attached hereto as Exhibit A (see case file), dated May 12, 2004 (the “Conceptual Plan”) which layout plan is conceptual in nature and may vary in detail as approved by the Planning Commission (which shall take into consideration changes designed to accommodate environmental, drainage and topographical conditions, the need for accessory or maintenance buildings, as well as the requirements imposed by various County departments and agencies) at the time of Plan of Development review. Except for utility facilities as approved at the time of Plan of Development review, no improvements shall be made in that portion of the property currently zoned C-1 (as shown on the Conceptual Plan) and such area shall remain in its natural state unless a different landscaping plan is approved at the time of Plan of Development review.
7. **Utilities.** Except for junction boxes, meters and existing overhead utility lines, all new utility lines shall be underground. All junction boxes and meters located at ground level shall be screened.
8. **Trash receptacles.** Trash receptacles, other than convenience cans, shall be screened from public view at ground level at the property line of the property in a manner approved at the time of Plan of Development review. All dumpster enclosures shall comply with the multifamily development standards. Concrete pavement shall be used where the refuse container pad and apron are located.
9. **Exterior Materials.** The exposed portion of each exterior wall surface (front, rear and sides) of any buildings on the property intended for occupancy by persons shall comply with the requirements for the architectural treatment and materials specified herein. All buildings located on the property intended for use for occupancy by persons shall have exposed exterior walls (above finished grade and exclusive of rooftop screening materials for mechanical equipment, architectural features, doors and windows) of face brick, glass, exterior insulating finishing systems (E.I.F.S.), cementitious, composite-type or vinyl siding, or combination of the foregoing, unless different architectural treatment and/or materials are specifically approved with respect to the exposed portion of any such wall at the time of Plan of Development. The architectural elevation of the front façade of the buildings shown on the Conceptual Plan shall be in substantial conformity with the elevations attached to these proffers as Exhibit B (see case file) unless otherwise approved at the time of Plan of Development review. The
architectural elevations of the rear and side façade of each building shown on the Conceptual Plan shall be in substantial conformity with the elevations attached to these proffers as Exhibit C (see case file) unless otherwise approved by the Director of Planning or Planning Commission at the time of Plan of Development; except that the side façade of the northernmost building (i.e., its southern face) shall be in substantial conformity with the front façade elevations attached to these proffers as Exhibit B (see case file) unless otherwise approved by the Director of Planning or Planning Commission at the time of Plan of Development.
10. **HVAC.** Heating and air conditioning equipment on the property shall be screened in accordance with the multifamily development standards in a manner approved at the time of Plan of Development review.
11. **Storm Water Management Ponds.** If the storm water management ponds for the property are wet ponds, they shall be aerated to minimize the risk of West Nile Virus and shall be approved by the Director of Public Works. At least one pond shall be designed as an attractive water feature (which may include a fountain) and an amenity to the project and shall be used for water-oriented decks, walking trails and/or seating (benches or gazebos) appropriate for such a water feature. The decks, walking trails and/or seating shall be located around Relocated Detention Pond #1 in the areas shown on the conceptual plan attached hereto as Exhibit A (see case file). In any case, any storm water management pond located on the property shall be landscaped as approved at the time of any Plan of Development review.
12. **Pedestrian Circulation.** Pedestrian walkways shall be provided along at least one side of major circulation driveways. These walkways shall be hard surface sidewalks with a width of at least five (5) feet.
13. **Landscaping/Street Trees.** There shall be a landscaping buffer between the proposed development and adjacent commercially-zoned properties on the southern and eastern boundaries of the property. Unless otherwise approved at the time of Plan of Development review, the landscaping buffer shall be consistent with that transitional buffer category referenced as "Transitional buffer 25" by the Henrico Zoning Ordinance but shall allow for driveway/roadway and/or utility crossings. Street trees shall be provided at an average of fifty (50) foot intervals along the circulation roadways internal to the property. Such trees shall be a minimum of 2½-inch caliper and eight (8) feet in height.
14. **Parking Lot Lighting.** Parking lot lighting shall be produced from concealed sources of light and the lighting standards shall not exceed twenty-five (25) feet in height and shall be positioned in such a manner as to minimize the impact of such lighting off site unless otherwise approved at the time of Plan of Development review.
15. **Fencing.** Any fencing required by the approved Plan of Development shall be similar in appearance to other fences in the vicinity (i.e., aluminum with wrought iron appearance) unless otherwise approved at the time of Plan of Development review.
16. **Floodplain Conservation.** The applicant agrees that following final Plan of Development approval, it will file an application seeking to rezone those areas of the Property that are within the 100-year floodplain to C-1 Conservation area. Such rezoning shall have no effect on those areas within the floodplain that are being used for utility lines and/or structures or for drainage purposes (including the enhancing and expanding of any storm water management ponds), or any improvements related thereto.
The Planning Office has been advised of the action of the Board of Supervisors and will revise its records and place a copy of the accepted proffered conditions in the Conditional Zoning Index.
Sincerely,
[Signature]
Virgil R. Hazelett, P.E.
County Manager
pc: Tetra Associates, LLC
Mr. J. Thomas O'Brien, Jr., Esquire
Director, Real Estate Assessment
Conditional Zoning Index
Dr. Penny Blumenthal – Director, Research and Planning
[Exhibit A (case C-33C-04): conceptual site plan, "Brook Run Shopping Center - Layout and Drainage, Parcel 7A & 7B - Elderly Independent Living Complex (Age 62+)," prepared by Timmons Group.]
[Exhibit B (case C-33C-04, sheet A2.01): elevations, "Brook Run Senior Independent Living, Henrico County, Virginia," Edward H. Winks / James D. Snowa, Architects P.C., Richmond, VA, © 2003, dated 10-18-2004; note on drawing: brown on building represents brick.]
Roth, A.E. and Rothblum, U.,
"Risk Aversion and Nash's Solution for Bargaining Games with Risky Outcomes,"
*Econometrica*, 50, 1982, 639-647.
The copyright to this article is held by the Econometric Society, http://www.econometricsociety.org. It may be downloaded, printed and reproduced only for personal or classroom use. Absolutely no downloading or copying may be done for, or on behalf of, any for-profit commercial firm or other commercial purpose without the explicit permission of the Econometric Society. For this purpose, contact Julie P. Gordon, Executive Director of the Society, at: firstname.lastname@example.org.
RISK AVERSION AND NASH'S SOLUTION FOR BARGAINING GAMES WITH RISKY OUTCOMES
BY ALVIN E. ROTH AND URIEL G. ROTHBLUM\(^1\)
Recent results have shown that, for bargaining over the distribution of commodities, or other riskless outcomes, Nash's solution predicts that risk aversion is a disadvantage in bargaining. Here we consider bargaining games which may concern risky outcomes as well as riskless outcomes, and we demonstrate that, in such games, risk aversion need not always be a disadvantage in bargaining. Intuitively, for bargaining games in which potential agreements involve lotteries which have a positive probability of leaving one of the players worse off than if a disagreement had occurred, the more risk averse a player, the better the terms of the agreement which he must be offered in order to induce him to reach an agreement, and to compensate him for the risk involved. For bargaining games whose disagreement outcome involves no uncertainty, we characterize when risk aversion is advantageous, disadvantageous, or irrelevant from the point of view of Nash's solution.
1. INTRODUCTION
Several investigators have considered how risk aversion influences the outcome of bargaining, as modelled by Nash's model of bargaining, and related models. Loosely speaking, Kannai [3] noted that when bargaining concerns distribution of a divisible commodity between two risk averse individuals, then Nash's solution assigns a larger share of the commodity to a bargainer as his utility function becomes less risk averse. Thus, risk aversion is a disadvantage in this situation, according to Nash's model. Kihlstrom, Roth, and Schmeidler [5] and Roth [11] generalized this observation to the case where the bargaining concerns selecting a single outcome from a set of riskless outcomes on which the two bargainers each have concave utility functions. Risk aversion is again a disadvantage. This has been elaborated by Sobel [14], who considers the case of bargaining over the distribution of several divisible commodities. Thomson [15] has independently reported related results. All these results find risk aversion to be a disadvantage in bargaining over a set of riskless outcomes. This intuitively plausible relationship between an individual's bargaining ability and his propensity for risk-taking has been established only for bargaining situations whose potential outcomes involve no risk.
This paper concerns the more general case, in which bargaining may be over risky as well as riskless outcomes (however, we consider only the case in which the disagreement outcome is riskless). In some cases, risk aversion continues to be a disadvantage in bargaining; in some cases, it has no influence; and in some cases, risk aversion turns out to be an advantage. Intuitively, for bargaining games in which potential agreements involve lotteries having a positive probability of leaving one of the players worse off than if a disagreement had occurred, the more risk averse a player, the better the terms of the agreement which he must be offered in order to induce him to reach an agreement, and to compensate him for the risk involved.
\(^1\)It is a pleasure to acknowledge the stimulating conversations which we have had on this topic with Robert Aumann, Richard Kihlstrom, Stephen Ross, and David Schmeidler. The present form of the paper also reflects the comments of an anonymous referee. This work was supported by National Science Foundation Grant No. SOC 78-09928 to the University of Illinois and No. ENG-78-25182 to Yale University.
Nash’s model of bargaining is reviewed in Section 2, together with the previous results concerning risk aversion. Section 3 studies a class of games introduced in Roth and Malouf [12], from which examples can be drawn in which risk aversion is disadvantageous, irrelevant, or advantageous. The general case of games whose disagreement outcome is certain is then considered, and the effect of risk aversion in arbitrary games is characterized.
2. PREVIOUS RESULTS
Following Nash, we consider two-player bargaining games defined by a pair \((S, d)\) where \(d\) is a point in the plane, and \(S\) is a compact convex subset of the plane containing \(d\) and at least one point \(x\) such that \(x > d\). The interpretation is that \(S\) is the set of feasible expected utility payoffs to the players, any one of which will result if agreed to by both players. If no agreement is reached, the disagreement point \(d\) results. Let \(P(S)\) be the Pareto optimal subset of \(S\).
Nash proposed that bargaining between rational players be modelled by a function called a solution, which selects a feasible outcome for every bargaining game. If \(B\) denotes the class of all two-player bargaining games, a solution is a function \(f: B \rightarrow R^2\) such that \(f(S, d)\) is in \(S\). Nash also proposed that a solution should have the following properties: Pareto optimality, symmetry, independence of irrelevant alternatives, and independence of equivalent utility representations. Nash proved the following.
**Theorem 1:** There is a unique solution which possesses these four properties. It is the solution \(F\) defined by \(F(S, d) = x\) such that \(x > d\) and
\[
(x_1 - d_1)(x_2 - d_2) > (y_1 - d_1)(y_2 - d_2)
\]
for all \(y\) in \(S\) such that \(y \neq x\) and \(y > d\).
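To make Theorem 1 concrete, here is a minimal Python sketch (ours, not from the paper; the feasible set, the disagreement point, and the function name `nash_solution` are hypothetical) that recovers \(F(S, d)\) by brute force over a discretized Pareto frontier:

```python
def nash_solution(S, d):
    """Brute-force Nash solution: the feasible point maximizing the Nash
    product (x1 - d1)(x2 - d2) among points strictly dominating d."""
    candidates = [x for x in S if x[0] > d[0] and x[1] > d[1]]
    return max(candidates, key=lambda x: (x[0] - d[0]) * (x[1] - d[1]))

# Hypothetical symmetric game: Pareto frontier x1 + x2 = 1, disagreement (0, 0)
S = [(i / 1000, 1 - i / 1000) for i in range(1001)]
d = (0.0, 0.0)
print(nash_solution(S, d))   # (0.5, 0.5): the equal split
```

By symmetry (Property 2) the answer must be the equal split, which the brute-force search confirms.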
---
\(^2\)We use the notation \(x > d\) to mean that \(x_1 > d_1\) and \(x_2 > d_2\). Similarly, \(x \geq d\) will mean \(x_1 \geq d_1\) and \(x_2 \geq d_2\).
\(^3\)Property 1 (Pareto Optimality): If \(f(S, d) = x\) and \(y \geq x\), then either \(y = x\) or \(y \not\in S\).
Property 2 (Symmetry): If \((S, d)\) is a symmetric game (i.e., if \((x_1, x_2) \in S\) implies \((x_2, x_1) \in S\) and if \(d_1 = d_2\)), then \(f_1(S, d) = f_2(S, d)\).
Property 3 (Independence of Irrelevant Alternatives): If \((S, d)\) and \((T, d)\) are bargaining games such that \(T\) contains \(S\), and if \(f(T, d) \in S\), then \(f(S, d) = f(T, d)\).
Property 4 (Independence of Equivalent Utility Representations): If \((S, d)\) and \((\hat{S}, \hat{d})\) are bargaining games such that
\[
\hat{S} = \{(a_1 x_1 + b_1, a_2 x_2 + b_2) | (x_1, x_2) \in S\}
\]
and
\[
\hat{d} = (a_1 d_1 + b_1, a_2 d_2 + b_2)
\]
where \(a_1, a_2, b_1,\) and \(b_2\) are numbers such that \(a_1\) and \(a_2 > 0\), then
\[
f(S, d) = (a_1 f_1(S, d) + b_1, a_2 f_2(S, d) + b_2).
\]
These properties have been discussed amply elsewhere (cf. Nash [7], Luce and Raiffa [6], Harsanyi [2], Roth [9, 11]).
The above description follows the usual custom in describing bargaining games solely in terms of the feasible utility payoffs available to the players, without specifying the particular bargains which yield those utilities. To consider the effects of risk aversion, we need to consider the alternatives over which bargaining is conducted.
One approach is to consider each game \((S, d)\) as arising from bargaining over the set \(L\) of all finite lotteries over some set of certain alternatives \(C\) contained in \(R^n\), by individuals with (arbitrary) utility functions \(u_1\) and \(u_2\). (Denote \(u = (u_1, u_2)\).) Then the feasible set of utility payoffs is the convex set
\[
(1) \quad S = \{(u_1(l), u_2(l)) \mid l \text{ is an element of } L\},
\]
and the disagreement point \(d\) is the point
\[
(2) \quad d = (u_1(\bar{c}), u_2(\bar{c}))
\]
where \(\bar{c} \in C\) is the (deterministic) alternative which results in the case of disagreement. An (extended) bargaining model is a quintuplet \((S, d, C, \bar{c}, u)\) where \(S\) and \(d\) are defined by (1) and (2) and \((S, d) \in B\). Such a bargaining model is deterministic if \(S = \{u(c) \mid c \in C\}\), i.e., if every payoff can be achieved by a deterministic outcome.
Now consider the effect of replacing one of the players, say player 2, in a bargaining model \((S, d, C, \bar{c}, u)\) with a more risk averse player. Let \(u_2 = w\), and let \(\hat{w}\) be a more risk averse utility function than \(w\), i.e., \(\hat{w}(c) = k(w(c))\) for all \(c\) in \(C\), where \(k\) is an increasing,\(^4\) concave function (c.f. Arrow [1], Pratt [8], or Kihlstrom and Mirman [4]). Consider the bargaining model \((\hat{S}, \hat{d}, C, \bar{c}, u)\) derived from the original one by replacing individual\(^5\) \(w\) with the more risk averse individual \(\hat{w}\). We can state the following theorem (c.f. Roth [11, Theorem 5], or Kihlstrom, Roth, and Schmeidler [5]).
**Theorem 2:** In deterministic bargaining models, the utility which Nash’s solution assigns to a player increases as his opponent becomes more risk averse. That is, \(F_1(\hat{S}, \hat{d}) \geq F_1(S, d)\), where \((\hat{S}, \hat{d})\) is obtained from \((S, d)\) by replacing player 2 with a more risk averse player.
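A quick numerical illustration of Theorem 2 (our hypothetical example, not the authors'): let the players split a unit pie with \(u_1(x) = x\) and \(u_2(y) = y^r\), where a smaller exponent \(r \in (0, 1]\) makes player 2 more risk averse. Maximizing the Nash product \(x(1-x)^r\) gives \(F_1 = 1/(1+r)\), so player 1's share rises as his opponent's risk aversion grows:

```python
def player1_share(r: float) -> float:
    """Nash split of a unit pie with u1(x) = x and u2(y) = y**r: maximizing
    the Nash product x * (1 - x)**r gives x = 1 / (1 + r)."""
    return 1.0 / (1.0 + r)

for r in (1.0, 0.5, 0.25):          # player 2 grows more risk averse as r falls
    print(f"r = {r:4.2f}: player 1's share = {player1_share(r):.3f}")
# r = 1.00 -> 0.500;  r = 0.50 -> 0.667;  r = 0.25 -> 0.800
```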
In Roth [10] it was shown that Nash’s solution could be interpreted as the utility function for a certain kind of individual, reflecting his preferences for bargaining in different games. Interpreted in this way, Theorem 2 states that such a player prefers to bargain against the more risk averse of any pair of possible
---
\(^4\)We use the word “increasing” to denote a function such that \(k(a) > k(b)\) if \(a > b\). If the first inequality need not be strict, the function will be called nondecreasing.
\(^5\)Since an individual is represented in this model only by his utility function, an individual whose utility function is \(w\) will sometimes be referred to as individual \(w\).
opponents. Another interpretation of this result can be obtained by looking at the second player's utility. When bargaining against a given player 1 over a fixed set of outcomes, Nash's solution predicts that a less risk averse bargainer $w$ obtains a more desirable outcome than does a more risk averse bargainer $\hat{w}$, in terms of the common preferences of both $w$ and $\hat{w}$.
Theorem 2 states that Nash's solution $F$ has a plausible sensitivity to risk posture, in deterministic models. We call an arbitrary solution $f$ risk sensitive if it satisfies the conclusion of Theorem 2 for all deterministic bargaining models. It has been established (c.f. Roth [12, Theorem 6]) that if a solution $f$ is both Pareto optimal and risk sensitive, then $f$ is independent of equivalent utility representations. Thus risk sensitivity can replace independence of equivalent utility representations in a characterization of Nash's solution, or of any solution which is both risk sensitive and Pareto optimal. Theorem 2 can in fact be proved for any bargaining games in which the disagreement point and all of the Pareto optimal payoffs can be achieved by riskless events.
In the following sections we will see that this simple relationship between risk aversion and Nash's solution fails to carry over to the case of games with Pareto optimal payoffs which can only be achieved by lotteries.
3. RISK AVERSION IN A SIMPLE FAMILY OF GAMES WITH RISKY OUTCOMES
Consider the family of bargaining models $(S, d, C, \bar{c}, u)$ where $C$ contains exactly three elements, $a^1$, $a^2$, $\bar{c}$, where $\bar{c}$ is the disagreement outcome, and $a^i$ is the outcome most preferred by player $i$. Since $(S, d) \in B$, some lottery between $a^1$ and $a^2$ is preferred by both players to the disagreement outcome $\bar{c}$. The set $S$ equals the convex hull of the three points $d = u(\bar{c})$, $u(a^1)$, and $u(a^2)$, and the Pareto set $P(S)$ equals the line segment joining the latter two points. Only the endpoints of $P(S)$ can be achieved by riskless outcomes; all other Pareto optimal points are achieved only by lotteries.
The effect of risk aversion in games of this type depends on the position of the disagreement point. Let $(S, d)$ be a game derived from a three-element set $C$ as described above, with $u_2 = w$, and let $(\hat{S}, \hat{d})$ be a game obtained by replacing player 2 with a more risk averse player, with utility function $u_2 = \hat{w}$ such that $\hat{w}(c) = k(w(c))$ for all $c$ in $C$, where $k$ is an increasing concave function. Then we have the following parallel to Theorem 2.
**Theorem 3:** (i) If $w(\bar{c}) \leq w(a^1)$, then $F_1(\hat{S}, \hat{d}) \geq F_1(S, d)$. (ii) If $w(\bar{c}) \geq w(a^1)$, then $F_1(\hat{S}, \hat{d}) \leq F_1(S, d)$.
**Proof:** Since $F$ is independent of equivalent utility representations, it is sufficient to prove the theorem for games with $u_1$, $w$, and $\hat{w}$ normalized so $u_1(a^1) = w(a^2) = \hat{w}(a^2) = 1$, and $u_1(a^2) = w(a^1) = \hat{w}(a^1) = 0$. For an arbitrary
game \((T, d)\) normalized in this way,
\[
(3) \quad F(T, d) = \begin{cases}
(0, 1) & \text{if } d_1 - d_2 \leq -1, \\
\left( \frac{1 + d_1 - d_2}{2}, \frac{1 - d_1 + d_2}{2} \right) & \text{if } -1 \leq d_1 - d_2 \leq 1, \\
(1, 0) & \text{if } d_1 - d_2 \geq 1.
\end{cases}
\]
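(A one-line derivation, not spelled out in the original: on the normalized Pareto frontier \(x_2 = 1 - x_1\), the Nash product is \(g(x_1) = (x_1 - d_1)(1 - x_1 - d_2)\), and
\[
g'(x_1) = (1 - x_1 - d_2) - (x_1 - d_1) = 1 + d_1 - d_2 - 2x_1 = 0 \quad \Longrightarrow \quad x_1 = \frac{1 + d_1 - d_2}{2},
\]
which, truncated to the feasible interval \([0, 1]\), gives the three cases of (3).)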
In case (i), \(d_2 = w(\bar{c}) \leq w(a^1) = 0\), and since \(\hat{w}\) is a concave transformation of \(w\) which keeps \(\hat{w}(a^1) = w(a^1) = 0\) and \(\hat{w}(a^2) = w(a^2) = 1\), it follows that \(\hat{d}_2 = \hat{w}(\bar{c}) \leq w(\bar{c}) = d_2\). Equation (3) therefore implies that \(F_1(\hat{S}, \hat{d}) \geq F_1(S, d)\) in this case. In case (ii), \(d_2 = w(\bar{c}) \geq w(a^1) = \hat{w}(a^1) = 0\) and \(d_2 < w(a^2) = \hat{w}(a^2) = 1\). Since \(\hat{w}\) is a concave transformation of \(w\), \(\hat{d}_2 = \hat{w}(\bar{c}) \geq w(\bar{c}) = d_2\), and so equation (3) implies \(F_1(\hat{S}, \hat{d}) \leq F_1(S, d)\). Note that when \(|d_1 - d_2| < 1\), the inequality in the conclusion of the theorem is strict. This completes the proof.
Note that any Pareto optimal payoff can be identified with a lottery between \(a^1\) and \(a^2\). Theorem 3 could be reformulated in terms of these lotteries. Part (i) of the Theorem states that the probability which Nash’s solution assigns to \(a^1\) is higher in \((\hat{S}, \hat{d})\) than in \((S, d)\), and the reverse holds in part (ii). Thus, according to Nash’s solution, risk aversion is a disadvantage to a player in games where he prefers his opponent’s favorite outcome to the disagreement outcome (case (i)). The reverse is true in case (ii): risk aversion is an advantage to a player in games where he prefers the disagreement outcome to his opponent’s favorite outcome. In games where he is indifferent between these two outcomes, a player’s risk aversion has no influence. (This last property made such games appropriate for the experimental study of bargaining reported in Roth and Malouf [12] since the risk aversion of the players need not be controlled for when such games are used to test predictions of Nash’s solution.)
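The following sketch (a hypothetical numerical check of ours, using equation (3); the function names are illustrative) exhibits both cases of Theorem 3. The transformation \(k(t) = 1 - (1-t)^2\) is increasing and concave on the relevant range and fixes \(k(0) = 0\), \(k(1) = 1\), so the normalization of the proof is preserved:

```python
def F1(d1: float, d2: float) -> float:
    """Player 1's Nash payoff in the normalized three-outcome game, per
    equation (3), truncated to the feasible interval [0, 1]."""
    return min(max((1.0 + d1 - d2) / 2.0, 0.0), 1.0)

def k(t: float) -> float:
    """An increasing concave transformation with k(0) = 0 and k(1) = 1, so
    the normalization used in the proof is preserved."""
    return 1.0 - (1.0 - t) ** 2

d1 = 0.2
for d2 in (-0.2, 0.5):              # case (i): d2 <= 0;  case (ii): 0 <= d2 < 1
    print(f"d2 = {d2:5.2f}: F1 = {F1(d1, d2):.3f} -> {F1(d1, k(d2)):.3f} "
          "after player 2 becomes more risk averse")
```

With \(d_2 = -0.2\) (case (i)) player 1's payoff rises from 0.700 to 0.820; with \(d_2 = 0.5\) (case (ii)) it falls from 0.350 to 0.225, exactly as the theorem predicts.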
4. GENERAL GAMES WITH CERTAIN DISAGREEMENT OUTCOMES
Here we consider arbitrary bargaining models \((S, d, C, \bar{c}, u)\). That is, we no longer restrict the set \(C\) to be finite, as in the previous section. Let \(P(L)\) denote the Pareto optimal subset of lotteries. We say \(x \in S\) is \(u\)-supported by \(c^1, c^2 \in C\) if for some \(p \in (0, 1)\), \(x = pu(c^1) + (1 - p)u(c^2)\) and if there is no other point \(c^3 \in C\), distinct from \(c^1\) and \(c^2\), such that \(u(c^3) = qu(c^1) + (1 - q)u(c^2)\) for some \(q \in (0, 1)\). So \(x\) is \(u\)-supported by \(c^1\) and \(c^2\) if it can be achieved by a lottery between them, and if they are the “closest” certain outcomes by which \(x\) can be achieved. (If \(x = u(c)\) for \(c \in C\), then \(x\) is \(u\)-supported by \(c^1 = c^2 = c\).)
If \(x \in S\) is \(u\)-supported by \(c^1, c^2\), then it is favorably \(u\)-supported for player \(i\) if \(u_i(c^j) \geq u_i(\bar{c})\) for \(j = 1, 2\), and unfavorably \(u\)-supported for player \(i\) if either \(u_i(c^1) < u_i(\bar{c})\) or \(u_i(c^2) < u_i(\bar{c})\). Thus, \(x\) is favorably \(u\)-supported for player \(i\) if it is \(u\)-supported by outcomes \(c^1\) and \(c^2\) both of which player \(i\) likes at least as well as the disagreement outcome.
It is unfavorably $u$-supported for player $i$ if player $i$ prefers the disagreement point $\bar{c}$ to at least one of the supports $c^1$ and $c^2$. Note that every Pareto optimal point in $S$ can be achieved by a lottery between no more than two certain Pareto optimal outcomes. Hence a point $x \in P(S)$ is unfavorably $u$-supported for player $i$ if and only if it is not favorably $u$-supported.
We can now consider the effects of risk aversion in games of this form. As before, results are phrased in terms of games $(S,d)$ and $(\hat{S},\hat{d})$, where the latter game is derived from the former by replacing player 2, whose utility function is $u_2$, with a more risk averse player whose utility function is $\hat{u}_2$. (Denote $\hat{u} = (u_1, \hat{u}_2)$.)
**Theorem 4:** (i) If $F(S,d)$ is favorably $u$-supported for player 2, then $F_1(\hat{S},\hat{d}) \geq F_1(S,d)$. (ii) If $F(\hat{S},\hat{d})$ is unfavorably $u$-supported for player 2, then $F_1(\hat{S},\hat{d}) < F_1(S,d)$.
Theorem 4 gives sufficient conditions for a bargainer’s risk aversion to be advantageous or disadvantageous to his opponent. For small changes in risk aversion, Lemma 2 will show these conditions are necessary as well as sufficient. Part (i) of Theorem 4 generalizes Theorem 2, since for deterministic models, $F(S,d)$ must be favorably $u$-supported for both players, since $F$ is individually rational.
### 5. PROOFS
For $\lambda \in [0,1]$ let $u^\lambda_2 = (1 - \lambda)u_2 + \lambda \hat{u}_2$, and let $u^\lambda = (u_1,u^\lambda_2)$, $d^\lambda = u^\lambda(\bar{c})$, and $S^\lambda = \{u^\lambda(l) | l \in L\}$. It is straightforward to verify that $u^\lambda_2$ is an increasing concave transformation of $u_2$ on the set $C$, and that as $\lambda$ increases, $u^\lambda_2$ becomes increasingly risk averse (i.e., if $\alpha < \beta$, then $u^\beta_2$ is a concave transformation of $u^\alpha_2$ on the set $C$). As $\lambda$ increases from 0 to 1, the game $(S^\lambda,d^\lambda)$ is transformed from $(S^0,d^0) = (S,d)$ to $(S^1,d^1) = (\hat{S},\hat{d})$ in a way which allows the “local” effects of a small change in player 2’s risk aversion to be examined. First, we establish that when a player is replaced by a more risk averse player, every certain outcome which was Pareto optimal in the old game is also Pareto optimal in the new game.
**Lemma 1:** Let $P(L)$ and $\hat{P}(L)$ denote the set of Pareto optimal lotteries in the games $(S,d)$ and $(\hat{S},\hat{d})$ respectively. Then $\hat{P}(L) \cap C$ contains $P(L) \cap C$.
**Proof:** We show that if $c \in C$ is not Pareto optimal in $(\hat{S},\hat{d})$, then it is not Pareto optimal in $(S,d)$. If $c \notin \hat{P}(L) \cap C$ then there is an $l \in L$ such that $\hat{u}(l) \geq \hat{u}(c)$ and either $u_1(l) > u_1(c)$ or $\hat{u}_2(l) > \hat{u}_2(c)$. But $\hat{u}_2 = k \circ u_2$ on $C$, so $\hat{u}_2(l)$ equals the expected value of $k \circ u_2$ on the lottery $l$; i.e., $\hat{u}_2(l) = E(k \circ u_2)(l) \leq k(E(u_2(l))) = k(u_2(l))$, where the inequality follows from the concavity of $k$.
---
6That is, $\hat{u}_2$ is equal to the function $k$ composed with the function $u_2$ (denoted $k \circ u_2$) on the set $C$.
Thus \( k(u_2(l)) \geq \hat{u}_2(l) \geq \hat{u}_2(c) = k(u_2(c)) \), and, since \( k \) is an increasing function, \( u_2(l) \geq u_2(c) \), with a strict inequality if \( \hat{u}_2(l) > \hat{u}_2(c) \). This completes the proof.
Lemma 1 establishes that, as \( \lambda \) increases from 0 to 1, the set \( P^\lambda(L) \cap C \) of Pareto optimal certain outcomes is nondecreasing. The next part of the proof proceeds by establishing results for games generated by a finite set of certain alternatives \( C \), and the proof concludes by establishing the general case.
We use the following proposition whose proof follows from the fact that Nash’s solution is continuous in the Hausdorff topology on the (open) set \( B \) of bargaining games (see Roth and Rothblum [13]).
**Proposition 1:** For games generated by a finite set \( C \) of certain outcomes, \( F(S^\lambda, d^\lambda) \) is a continuous function of \( \lambda \).
With Lemma 1, Proposition 1 allows us to prove the following.
**Lemma 2:** For games generated by a finite set \( C \) of certain outcomes, and for any \( \lambda \in [0, 1] \), there exists a \( \delta > 0 \) such that:
(a) If \( F(S^\lambda, d^\lambda) \) is favorably \( u^\lambda \)-supported for player 2, then for \( \alpha \in [\lambda, \lambda + \delta] \), \( F_1(S^\alpha, d^\alpha) \) is a nondecreasing function of \( \alpha \), and \( F(S^\alpha, d^\alpha) \) is favorably \( u^\alpha \)-supported for player 2 in \( (S^\alpha, d^\alpha) \).
(b) If \( F(S^\lambda, d^\lambda) \) is unfavorably \( u^\lambda \)-supported for player 2, then for \( \alpha \in [\lambda, \lambda + \delta] \), \( F_1(S^\alpha, d^\alpha) \) is a strictly decreasing function of \( \alpha \), and \( F(S^\alpha, d^\alpha) \) is unfavorably \( u^\alpha \)-supported for player 2 in \( (S^\alpha, d^\alpha) \).
**Proof:** First suppose \( F(S^\lambda, d^\lambda) \) is \( u^\lambda \)-supported by outcomes \( c^1, c^2 \in C \) such that \( u(c^1) \neq u(c^2) \). Lemma 1 implies \( c^1 \) and \( c^2 \) remain Pareto optimal as \( \lambda \) increases, and Proposition 1 therefore implies that there exists a positive \( \delta \), sufficiently small so that for \( \alpha \in [\lambda, \lambda + \delta] \), \( F(S^\alpha, d^\alpha) \) remains \( u^\alpha \)-supported by \( c^1 \) and \( c^2 \). Let \( E = \{c^1, c^2, \bar{c}\} \), and let \( (T^\alpha, d^\alpha) \) be the game generated by the three element set \( E \) and the utility functions \( u_1 \) and \( u_2^\alpha \). Then \( F(S^\alpha, d^\alpha) \in T^\alpha \) for \( \alpha \in [\lambda, \lambda + \delta] \), and so \( F(S^\alpha, d^\alpha) = F(T^\alpha, d^\alpha) \) since \( F \) is independent of irrelevant alternatives (Property 3). But the behavior of \( F_1(T^\alpha, d^\alpha) \) for \( \alpha \in [\lambda, \lambda + \delta] \) is given by Theorem 3: that is, if \( \alpha < \beta \) for \( \alpha, \beta \in [\lambda, \lambda + \delta] \), then \( (T^\alpha, d^\alpha) \) and \( (T^\beta, d^\beta) \) play the roles of \( (S, d) \) and \( (\hat{S}, \hat{d}) \), respectively, in Theorem 3. So Lemma 2 is proved when \( u(c^1) \neq u(c^2) \). If, instead, \( F(S^\lambda, d^\lambda) \) is \( u^\lambda \)-supported by \( c^1 = c^2 \), then \( F(S^\lambda, d^\lambda) = u^\lambda(c^1) \). In this case \( F(S^\lambda, d^\lambda) \) must be favorably supported, since \( F \) is individually rational. If there is some \( \delta > 0 \) such that \( F(S^\alpha, d^\alpha) = u^\alpha(c^1) \) for \( \alpha \in [\lambda, \lambda + \delta] \), then Lemma 2 follows immediately. Otherwise, Proposition 1 and the finiteness of \( C \) imply that there is a \( \delta > 0 \) and an outcome \( c^3 \) such that, for all \( \alpha \in [\lambda, \lambda + \delta] \), \( F(S^\alpha, d^\alpha) \) is \( u^\alpha \)-supported by \( c^1 \) and \( c^3 \). If we now let \( E = \{c^1, c^3, \bar{c}\} \), then the argument of the previous paragraph assures that \( F(S^\alpha, d^\alpha) = F(T^\alpha, d^\alpha) \), where \( (T^\alpha, d^\alpha) \) is the game generated by \( E \). So Theorem 3 implies that in this case also, \( F_1(S^\beta, d^\beta) \geq F_1(S^\alpha, d^\alpha) \) for \( \alpha, \beta \in [\lambda, \lambda + \delta] \) such that \( \alpha < \beta \). Since \( F_1(S^\alpha, d^\alpha) = u^\alpha(c^1) \) for \( \alpha \in [\lambda, \lambda + \delta] \), it follows that \( u^\alpha(c^1) \leq u^\beta(c^1) \) for \( \alpha, \beta \in [\lambda, \lambda + \delta] \) such that \( \alpha < \beta \). Thus \( u^\alpha(c^1) \) is a nondecreasing function of \( \alpha \) for \( \alpha \in [\lambda, \lambda + \delta] \), and Lemma 2 follows.
This completes the proof of Lemma 2, which shows that a sufficiently small increase in the risk aversion of one of the players in a game $(S^\lambda, d^\lambda)$ does not change the nature of the support of $F(S^\lambda, d^\lambda)$; i.e., there is an interval in which the solution remains favorably or unfavorably supported. The following lemma establishes a global result.
**Lemma 3:** For games generated by a finite set $C$ of certain outcomes, if $F(S, d)$ is favorably $u$-supported for player 2, then $F(S^\lambda, d^\lambda)$ is favorably $u^\lambda$-supported for player 2 for any $\lambda \in [0, 1]$.
**Proof:** Suppose Lemma 3 is false, and let $\beta = \inf\{\lambda | F(S^\lambda, d^\lambda) \text{ is unfavorably } u^\lambda\text{-supported for player 2}\}$. Then $F(S^\beta, d^\beta)$ is unfavorably supported for player 2, since otherwise Lemma 2 implies there is a $\delta > 0$ such that $F(S^\alpha, d^\alpha)$ is favorably $u^\alpha$-supported for $\alpha \in [\beta, \beta + \delta]$, contradicting the definition of $\beta$. But $\beta > 0$ since $(S^0, d^0) = (S, d)$. Since the set $P^\lambda(L) \cap C$ cannot become larger as $\lambda$ decreases (Lemma 1), Proposition 1 and the arguments of Lemma 2 imply that, if $F(S^\beta, d^\beta)$ is unfavorably $u^\beta$-supported for player 2, there exists a $\delta > 0$ such that for $\alpha \in [\beta - \delta, \beta]$, $F(S^\alpha, d^\alpha)$ is unfavorably $u^\alpha$-supported for player 2. But this contradicts the definition of $\beta$, and completes the proof.
**Proof of Theorem 4:** First consider games generated by lotteries over a *finite* set $C$ of certain outcomes. If $F(S, d)$ is favorably $u$-supported for player 2, then by Lemma 3, $F(S^\lambda, d^\lambda)$ is favorably $u^\lambda$-supported for every $\lambda \in [0, 1]$, and Lemma 2 therefore implies that, for every $\lambda$, there exists an interval $[\lambda, \lambda + \delta]$ in which $F_1(S^\alpha, d^\alpha)$ is a nondecreasing function of $\alpha$. Together with Proposition 1, this implies $F_1(S^\lambda, d^\lambda)$ is a nondecreasing function of $\lambda$ for all $\lambda \in [0, 1]$, completing the proof of part (i) when $C$ is finite. If $F(\hat{S}, \hat{d})$ is unfavorably $\hat{u}$-supported for player 2, then Lemma 3 implies $F(S^\lambda, d^\lambda)$ is unfavorably $u^\lambda$-supported for player 2 for every $\lambda \in [0, 1]$. Using Lemma 2 and Proposition 1 as above, it follows that $F_1(S^\lambda, d^\lambda)$ is a decreasing function of $\lambda$ for $\lambda \in [0, 1]$, completing the proof of part (ii) when $C$ is finite.
For an arbitrary (possibly infinite) set $C$ of certain outcomes, let $F(S, d)$ be $u$-supported by $c^1, c^2 \in C$, let $F(\hat{S}, \hat{d})$ be $\hat{u}$-supported by $c^3, c^4 \in C$, and let $E = \{c^1, c^2, c^3, c^4, \bar{c}\}$. Let $(T, d)$ and $(\hat{T}, \hat{d})$ be the games generated by the finite set $E$ as in equations (1) and (2), using utility functions $u = (u_1, u_2)$ and $\hat{u} = (u_1, \hat{u}_2)$, respectively. Then, since Nash’s solution possesses the property of independence of irrelevant alternatives, $F(T, d) = F(S, d)$ and $F(\hat{T}, \hat{d}) = F(\hat{S}, \hat{d})$. But Theorem 4 has already been proved for the finitely generated games $(T, d)$ and $(\hat{T}, \hat{d})$, and so it holds for $(S, d)$ and $(\hat{S}, \hat{d})$ as well, completing the proof.
Note that, when a given bargainer is replaced by a more risk averse individual, the set of Pareto optimal certain events may grow in such a way that Nash’s solution will become favorably supported, even if it were unfavorably supported in the original game. When this happens, a bargainer’s risk aversion stops being
disadvantageous to his opponent, and starts being advantageous. This is why part (ii) of Theorem 4 is only able to compare games whose Nash solution remains unfavorably supported. Lemma 3, however, shows that a favorably supported solution remains favorably supported when a bargainer is replaced by a more risk averse individual. So part (i) of Theorem 4 establishes that, when Nash’s solution is favorably supported, an individual’s risk aversion is always advantageous to his opponent.
University of Illinois
and
Yale University
Manuscript received September, 1980; revision received February, 1981.
REFERENCES
[1] Arrow, Kenneth J.: *Essays in the Theory of Risk Bearing*. New York: American Elsevier, 1971.
[2] Harsanyi, John C.: *Rational Behavior and Bargaining Equilibrium in Games and Social Situations*. Cambridge: Cambridge University Press, 1977.
[3] Kannai, Yakar: “Concavifiability and Constructions of Concave Utility Functions,” *Journal of Mathematical Economics*, 4(1977), 1–56.
[4] Kihlstrom, Richard E., and Leonard J. Mirman: “Risk Aversion with Many Commodities,” *Journal of Economic Theory*, 8(1974), 361–388.
[5] Kihlstrom, Richard E., Alvin E. Roth, and David Schmeidler: “Risk Aversion and Nash’s Solution to the Bargaining Problem,” *Game Theory and Mathematical Economics*, ed. by O. Moeschlin and D. Pallaschke. Amsterdam: North-Holland, 1981.
[6] Luce, R. Duncan, and Howard Raiffa: *Games and Decisions: Introduction and Critical Survey*. New York: John Wiley, 1957.
[7] Nash, John F.: “The Bargaining Problem,” *Econometrica*, 18(1950), 155–162.
[8] Pratt, J. W.: “Risk Aversion in the Small and in the Large,” *Econometrica*, 32(1964), 122–136.
[9] Roth, Alvin E.: “Independence of Irrelevant Alternatives and Solutions to Nash’s Bargaining Problem,” *Journal of Economic Theory*, 16(1977), 247–251.
[10] ———: “The Nash Solution and the Utility of Bargaining,” *Econometrica*, 46(1978), 587–594, 983.
[11] ———: *Axiomatic Models of Bargaining*. Berlin and New York: Springer, 1979.
[12] Roth, Alvin E., and Michael W. K. Malouf: “Game-Theoretic Models and the Role of Information in Bargaining,” *Psychological Review*, 86(1979), 574–594.
[13] Roth, Alvin E., and Uriel G. Rothblum: “Risk Aversion and Nash’s Solution for Bargaining Games With Risky Outcomes,” Working Paper (preliminary version), University of Illinois, Urbana, 1981.
[14] Sobel, Joel: “Distortion of Utilities and the Bargaining Problem,” *Econometrica*, 49(1981), 597–620.
[15] Thomson, William: “The Manipulability of the Shapley Value,” Mimeo, University of Minnesota, Minneapolis, 1980.
ARIADNE: Agnostic Reconfiguration In A Disconnected Network Environment
Konstantinos Aisopos\textsuperscript{†§}, Andrew DeOrio\textsuperscript{‡}, Li-Shiuan Peh\textsuperscript{§}, and Valeria Bertacco\textsuperscript{‡}
\textsuperscript{†}Princeton University \textsuperscript{§}Massachusetts Institute of Technology \textsuperscript{‡}University of Michigan
Abstract—Extreme transistor technology scaling is causing increasing concerns in device reliability: the expected lifetime of individual transistors in complex chips is quickly decreasing, and the problem is expected to worsen at future technology nodes. With complex designs increasingly relying on Networks-on-Chip (NoCs) for on-chip data transfers, a NoC must continue to operate even in the face of many transistor failures. Specifically, it must be able to reconfigure and reroute packets around faults to enable continued operation, i.e., generate new routing paths to replace the old ones upon a failure. In addition to these reliability requirements, NoCs must maintain low latency and high throughput at very low area budget.
In this work, we propose a distributed reconfiguration solution named Ariadne, targeting large, aggressively scaled, unreliable NoCs. Ariadne utilizes up*/down* for fast routing at high bandwidth, and upon any number of concurrent network failures in any location, it reconfigures to discover new resilient paths to connect the surviving nodes. Experimental results show that Ariadne provides a 40%-140% latency improvement (when subject to 50 faults in a 64-node NoC) over other on-chip state-of-the-art fault tolerant solutions, while meeting the low area budget of on-chip routers with an overhead of just 1.97%.
Keywords-NoC; resilience; reconfiguration; distributed
I. INTRODUCTION
Aggressive transistor scaling continues to increase integration capacity with each new technology node. With more transistors comes the need for modular communication architectures. Networks-on-Chip (NoCs), which offer distributed communication via a set of connected routers, are becoming more popular, as indicated by some recent designs, such as the Tile64 [4] and TERAFlops [30]. The NoCs of such many-core chips have to meet tough latency and throughput targets, in the face of stringent area and power budgets. Unfortunately, as critical dimensions shrink, reliability degrades as well. This highly scaled, unstable silicon demands new solutions that can gracefully handle permanent failures, occurring due to transistor wear-out, at any time during the life of the chip [8]. As the sole medium of communication, it is critical that a failure in the network does not cause an entire chip to fail.
Most current approaches for NoC reliability are only effective in overcoming a limited number of failures and specific fault patterns. Yet, a large number of transistors can fail in a many-core chip at advanced technology nodes [7, 8], resulting in many faults in the NoC. In addition, the locations of these faults are unpredictable and irregular in nature [11] and can thus lead to deadlock-prone irregular network topologies. Thus, any viable solution for failure-prone networks requires that the surviving nodes coordinate to reconfigure and replace the routing algorithm with a new one upon each new failure, by discovering the underlying topology and establishing deadlock-free routes.
Contributions. A fault-tolerant interconnect should address three critical metrics: reliability, performance, and area. In this work we propose Ariadne\textsuperscript{1}, a novel network reconfiguration algorithm, providing unlimited system robustness and high performance, within a very low area budget. Ariadne is agnostic to the underlying topology: it can operate on any irregular topology resulting from an initial regular topology where any number of links have failed. Ariadne discovers paths among all connected nodes, and then utilizes up*/down* to provide highly adaptive and deadlock-free routing. It achieves this through a novel distributed algorithm implemented in hardware. This algorithm is designed to minimize communication among nodes, thus lowering its silicon area footprint. Ariadne addresses the above critical metrics as follows:
- **Reliability**: Ariadne guarantees connectivity among all surviving nodes in the network, in the face of unlimited faults at any location. That is, if a path between two nodes exists, the algorithm will enable at least one deadlock-free route between them.
- **Performance**: During normal operation (after recovering from faults), Ariadne achieves low latency and high throughput routing without Virtual Channels (VCs). This implies that no VC needs to be reserved for deadlock avoidance; all VCs can be used by all available routing paths, resulting in a 40%-140% latency improvement over state-of-the-art fault tolerant solutions [14, 25] (latency improvement measured at 50 faults in a 64-node NoC).
- **Area**: Our distributed design requires only minimal hardware modifications and additional wiring to realize reconfiguration. This results in a low 1.97% area overhead over a baseline, non-reliable NoC.
This paper is organized as follows: Section II motivates the need for resilient NoCs that can handle many wear-out faults, and Section III presents the related work. Section IV details Ariadne, while Section V discusses the architectural modifications required to implement it. Then, Section VI offers our experimental results. Section VII discusses how deadlock-free execution is guaranteed in the face of runtime faults, as well as off-chip implementations of up*/down*. Finally, Section VIII concludes the paper.
\textsuperscript{1}The name originates from Princess Ariadne from Greek mythology, who gave Theseus a ball of thread to help him find his way in the Minotaur’s labyrinth. Similarly, our algorithm helps packets find their way in the labyrinth-like topology of a faulty network.
II. MOTIVATION
Recent studies project that there will be many transistor failures during the lifetime of many-core chips fabricated at advanced technology nodes. Researchers have characterized the impact of technology scaling on device reliability in processors [28] and Networks-on-Chips (NoCs) [3], and indicate that the number of permanent failures is expected to increase. Borkar of Intel expects that at future technology nodes 20% of transistors in chip multiprocessors will be unusable due to variations of the manufacturing process, while an additional 10% of transistors will eventually fail over the lifetime of the chip due to wear-out [7, 8].
To demonstrate the architectural impact of such a high number of transistor failures in the NoC, we developed an architecture-level fault model, similar to the fault model used in Vicis [15], that maps gate-level injected faults to link-level faults. Initially, we synthesize and place-and-route a router design similar to that of [15], consisting of 20,413 gates. Then, we inject faults into its netlist using a random distribution, weighted by the area of each gate, in order to model the increased vulnerability of complex gates comprising more transistors, consistent with the breakdown patterns found experimentally by Keane et al. [19]. For statistical confidence, we inject a total of 1,000 different fault configurations over a wide range of simultaneous gate faults. Then, we test each faulty netlist obtained to determine which links remain functional. This way, our fault model captures not only wiring failures, but also any failure within the router that results in non-functional communication links. For instance, if a gate failure disables a flit buffer of an output port, the corresponding output link will also be marked as faulty. Figure 1 shows the average number of link faults as a function of gate faults (error bars showing min and max) when applying our fault model to an 8x8 mesh of routers. Note that even for just 30 gate faults, 5 to 50 network links are expected to fail, resulting in a highly impaired network. These wear-out link failures have no predictable pattern and can occur anywhere on-chip [11], thus leading to highly irregular network topologies.
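As an illustration, the area-weighted injection step can be modeled in a few lines of Python. The sketch below is our own simplified rendering, not the tool flow used in this work; the gate names and areas in the example are hypothetical, and sampling is with replacement for brevity.

```python
import random

def inject_gate_faults(gate_areas, n_faults, seed=0):
    """Sample faulty gates with probability proportional to gate area.

    gate_areas: dict mapping gate name -> area. Larger (more complex)
    gates are more likely to fail, mirroring the area-weighted injection
    described above. Sampling is with replacement for brevity; a real
    injection campaign would draw without replacement.
    """
    rng = random.Random(seed)
    gates = list(gate_areas)
    weights = [gate_areas[g] for g in gates]
    return rng.choices(gates, weights=weights, k=n_faults)

# Hypothetical netlist fragment: the flip-flop is most likely to fail.
faults = inject_gate_faults({"nand2_u1": 1.0, "dff_u2": 4.0, "mux4_u3": 3.0}, 2)
```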
In short, there is a strong need for low-overhead reliable solutions for NoCs that are capable of delivering low latency and high throughput in the presence of many irregular link failures and potentially complete node failures, since routers will become disconnected once all adjacent links have failed. However, we note that the duration of the reconfiguration process does not need to be optimal. A reconfiguration latency even on the order of milliseconds will not affect overall performance, since it is only incurred upon a permanent link failure.
III. RELATED WORK
There has been substantial research on resilient NoCs [11, 15]. Here, we focus on prior works that tackle routing in the face of link failures. First, we classify them based on the type of fault targeted and whether a bounded or unbounded number of faults can be sustained. Then, we focus on solutions that can tolerate unbounded faults without pattern constraints and discuss current off-chip and on-chip approaches, indicating how they differ in their design constraints. Our focus in this paper is to achieve resilient routing on-chip in the face of many permanent faults, by providing a solution that reconfigures the network and updates the routing algorithm upon each failure.
**Bounded number of faults.** Early work on reliable routing targeted few link failures, such as 1-fault tolerance in Dally’s reliable router [12] and (n-1)-fault-tolerant n-dimensional meshes [16]. Works such as [17, 18, 29] leverage additional virtual channels to work around a few link failures. There has also been prior art that tackles the problem by flooding packets to random neighbors, hoping that they will eventually reach their destination [6, 24]. Such approaches result in very low throughput during normal operation.
**Unbounded number of faults with pattern constraints.** Other approaches place requirements on the configuration of faults, requiring them to be contained in convex regions [9, 31], L or T shape regions [9], or faulty polygons [20]. Oftentimes the faulty region must be expanded to include non-faulty routers; as a result, non-faulty routers and links are disabled only to satisfy the algorithm’s constraints.
**Unbounded number of faults, no pattern constraints.** Table I qualitatively compares resilient routing algorithms that can tolerate unbounded faults without pattern constraints, based on critical evaluation metrics, which heavily depend on the reconfiguration algorithm. Reconfiguration is the process of generating a new routing algorithm (whenever a new link fault is detected) to replace the current one. Given a faulty topology, the reliability and performance of a routing algorithm can be optimized with a sophisticated reconfiguration algorithm, but complexity translates to high area overhead. Off-chip reconfiguration algorithms run in software and can perform powerful optimizations, but communicating the global view of the surviving topology to a central node to run the reconfiguration software requires expensive dedicated hardware. By contrast, on-chip solutions must be designed to meet tight on-chip area budgets, thus achieving high performance or high reliability is a challenge.
| fault class | solutions | reliability | performance | area |
|----------------|------------------------------------------------------|------------|-------------|------|
| bounded faults | early work [12, 16], VCs [17, 18, 29], flooding [6, 24] | limited | n/a | n/a |
| pattern constraints | convex [9, 31], L or T [9], polygons [20] | limited | n/a | n/a |
| unbounded faults without pattern constraints | off-chip routing [10, 21, 23, 26, 27] | high | good | excessive |
| | on-chip routing: Immunet [25] | high | bad | high |
| | on-chip routing: Vicis [14] | limited | bad | low |
| | on-chip routing: Ariadne (proposed) | high | good | low |

TABLE I: Resilient routing landscape. Qualitative comparison of resilient solutions.

**a) Off-chip solutions.** Off-chip networks, such as clusters and Networks-of-Workstations, first tackled the reliability challenge of unconstrained faults. A number of resilient routing algorithms that can be applied to any irregular topology (i.e., a topology that survived after a number of random faults in network links) have been proposed in this domain, including up*/down* (introduced in Autonet) [27], segment-based routing [23], FX routing [26], L-turn [21], and smart-routing [10]. During reconfiguration, the surviving topology is communicated to a central node, which runs the reconfiguration algorithm in software. Using global knowledge of the functional links, the software computes new routing tables and communicates them back to each node. As we discuss in Section VII-C, centralized approaches lead to excessive area overhead for on-chip routers (estimated at 23.2%).
**b) On-chip solutions.** On-chip networks have a tight area and power budget, necessitating simple router structures. Reconfiguration is implemented completely in hardware and thus must be achievable with a simple Finite State Machine (FSM). There are two recent on-chip proposals that tackle the problem of unconstrained faults: Immunet [25] and Vicis [15]:
**Immunet** [25] routes packets fully adaptively towards their destinations, based on buffer availability. If necessary, packets switch to a reserved escape VC that guarantees that they will reach their destination and avoid faulty links. This VC is aware of the fault locations and routes deterministically in a ring through every node. Upon reconfiguration, a new ring that connects all surviving nodes is formed with a single broadcast, and all in-transit packets drain out via this ring, before updating the routing tables. While the ring guarantees delivery, it dramatically increases latency, since it must remain active during normal operation to ensure deadlock freedom. In addition, the design requires three routing tables per router, resulting in high area (storage) overhead.
**Vicis** [15] proposes a low overhead routing algorithm [14] to cope with an unbounded number of faults, by using a heuristic solution that makes exceptions to the odd-even turn model to maximize connectivity in meshes and tori. It utilizes the turn model during fault-free operation, but upon the occurrence of a fault, reconfiguration re-enables turns that were previously disabled by the routing algorithm to re-connect nodes that have been disconnected by the fault. As we show in Section VI, these exceptions sometimes result in deadlocked routing paths, especially in situations with large numbers of faults. Moreover, its deterministic nature does not exploit all possible routes, thus limiting performance during normal operation.
In this work, we propose the **Ariadne** reconfiguration algorithm to realize up*/down* on-chip in a fully-distributed manner. Up*/down* offers the unlimited robustness of Immunet, and higher performance (low latency and high throughput) than both Immunet and Vicis during normal operation, since no virtual channels are restricted to deterministic routing. Using a synchronization mechanism that leverages the global clock to guarantee atomicity and minimize communication among nodes, Ariadne reconfigures up*/down* (upon a fault) in an area budget three times lower than Immunet and comparable to Vicis (Section VI-D). Its implementation requires only a few gates and a single wire per port.
IV. ARIADNE ALGORITHM
Permanent transistor faults may cause link failures that modify the topology of a NoC. Though the initial topology of a NoC is usually regular, after a number of link failures, nodes will be connected through a random irregular topology. Ariadne is a reconfiguration algorithm that is invoked upon a permanent link failure (*e.g.*, due to transistor wear-out), and it is agnostic to the topology, since it includes a discovery phase of the underlying network to update the routing tables with new deadlock-free paths. In a network of N nodes, the reconfiguration procedure consists of N broadcasts, taking up to $N^2$ cycles. The procedure may run in a partially or fully connected network, and guarantees that after $N^2$ cycles every node will know the output port(s) to route to any connected destination\(^2\). Thereafter, network operation resumes normally.
Ariadne leverages up*/down* routing\(^3\), a deadlock-free algorithm that can operate on any irregular topology [27]. Up*/down* requires each link to be assigned a direction: *up* or *down*. It then disallows those paths that include traversing a *down* link followed by an *up* link. This way, all cyclic dependencies are broken. In Section IV-A, we describe our distributed reconfiguration algorithm that assigns a direction to each link, thus allowing up*/down* routing to be applied after reconfiguration. The algorithm then explores the new topology and fills the routing tables with resilient paths connecting all surviving nodes. A key cornerstone of our reconfiguration algorithm is that it is fully distributed, relying on a single atomic broadcast by each node to assign all link directions and to explore the underlying topology, as we describe in Section IV-B. This simple broadcast scheme makes for a very lightweight hardware implementation (detailed in Section V).
A. Reconfiguration Algorithm
The reconfiguration algorithm works as follows: each node, in turn, broadcasts a 1-bit reconfiguration flag to all nodes. The first node to broadcast is the node that detected the fault in the network, and it becomes the initiator (root node) of the reconfiguration process. Upon receiving the reconfiguration flag broadcasted from the root node, each node performs the following actions (Figure 2):
\(^2\)Reconfiguration delay is not a concern, since the performance overhead induced by reconfiguration can only occur as many times as the link count, throughout the lifetime of the chip.
\(^3\)Up*/down* is a routing algorithm designed for irregular networks, thus not optimal for regular networks (*i.e.*, a mesh). A potential optimization is to leverage DOR routing while the NoC is fault-free (regular), and switch to up*/down* once the first fault occurs. Note that transitioning from a routing algorithm to another may introduce deadlocks, which can be prevented as discussed in Section VII-B.
**Action1: Entering Recovery.** Upon receiving the flag for the first time (initial broadcast from the root node), each node invalidates its routing table, freezes the pipeline progress of head flits, and sets its state as “recovering”. The state will automatically switch back to “normal” in exactly $N^2$ cycles, since the reconfiguration process is guaranteed to have completed by then. Once the node gets into recovery mode, each subsequent incoming flag will only invoke Actions 3 and 4.
**Action2: Tagging Link Directions.** Once a node gets into recovery mode, up*/down* routing restrictions have to be applied. In up*/down*, links towards the root node (connecting to a node which is closer to the root) are *up* links, while links away from the root are *down* links. Links to a node of equal distance to the root (as the current node) can be either. During the initial broadcast by the root, a port that receives a flag connects to a node that is closer to the root (since the flag arrived there first), thus it is marked as *up*. Similarly, a port that forwards/sends a flag on is marked as *down*. The only conflict occurs when neighboring nodes with equal distance to the root node attempt to send the flag to each other in the same cycle. In this case, each node will receive the flag from a port while trying to send it to the same port. When this happens, up*/down* suggests that the direction of the corresponding link can be either *up* or *down*, so we set it based on the statically assigned nodeIDs: the node with the higher nodeID will mark the link as *up*; the other node will mark the link as *down* (shown in the Action2 diagram of Figure 2).
After this assignment, all the *down→up* turns are disabled. This restriction inherits the deadlock freedom of up*/down*, as explained in Section VII-A. Though a number of paths are disabled, there is always at least one deadlock-free path that connects all nodes reachable from the root node. That is because the minimal route from any node to the root (*up*) and from there to any destination node (*down*) is always available.
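To make Action2 and the turn restriction concrete, a minimal Python sketch of the per-port decisions follows. The function names are ours; the hardware realizes this behavior with a handful of gates, as Section V details.

```python
def link_direction(received_flag, sent_flag, my_id, neighbor_id):
    """Direction a router assigns to one of its ports during the root's
    broadcast (Action2). 'up' points toward the root."""
    if received_flag and sent_flag:
        # Tie: both neighbors are equally distant from the root, so the
        # node with the higher nodeID marks the shared link as 'up'.
        return 'up' if my_id > neighbor_id else 'down'
    return 'up' if received_flag else 'down'

def turn_allowed(in_port_dir, out_port_dir):
    """Up*/down* legality check: a packet entering through an 'up' port
    traversed that link in the down direction, so leaving through another
    'up' port would realize the forbidden down->up turn."""
    return not (in_port_dir == 'up' and out_port_dir == 'up')
```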
**Action3: Routing Table Update.** During each broadcast, the broadcasting node communicates to other nodes how it can be reached. When nodes receive the broadcasted flag, they record the ports on which the flag was received in their own routing table, learning how the broadcasting node can be reached. This requires the broadcasts to spread via enabled turns only, so that the recipient of the flag can follow the opposite path to reach the broadcasting node. The third Action of Figure 2 shows that the flag from node D’s broadcast arrives at the current node via its North and East ports, thus these ports are marked in the routing table as leading to D.
**Action4: Flag Forwarding.** In the next cycle, the node broadcasts the flag only to those port(s) from which it did not receive a flag earlier and that correspond to enabled turns (a flag received from *up* link(s) is never broadcasted to *up* links, because this will enable a routing path with a *down→up* turn, as shown in the last diagram of Figure 2). Since forwarding a flag takes a single cycle, each broadcast will deterministically complete in at most N cycles\(^4\).
---
\(^4\)Each broadcast is bounded by N cycles. The worst case occurs when all nodes are connected in an open ring, and the longest broadcast from one end to the other requires N-1 cycles.
Completion of Reconfiguration. Reconfiguration is deterministically completed within $N^2$ cycles since each node broadcasts once, and each broadcast takes at most $N$ cycles. During this time, all routing tables are updated with resilient paths to any connected node, thus communication may be resumed and all nodes set their state back to “normal”. After this point, any node can initiate a new broadcast upon detection of a link failure and invoke the reconfiguration process again. The head flits can now proceed in the pipeline, but since routes have changed (routing tables have been updated), they must restart from the route compute stage to find an alternative port that leads to the desired destination.
Walkthrough Example. An example of the reconfiguration process is shown in Figure 3. In this figure, we show a 7-node network connected in an irregular topology. Node1 detected a link failure, and initiates a broadcast. Figure 3a shows how each node receives the flag during the initial broadcast, marks the link(s) the flag was received from as up (Action2), fills the entry corresponding to the broadcasting node in its routing table (Action3), and then forwards the flag to its neighbors (Action4), while marking those link(s) as down (Action2). Node2 and Node6 receive the flag during the 1st cycle and forward it to each other during the 2nd cycle. To resolve this conflict, the node with the higher NodeID (i.e., Node6) marks the link as up and the other node (i.e., Node2) as down. We note that there is an implicit unique node ordering, shown in the table, which will be leveraged in our deadlock freedom discussion in Section VII-A.
Each subsequent broadcast can only follow paths that are consistent with the up*/down* restriction. As shown in Figure 3b, when Node5 is broadcasting, Node4 does not forward the flag to its North port (1st cycle, Action4), because this would result in Node0 following an illegal path to reach Node5 (the Node0→Node4→Node5 turn is down→up). During Node5’s broadcast, all nodes only perform Actions 3 and 4: they mark which port they should follow to route to Node5 in their own routing table, and then they forward the flag to all legal directions. Note that all nodes can reach Node5 via the root (up to the root Node1, then down to Node5). Also note that some nodes may use multiple output ports to reach Node5 (e.g., Node3 can use either its West or South port, since it received the flag from both ports during the 3rd cycle), enabling adaptive routing. In this work, we only considered minimal paths for simplicity in adaptive routing. Thus, nodes that have already filled a routing table entry ignore future flags for the same broadcasting node (for example, Node4 and Node6 do not record their North port as a potential path to Node5 during the 3rd cycle, since this would lead to a non-minimal route). At runtime, the port with the highest number of available virtual channels among all valid (recorded) ports is selected to balance traffic density.
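The walkthrough can be condensed into a small software model. The sketch below is our own cycle-free rendering of Actions 1-4, under two stated simplifications: link directions are precomputed from BFS distances (equivalent to the root’s broadcast, since the flag arrives in distance order), and a node forwards a broadcast only on its first flag arrival (matching the minimal-path policy above). Nodes disconnected from the root stay silent and receive no table entries.

```python
from collections import defaultdict, deque

def bfs_distances(adj, root):
    """Hop distance from the root; disconnected nodes never appear."""
    dist = {root: 0}
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

def reconfigure(adj, root):
    """Software model of Ariadne's N broadcasts (Section IV-A).

    adj: surviving topology as node -> set of neighbors.
    Returns routes[x][b] = set of next-hop neighbors x may use to reach b.
    """
    dist = bfs_distances(adj, root)
    # Action2: 'up' at u's port facing v iff v has lower (distance, nodeID)
    # order, i.e., the link leads toward the root (ties broken by nodeID).
    def direction(u, v):
        return 'up' if (dist[v], v) < (dist[u], u) else 'down'
    routes = {x: {} for x in dist}
    for b in dist:                        # one broadcast slot per node
        seen = {b}
        sends = [(b, v) for v in adj[b] if v in dist]
        while sends:                      # each iteration ~ one cycle
            arrivals = defaultdict(set)
            for src, dst in sends:
                arrivals[dst].add(src)
            sends = []
            for x, ins in arrivals.items():
                if x in seen:
                    continue              # Action3: entry already filled
                seen.add(x)
                routes[x][b] = ins        # the flag came from these ports
                for q in adj[x]:          # Action4: forward on legal turns
                    if q in ins or q not in dist:
                        continue
                    if any(direction(x, p) == 'down' or direction(x, q) == 'down'
                           for p in ins):
                        sends.append((x, q))
    return routes

# Toy usage: a 4-node ring whose 0-3 link has failed, root = Node1.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
tables = reconfigure(adj, root=1)
assert tables[0][3] == {1}   # Node0 reaches Node3 via Node1
```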
B. Timing and Synchronization
Section IV-A presented the reconfiguration algorithm. What has not been detailed so far is the timing of the reconfiguration: How does each node know when to broadcast so that there is no overlap between broadcasts? How does the recipient of a broadcast know the broadcasting node, since the only data broadcasted is a 1-bit flag? How do nodes know when the reconfiguration is completed? If two nodes concurrently detect a new fault, can they both become roots and initiate a broadcast? This section deals with these timing issues by introducing the notion of atomic broadcasts, where the cycle number\(^5\) indicates the ID of the broadcasting node.
Atomic Broadcasts. The idea of atomic broadcasts is to correlate the cycle number at which a broadcast is initiated to the broadcasting node’s nodeID. Using the cycle number as a common reference point, all nodes are assigned different cycles for broadcasting, during which the remaining nodes are prevented from broadcasting for a window of $N$ cycles (every broadcast is guaranteed to complete in $N$ cycles, where $N$ is the number of nodes). Each node will have to wait
\(^5\)We assume a single synchronized global clock across the entire system. By the term “cycle”, we refer to the count of the positive clock edges in a node (i.e., router and computation unit). It is not necessary that the communication system is controlled by the same clock as the computation units, but it has to be driven by a single clock.
for its slot to broadcast, but during that slot no other node may broadcast, so collisions cannot occur. N slots need to be provided for all N nodes to broadcast, with slots looping around throughout execution. In other words, node(X)’s N-cycle slot is always followed by node((X+1)modN)’s N-cycle slot. The idea is similar to time-division multiplexing, where a number of signals physically take turns to transfer data on the same communication channel. Once the root node broadcasts, every node will deterministically broadcast during its first available slot; assuming that R is the root, nodes will broadcast in the order: R, R+1, ..., N-1, 0, 1, ..., R-1. Since N slots of N cycles are required for the reconfiguration to complete, reconfiguration requires precisely $N^2$ cycles. Thus, all nodes can resume operation $N^2$ cycles after receiving the initial broadcast, since by that time the process has been completed.
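The slot arithmetic can be summarized in a two-line sketch (our own illustration, not part of the hardware):

```python
def broadcast_schedule(root, n):
    """Broadcast order once the root claims its slot: R, R+1, ..., N-1,
    0, ..., R-1. Each slot lasts N cycles, so the whole reconfiguration
    completes in exactly N*N cycles."""
    return [(root + k) % n for k in range(n)]

assert broadcast_schedule(root=61, n=64)[:4] == [61, 62, 63, 0]
```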
We note that if more than one node concurrently detects a fault, only the one whose broadcast slot comes up first will become the root. The others resign from becoming the root once they receive the root’s broadcast. Our atomic synchronization ensures that no two nodes will ever simultaneously become the root. Once reconfiguration has been initiated, Ariadne will consider all detected faults in the network, independently of the node that managed to become root. Also, if a node is disconnected, it will not receive the root’s broadcast, and will thus remain silent during its broadcast slot. Other nodes will thus not fill the corresponding entry in their routing tables (marking this node as unreachable).
V. ROUTER ARCHITECTURE
To evaluate our solution, we considered a baseline router with a 5-stage pipeline\textsuperscript{6}, as illustrated in the left part of Figure 4, with the following units: a route-compute unit to determine the output port for a packet based on the final destination, a virtual channel allocator to reserve a virtual channel in the selected output port, a switch allocator to reserve the switch bandwidth, and the switch itself.
To implement Ariadne, we added a few gates, registers, and wires to the baseline router, as highlighted in the shaded portion of Figure 4. Assuming four bi-directional ports (four routing directions), Ariadne requires eight 1-bit wires (one for each port’s direction) to transmit and receive the reconfiguration flag from all directions. Ariadne logic consists of a 1-bit status register to record the state (“recovering” or “normal”), and one 1-bit register per port to remember the port’s direction (\textit{up} or \textit{down}). The logic to update the status register (Action1), to implement up*/down* (Action2), to fill the routing tables (Action3), and to forward the reconfiguration flag (Action4) are shown in the right part of Figure 4.
Figure 5 shows how these logic blocks are implemented. The up*/down* logic (5a) updates the port direction registers: receiving the flag from that port implies \textit{up}, receiving it from another port implies \textit{down}, and sending and receiving at the
\textsuperscript{6}Ariadne is orthogonal to the router pipeline. During normal operation, Ariadne is not active. During reconfiguration, the router pipeline is frozen, while Ariadne utilizes separate hardware to receive/broadcast flags.
same time implies that the direction should be determined by the routers’ NodeIDs. The logic that compiles the routing tables (5b) sends to the routing unit the ID of the broadcasting node together with the input port(s) from which the flag was received, which indicate(s) the routing direction to the broadcasting node. The broadcasting node is extracted from $\log N$ bits of the cycle counter, as will be detailed below. The routing tables consist of $N$ Boolean entries (one for each destination) of four bits (indicating the four output ports). We note that during reconfiguration, all flags corresponding to minimal paths to a specific destination node will concurrently arrive at the same cycle; consequently, once a routing table record is set, it cannot be overwritten in later cycles (\textit{i.e.}, if the bit for at least one port is set to true in the routing table, all future records for that same entry are ignored). Finally, the logic responsible for forwarding the flag (5c) always forwards the flag if received from an input port marked as \textit{down}; if the input port was marked as \textit{up}, then it only forwards if the corresponding output port is marked as \textit{down} (Section IV-A).
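As a software analogy for the table-update logic of Figure 5b, a first-write-wins update might look as follows. This is a minimal sketch under our own naming; the bit-to-port mapping in the example is hypothetical.

```python
def update_routing_entry(table, dest, port_bits):
    """First-write-wins routing-table update, mirroring Figure 5b.

    table: list of N four-bit masks, one per destination; each set bit
    marks an output port leading to that destination. Flags for all
    minimal paths arrive in the same cycle, so once an entry is non-zero
    it is never overwritten by later (non-minimal) arrivals.
    """
    if table[dest] == 0:
        table[dest] = port_bits

table = [0] * 64
update_routing_entry(table, dest=5, port_bits=0b0011)  # e.g., North+East
update_routing_entry(table, dest=5, port_bits=0b1000)  # ignored: later arrival
assert table[5] == 0b0011
```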
\textbf{Extracting the broadcaster ID from the cycle number.}
The hardware to implement this functionality is fairly simple. The $\log N$ least significant bits of the cycle counter indicate the cycle number within the current broadcast slot (0 to $N-1$). The next $\log N$ bits indicate which node is authorized to broadcast during the current slot (0 to $N-1$). We note that a cycle counter does not cost additional overhead, since it is already available in the performance counters of each processor core [1]. An example for $N=4$ is shown in Figure 6, where the node that broadcasted the flag is indicated by bit$_3$bit$_2$. A node can only initiate a broadcast at the first cycle (bit$_1$bit$_0=00$) of its turn (bit$_3$bit$_2=\text{NodeID}$) and it is guaranteed to complete this broadcast within 4 cycles (by bit$_3$bit$_2=\text{NodeID}$ and bit$_1$bit$_0=11$).
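In software form, the decoding reduces to shifts and masks; the sketch below is our own model of this behavior, assuming $N$ is a power of two:

```python
def decode_cycle(cycle, n):
    """Split a free-running cycle counter into (broadcaster, cycle-in-slot).

    The log2(n) least significant bits count cycles within the current
    slot; the next log2(n) bits name the node authorized to broadcast,
    as in the N=4 example above.
    """
    log_n = n.bit_length() - 1          # log2(n) for a power of two
    return (cycle >> log_n) & (n - 1), cycle & (n - 1)

assert decode_cycle(0b1101, 4) == (0b11, 0b01)  # node 3, cycle 1 of its slot
```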

VI. EXPERIMENTAL RESULTS
In this section, we evaluate Ariadne to determine its effectiveness in faulty networks. We model faults with an accurate gate-level fault model described in Section II, where we map gate-level faults to link-level faults. Each simulation is performed 1,000 times, each time using a different injected fault configuration generated by our fault model (for 0 to 100 gate faults). We assume an ideal fault-detection mechanism, since fault-detection is orthogonal to reconfiguration. In our experimental setup, we evaluate the runtime performance (Subsection VI-A), reliability (Subsection VI-B), reconfiguration duration (Subsection VI-C), and hardware overhead (Subsection VI-D) of our solution. Ariadne is implemented in the Wisconsin Multifacet GEMS simulator [22] as part of the GARNET network model [2]. Additionally, two state-of-the-art routing algorithms are implemented for comparison: the routing algorithm [14] of Vicis [15] and Immunet [25]. We configure GARNET to use an 8x8 mesh network (our simulation infrastructure is shown in Table II) and simulate synthetic traffic (Uniform Random and Transpose traffic), as well as the PARSEC benchmark suite [5].
A. Performance Evaluation
We consider two metrics to measure the performance of Ariadne on a faulty network: average latency and throughput. Latency is defined as the delay experienced by a packet from source to destination, while throughput measures the rate of packets delivered (per cycle) in the entire network. First, we look at the zero-load latency for each of the three routing algorithms (Vicis, Immunet, and Ariadne), reflecting the steady-state latency of a lightly loaded network (0.01 flits injected per cycle per node, well below saturation). Each data point in Figure 7a is the average zero-load latency of 100 different topologies with the same number of faults.
We note that Ariadne’s latency is consistently the lowest, at 43 cycles on average, compared to 58 cycles for Immunet and 97 cycles for Vicis. At 50 faults, the difference increases, with Ariadne showing a 43% latency improvement over Immunet and a 142% improvement over Vicis. Moreover, we note that the latency trend is strongly dependent on the algorithm, but not greatly affected by the traffic type, as shown by the closeness of the lines for each given algorithm. Vicis shows some interesting behavior here, increasing in latency as the number of faults increases. Upon further investigation, we found that this was caused by the occasional deadlocks encountered by the Vicis algorithm, which in turn trigger a
5,000-cycle timeout before dropping the deadlocked packets\textsuperscript{7}. We note that Ariadne and Immunet never drop a packet. Ariadne maintains a reasonable constant latency, outperforming Immunet, since all of its virtual channels can route adaptively. In contrast, Immunet has one escape virtual channel restricted to route deterministically in a high-latency ring that goes through all surviving nodes two times (both directions) on average, independently of a packet’s destination.
Figure 7b plots the saturation throughput as a function of faults. We used numerical analysis to find the throughput value within a precision of 0.01. For each number of faults, we performed this calculation for 100 different configurations. Notice that for fault counts up to about 50, saturation throughput decreases as the number of faults increases, as can be expected, since the number of available paths decreases. This effect changes as the number of faults increases past 50, when throughput begins to increase. That happens because the network becomes partitioned, thus packets are routed a few hops within small subnetwork partitions, so overall throughput actually improves. For the same reason, the type of traffic does not critically affect the saturation throughput at a high number of faults, since packets are restricted to route only within these subnetworks. Note that the impact of traffic type on saturation throughput is not strong for Ariadne and Immunet. However, since Vicis is based on the turn model, which naturally has a higher saturation point, it more evenly spreads random traffic, particularly in few-fault situations, where the routing algorithm is closer to the turn model. We note that although based on the turn model, Vicis is deterministic, and uses a heuristic that chooses a minimal subset of available turns to reduce the probability of deadlock occurrence.
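The paper does not spell out the numerical procedure; one common approach, sketched below under that assumption, is to bisect on the injection rate until offered and delivered loads diverge. The simulate() hook and the 0.95 tracking threshold are our own illustrative choices.

```python
def saturation_throughput(simulate, lo=0.0, hi=1.0, tol=0.01):
    """Bisect on the injection rate (flits/node/cycle).

    simulate(rate) -> measured delivered throughput. Below saturation the
    network keeps up with the offered load; above it, throughput falls
    short, so the crossover is bracketed and narrowed to within tol.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if simulate(mid) >= 0.95 * mid:  # assumed "keeping up" criterion
            lo = mid
        else:
            hi = mid
    return lo
```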
In addition to synthetic traffic, we also investigate the performance of our reconfigured network when running the suite of PARSEC [5] benchmarks. First, we ran each PARSEC benchmark on networks injected with 50 faults, the midpoint of our fault injection range. Measuring the average latency for packet delivery, we show the results for each algorithm in Figure 8a. Exhibiting trends similar to the random workload results, Ariadne outperforms Immunet and Vicis. We also observe that benchmarks that scale poorly beyond 16 cores (we simulated a 64-core system), such as \textit{bodytrack} and \textit{streamcluster} [5], entail more frequent messages between nearby nodes, resulting in lower average latency for Ariadne. This occurs because fewer threads, mapped to near-by cores, coordinate to complete the workload. In those cases, the latency difference between Ariadne and Immunet is highest (45%), since Immunet leverages a fixed virtual ring that snakes through all nodes, so nodes that are physically close may have
to route through many hops via the ring. On the other hand, Vicis has its highest latency for benchmarks that scale well, such as *dedup* and *vips*, since long communication patterns are prone to creating cyclic routes and causing deadlocks.

Fig. 7: **Performance on synthetic traffic.** (a) Zero-load latency: the average latency with a flit injection rate of 0.01, plotted for a varying number of injected faults. (b) Saturation throughput (flits/node/cycle) reaches a minimum at 50 faults for all three algorithms, then increases as packets route within small partitioned subnetworks.

Fig. 8: **Performance for PARSEC benchmarks.** (a) Latency at 50 faults: each bar shows the average latency for each benchmark and algorithm, with the averages in the rightmost bars. (b) Latency for a variable number of faults: as the number of faults increases, Ariadne gives the best latency, while Vicis’ latency quickly increases due to deadlocks.

Fig. 9: **Packet delivery rate.** The percent of packets delivered decreases as the number of faults increases. Undelivered packets in Ariadne and Immunet are due to networks that are not fully connected.

Fig. 10: **Probability of deadlock** over different fault configurations. Ariadne and Immunet are deadlock free (their lines overlap at the 0% value), while Vicis is not.
Deepening our analysis with the PARSEC benchmarks, we also evaluate performance as the number of faults increases. Figure 8b shows the average latency over all PARSEC benchmarks, as the number of injected faults ranges from 0 to 100. Ariadne consistently provides the lowest average latency. The performance difference is most significant between Ariadne and Vicis under a large number of faults. We note that this figure is very similar to the zero-load latency of synthetic traffic: that is because the PARSEC benchmarks inject very little network traffic, leading to low network stress.
B. Reliability Evaluation
Faulty networks are frequently not fully connected. The ability of an algorithm to maximize the connectivity of a faulty network is critical, since if there is no viable route between two locations, or if the routing algorithm has eliminated the only possible route to avoid deadlock, the packet cannot be injected into the network. In Figures 9a and 9b we show the packet delivery rate for both synthetic and PARSEC workloads. Packet delivery rate captures the number of packets delivered over the number of packets generated, and reflects the reliability of the network.
First, we observe that for all workloads, Ariadne and Immunet demonstrate identical delivery rates, delivering all packets injected in the network. Undelivered packets are solely due to the network being partitioned (see Section VII-E). Vicis, on the other hand, begins with performance similar to the other algorithms but, as faults increase, its delivery rate decreases. This occurs because packets are occasionally routed in circular routes, causing deadlocks and being dropped after a timeout. Figure 10 shows the probability of deadlock for an increasing number of faults (the lines for Immunet and Ariadne are constant on the 0% trend line).
C. Reconfiguration Evaluation
During reconfiguration, all packets in the network experience an additional delay of 4,096 cycles ($N^2$ cycles, for $N=64$). That is because the route computation stage is stalled, so all head flits are frozen in the pipeline. Figure 11 shows the average latency (intervals of 1,000 cycles) over time for three reconfiguration procedures initiated in cycle 20,000: starting from a fault-free network, the curves correspond to the reconfiguration procedure for 1, 20, 100 concurrent faults.
As shown in the figure, the average latency initially drops to 0, since no packets are received for 4,096 cycles. After that, around cycle 25,000, the routing tables have been updated and all packets in the network can resume routing towards their destination. Since the Network Interface Controllers (NICs)
were not stalled during reconfiguration, all injection queues will likely be full, resulting in increased contention right after normal operation is resumed. This explains the peaks in all three curves. The more faults there are, the fewer paths remain available to connect nodes, and thus the higher the contention-induced latency these additional packets experience. Thus, we note that curves corresponding to topologies with more faults show longer times to drain packets generated during reconfiguration, and also stabilize at a higher average latency, since the new topology has lower capacity.
Figure 12 shows how the reconfiguration duration (time to drain packets generated during reconfiguration and stabilize to a new operating point) is affected by a variable number of faults and different injection rates. A low injection rate (zero load) will not generate a large number of packets during reconfiguration, so even with more faults (fewer available paths), there will be no congestion and the reconfiguration duration will only slightly increase (less than 1,000 cycles). For higher injection rates (comparable to the network capacity), the number of faults causes contention that lasts from 4,000 cycles (few faults) to 40,000 cycles (many faults, highly impaired network). Each point of the graph is an average over ten faulty (but fully connected) topologies.
Note that we do not anticipate link faults to be frequent enough for reconfiguration duration to affect overall performance. The purpose of this graph is to provide quantitative insights into the reconfiguration process. Also, scenarios with 50 concurrent transistor failures are more realistic for future 1000-core designs. The takeaway message of the figures is that Ariadne is viable under many concurrent faults, with reconfiguration duration scaling linearly.
D. Hardware Overhead
To determine the area impact of Ariadne in a router, we implemented its hardware structures in Verilog HDL. Beginning with a baseline 5-stage router design (Table II), we added the Ariadne logic illustrated in Figure 4. We also implemented the Vicis routing algorithm [14] and Immunet [25], whose major hardware additions are summarized in Table III. The Vicis routing algorithm requires the addition of a 64-entry static routing table, as well as modified routing logic to perform the routing heuristic. Immunet requires 3 additional routing tables (an adaptive, a deterministic, and a small safe table) and 4 wires per link. Each design was synthesized using Synopsys Design Compiler with an IBM 130nm target library. The total area for the router equipped with Ariadne was measured at 2.761mm$^2$, which corresponds to an overhead of 1.97% over the baseline router’s 2.708mm$^2$; this overhead is slightly more than Vicis’ and about one third of Immunet’s.
| | wires | routing tables | overhead | %overhead |
|-------|-------|----------------|----------|-----------|
| Ariadne | 1 | adaptive | 0.053mm$^2$ | 1.97% |
| Vicis | 1 | deterministic | 0.040mm$^2$ | 1.48% |
| Immunet | 4 | adapt., determ., small | 0.162mm$^2$ | 5.98% |
TABLE III: Additional hardware summary.
VII. DISCUSSION
In this section, we discuss how deadlock is avoided during normal operation (when using up*/down* routing, subsection VII-A) and during reconfiguration (Ariadne algorithm, subsection VII-B). Then, we elaborate on why existing off-chip implementations of up*/down* would lead to excessive overhead if adapted on-chip (subsection VII-C). Finally, we explore whether Ariadne itself can fail (subsection VII-D) and what happens when nodes disconnect (subsection VII-E).
A. Deadlock Avoidance (Normal Operation)
Up*/down* has been proven in the literature to be deadlock free [27]. This is achieved by assigning a unique number (node order) to each node and disabling every turn in which a traversal toward increasing order (a down link) is followed by a traversal toward decreasing order (an up link); note that up links lead toward lower orders, so the terms “up” and “down” run opposite to the numeric order. Since the orders are unique, there can be no cycle without an increasing-order traversal followed by a decreasing-order one, thus the paths are deadlock free. Here, we prove that the way we assign link directions implies such a unique order. Assuming N nodes, each with a pre-assigned nodeID in [0,N), and a root node with order 0, we define the order of each NodeX that is connected to the root as:
$$\text{order}(X) = \text{distance(root, X)} * N + \text{nodeID}(X)$$
The distance is measured in hops, and corresponds to the minimal path that connects NodeX to the root. The table in Figure 3 shows an example of this order assignment, where Node1 is the root. This definition forces nodes that are closer to the root node to always have a lower order than nodes that are more distant (due to the multiplying factor N). For nodes that are equally close to the root, their relative order is decided based on their statically assigned nodeID (since the first term of the equation above is the same): the node with the higher nodeID is considered to have higher order. These are the only two assumptions we used to assign a direction to each port, thus our direction assignment complies with the unique node ordering defined in the equation. Consequently, our nodes have a unique order and our routing paths are deadlock free.
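To make the uniqueness argument concrete, here is a small self-check (our own illustration; the distances and IDs in the assertion are made up):

```python
def order(dist_to_root, node_id, n):
    # order(X) = distance(root, X) * N + nodeID(X). Since 0 <= nodeID < N,
    # two nodes can share an order only by sharing both distance and
    # nodeID, which unique nodeIDs rule out.
    return dist_to_root * n + node_id

# Every up traversal strictly decreases the order; e.g., in a 7-node
# network, equal-distance nodes (IDs 2 and 6) are ordered by nodeID and
# both precede any node one hop farther from the root:
assert order(1, 2, 7) < order(1, 6, 7) < order(2, 0, 7)
```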
B. Deadlock Avoidance (Reconfiguration)
The reconfiguration process replaces one routing function with another, by updating the routing tables in each node. As described in Section IV-A, all head flits freeze during this transition, thus the routing tables are not accessed and no packet may compute its route using the new routing algorithm until reconfiguration is completed. However, the body and tail flits may follow the route that the head flit has already reserved with the pre-fault routing paths. This section explores the dependencies between the old and the new routing function, which may create deadlocks once normal operation is resumed. The necessary condition for deadlock is a cyclic dependency of buffers, which is the result of routing in a cycle. During normal operation, cyclic routing is prevented by up*/down* as explained in Section VII-A, since a packet cached in an input buffer
of an *up* link can never route to another *up* link (this would correspond to a *down*→*up* turn, see Action4 in Figure 2).
However, once the routing function is replaced, the directions of links may change. Thus, an input buffer of a *down* link, caching a packet that could be sent to any output (based on the old routing algorithm), may become an input buffer of an *up* link. If the only available route to destination is via an *up* link, this creates the condition for a deadlock. The deadlock occurs because of the post-fault routing algorithm inheriting a *down*→*up* dependency from a pre-fault routing path. We resolve such cases by ejecting the packet requesting the illegal turn to the NIC of the local router and re-injecting it to the network upon buffer availability. An additional dedicated flit buffer per NIC is necessary to achieve this. This problem is present in all reconfiguration solutions: Immunet uses the same technique to prevent deadlock, while Vicis drops any packets requesting illegal turns. Using this technique, we save in-transit data and demonstrate a viable full-system reconfiguration solution incorporating Ariadne. In our simulations, we have verified that no packets are lost during reconfiguration and deadlock free operation is indeed preserved.
C. Are Off-Chip Up*/Down* Schemes Applicable On-Chip?
Section III mentions several off-chip resilient routing algorithms, most of which use up*/down* [27] as a baseline and modify the structure of its directed graph to balance traffic and enhance performance [10, 21, 26]. The simplest approach, the baseline up*/down* introduced by Autonet [27] in 1991, has been implemented in many high performance interconnects, such as Myrinet, InfiniBand, and Advanced Switching. However, these off-chip approaches cannot be adapted for on-chip implementation, due to the high area overhead of their centralized reconfiguration algorithms.
In all prior implementations of up*/down*, the reconfiguration algorithm computes the routing tables in software with global knowledge of the surviving network. This software runs in a centralized management entity which is called mapper (in Myrinet), subnet manager (in InfiniBand), or fabric manager (in Advanced Switching). The underlying topology and spanning tree are communicated to this management entity, which performs the task of computing the new routing function in software. This procedure requires disabling a number of turns consistently with the spanning tree, and then iterating on the topology to find all the paths that each node can follow to route to every potential destination. Once a group of routing paths that guarantees deadlock-freedom and connectivity is found, the management entity sends a message to each node including its updated routing table.
We note that in earlier implementations of up*/down*, such as Autonet, the surviving topology is aggregated to a central node (root switch) and, from there, the topology and spanning tree are sent to every node to compute its routing table locally. This approach is still not distributed (even though its authors call it “distributed”). We define as “distributed” a reconfiguration solution in which: (1) all nodes are peers executing the same function, and (2) each node has only local knowledge of the surviving links. In contrast, in Autonet’s reconfiguration (1) a single node (root switch) is assigned the task of aggregating the surviving topology and delivering it to all nodes, while (2) each node requires global knowledge of all surviving links to compute its routing table.
Since these solutions require communicating the entire topology (in a reconfiguration packet) to a centralized location, each router has to maintain a reserved buffer and a reserved VC per input port to receive and forward this packet (the regular input buffers might be full holding in-transit packets during reconfiguration). This buffer should also incorporate additional logic to append to the reconfiguration packet the local router’s functional links, their direction, and the router’s ID. We note that any node might serve as the “management entity” (we cannot predict which nodes may become disconnected), thus every node should have reserved buffering for the entire reconfiguration packet, which accounts for 4 flits in a 64-node mesh, before interrupting the operating system to run up*/down*. We implemented this logic in Verilog HDL, synthesized it, and measured a 23.2% area overhead for our baseline router (Table II).
To avoid this overhead, we propose to reconfigure in a fully-distributed mode, where each node observes only a single-bit flag from its neighbors to fill its routing table, without any knowledge of the paths that the flags followed to arrive there, the topology, or the spanning tree. Such a lightweight distributed implementation is not possible off-chip, since we require a global clock to provide specialized synchronization mechanisms for atomicity (detailed in Section IV-B).
D. Can Ariadne Itself Fail?
There is no resilient hardware that can guarantee 100% resilience. All resilient solutions we have described in this paper can fail if the resilient/recovery logic itself fails. Ariadne’s Finite State Machine (FSM) uses a few gates to generate flags and update the routing tables. This FSM itself is susceptible to gate faults, while the 1-bit wires used to communicate the flags to neighbors can be corrupted by transient or permanent faults, resulting in the generation of a deadlock-prone routing algorithm. The most effective way to further protect small hardware structures, such as Ariadne, from failures is triple modular redundancy (TMR). In TMR, multiple identical structures perform the same operation and a voting system is used to feed the output with the result that the majority of the structures generated. That requires multiple copies of Ariadne’s FSM in every router. Similarly, multiple identical flags (or ECC codes) must be transmitted to each neighboring router, which can utilize voting logic to recover the correct value of the flag in the case of faulty wires. We note that Ariadne’s low overhead implementation makes TMR a very viable solution, since protecting all its hardware components with triple-redundancy would lead to an overall area overhead of less than 6%. On the other hand, centralized software reconfiguration approaches would require the replication of the entire dedicated network for
communicating the reconfiguration information and of the processor core executing it, which is not viable.
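For reference, the voting step reduces to a bitwise majority-of-three per protected signal; a standard construction is sketched below (our own illustration, not Ariadne-specific hardware):

```python
def tmr_vote(a, b, c):
    """Bitwise 2-of-3 majority: each output bit is the value on which at
    least two of the three replicated copies agree, masking one faulty
    copy (or one corrupted flag wire)."""
    return (a & b) | (b & c) | (a & c)

# One corrupted replica of an 8-bit pattern is outvoted by the other two:
assert tmr_vote(0b10101010, 0b10101010, 0b01100110) == 0b10101010
```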
E. What Happens When Nodes Become Disconnected?
After a certain number of faults, there is a high probability that the network will become partitioned (split into disjoint clusters). Our reconfiguration solution will ensure that packet delivery is guaranteed within each partition, but cannot transfer packets among partitions. A network-only solution can identify this problem, but cannot recover the system. In such cases, the operating system will typically choose the largest partition of nodes and migrate all active threads there, marking all nodes that belong to other partitions as disconnected. To realize this without interruption of execution, the state and data of each active thread needs to be transferred from disconnected nodes to nodes of the active partition, while no available paths to connect these nodes exist in the underlying network. This is a complementary problem, recently tackled by DRAIN [13]. DRAIN is a recovery algorithm that uses dedicated emergency links to save the architectural state and cached data of disconnected nodes via cache-to-cache transfers.
VIII. CONCLUSIONS
We have presented Ariadne, an agnostic reconfiguration algorithm for NoCs, capable of circumventing large numbers of simultaneous faults, and able to handle unreliable future silicon technologies. Ariadne utilizes up*/down* for high performance and deadlock-free routing in irregular network topologies that result from large numbers of faults, and offers performance gains ranging from 40% to 140% (for 50 faults) during normal operation, compared to state-of-the-art fault tolerant solutions. It guarantees that if a path between two nodes exists, the reconfiguration algorithm will enable at least one deadlock-free path between them. Ariadne is implemented in a fully distributed manner, with nodes coordinating to explore the surviving topology, resulting in very simple hardware and low complexity. At 1.97% area overhead, Ariadne is a parsimonious solution for many-core processor designs of the future, delivering performance and reliable functionality on unreliable silicon.
ACKNOWLEDGMENT
The authors acknowledge the support of the Gigascale Systems Research Center and Interconnect Focus Center, research centers funded under the Focus Center Research Program (FCRP), a Semiconductor Research Corporation entity.
REFERENCES
[1] Intel Core i7 processor. www.intel.com/products/processor/corei7.
[2] N. Agarwal, T. Krishna, L.-S. Peh, and N. K. Jha. Garnet: A detailed on-chip network model inside a full-system simulator. Proc. ISPASS, 2009.
[3] K. Aisopos, C.-H. O. Chen, and L.-S. Peh. Enabling system-level modeling of variation-induced faults in networks-on-chip. In Proc. DAC, 2011.
[4] S. Bell et al. Tile64 processor: A 64-core SoC with mesh interconnect. Proc. ISSCC, 2008.
[5] C. Bienia, S. Kumar, J. P. Singh, and K. Li. The PARSEC benchmark suite: Characterization and architectural implications. In Proc. PACT, October 2008.
[6] P. Bogdan, T. Dumitras, and R. Marculescu. Stochastic communication: A new paradigm for fault-tolerant networks-on-chip. Hindawi VLSI Design, 2007.
[7] S. Y. Borkar. Microarchitecture and design challenges for gigascale integration. Proc. MICRO, 2004.
[8] S. Y. Borkar. Designing reliable systems from unreliable components: The challenges of transistor variability and degradation. IEEE Micro, 25(6):10–16, 2005.
[9] S. Chalasani and R. Boppana. Communication in multicomputers with non-convex faults. IEEE Trans. Computers, 1997.
[10] L. Cherkasova, V. Kotov, and T. Rokicki. Fibre channel fabrics: Evaluation and design. In International Conference on System Sciences, 1995.
[11] K. Constantinides, S. Plaza, J. Blome, B. Zhang, V. Bertacco, S. Mahlke, T. Austin, and M. Orshansky. Bulletproof: a defect-tolerant CMP switch architecture. Proc. HPCA, Feb 2006.
[12] W. J. Dally, L. R. Dennison, D. Harris, K. Kan, and T. Xanthopoulos. The reliable router: A reliable and high-performance communication substrate for parallel computers. In Proc. PCRCW, 1994.
[13] A. DeOrio, K. Aisopos, V. Bertacco, and L.-S. Peh. DRAIN: Distributed recovery architecture for inaccessible nodes in multi-core chips. In Proc. DAC, 2011.
[14] D. Fick, A. DeOrio, V. Bertacco, D. Sylvester, and D. Blaauw. A highly resilient routing algorithm for fault-tolerant NoCs. Proc. DATE, 2009.
[15] D. Fick, A. DeOrio, J. Hu, V. Bertacco, D. Blaauw, and D. Sylvester. Vicis: a reliable network for unreliable silicon. In Proc. DAC, pages 812–817, 2009.
[16] C. J. Glass and L. M. Ni. Fault-tolerant wormhole routing in meshes without virtual channels. IEEE Trans. Parallel and Distributed Systems, 7(6), 1996.
[17] M. E. Gomez, J. Duato, J. Flich, P. Lopez, A. Robles, N. A. Nordbotten, O. Lyse, and T. Skeie. An efficient fault-tolerant routing methodology for meshes and tori. IEEE Computer Architecture Letters, 3(1), 2004.
[18] C.-T. Ho and L. Stockmeyer. A new approach to fault-tolerant wormhole routing for mesh-connected parallel computers. IEEE Trans. Computers, 53(4), 2004.
[19] I. Keane, S. Venkatraman, P. Butzen, and C. Kim. An array-based test circuit for fully automated gate dielectric breakdown characterization. In IEEE Custom Integrated Circuits Conference, pages 121–124, 2008.
[20] S.-P. Kim and T. Han. Fault-tolerant wormhole routing in mesh with overlapped solid fault regions. Parallel Computing, 23(13), 1997.
[21] M. Koibuchi, A. Funahashi, A. Jouraku, and H. Amano. L-turn routing: An adaptive routing in irregular networks. In International Conference on Parallel Processing, 2001.
[22] M. Martin, D. Sorin, B. Beckmann, M. Marty, M. Xu, A. Alameldeen, K. Moore, M. Hill, and D. Wood. Multifacet’s general execution-driven multiprocessor simulator (GEMS) toolset. ACM SIGARCH Computer Architecture News, 33(4), 2005.
[23] A. Mejia, J. Flich, and J. Duato. Segment-based routing: An efficient fault-tolerant routing algorithm for meshes and tori. In Proc. IPDPS, 2006.
[24] M. Pirretti, G. Link, R. Brooks, N. Vijaykrishnan, M. Kandemir, and M. Irwin. Fault tolerant algorithms for network-on-chip interconnect. In Proceedings of the IEEE Computer Society Annual Symposium on VLSI, pages 46–51, 2004.
[25] V. Puente, J. A. Gregorio, F. Vallejo, and R. Beivide. Immunet: A cheap and robust fault-tolerant packet routing mechanism. ACM SIGARCH Computer Architecture News, 32(2), 2004.
[26] J. C. Sancho, A. Robles, and J. Duato. A flexible routing scheme for networks of workstations. In Proc. HiPC, 2000.
[27] M. Schroeder, A. Birrell, M. Burrows, H. Murray, R. Needham, T. Rodheffer, E. Satterthwaite, and C. Thacker. Autonet: a high-speed, self-configuring local area network using point-to-point links. IEEE Journal on Selected Areas in Comm., 9(8), 1991.
[28] J. Srinivasan, S. Adve, P. Bose, and J. A. Rivers. The impact of technology scaling on lifetime reliability. Proc. DSN, pages 177–186, 2004.
[29] C.-C. Su and K. G. Shin. Adaptive fault-tolerant deadlock-free routing in meshes and hypercubes. IEEE Trans. Computers, 45(6), 1996.
[30] S. R. Vangal et al. An 80-title sub-100-W teraflops processor in 65-nm CMOS. IEEE Journal of Solid-State Circuits, 2008.
[31] J. Wu. A fault-tolerant deadlock-free routing protocol in 2D meshes based on odd-even turn model. IEEE Trans. Computers, 52(9), 2003.
|
A subscriber identity module (SIM) card includes: a printed circuit board; a circuit provided on the printed circuit board and capable of executing SIM and RFID functions; a set of circuit contacts provided on the printed circuit board, electrically connected to the circuit via conductive paths, and conforming to a SIM specification; a set of antenna contacts provided on the printed circuit board and electrically connected to the circuit via conductive paths; and a protective plate fixed to the printed circuit board and conforming to standard dimensions of the SIM card. An assembly of the SIM card and a RFID antenna is also disclosed.
22 Claims, 4 Drawing Sheets
ASSEMBLY OF SIM CARD AND RFID ANTENNA
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority of U.S. Provisional Application No. 60/810,862, filed on Jun. 5, 2006.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a subscriber identity module (SIM) card, more particularly to a SIM card that employs a printed circuit board and that has RFID circuits implemented on board.
2. Description of the Related Art
In recent years, handset manufacturers have striven to provide handsets with mobile wallet functionality. For example, in the i-mode mobile wallet proposed by NTT Docomo of Japan, a Felica contactless integrated circuit chip is soldered onto a main board of an NTT third generation (3G) handset module to realize the mobile wallet function. Other methods of building the function into the mobile phone, such as using the near field communication (NFC) integrated circuit chip offered by Philips or an infrared connection port, have been proposed to achieve the same function. However, all of the above methods require customers to use new handsets, thereby reducing consumer appeal.
Therefore, in view of practical considerations and to lower consumer costs, it has been proposed to integrate a mobile wallet module into a subscriber identity module (SIM) card, such as the second generation (2G) global system for mobile communication (GSM) SIM card or the 3G universal subscriber identity module (USIM) card. This approach requires the user to change only the SIM card, not the handset. Since the handset is provided with the mobile wallet function simply by installing a new SIM card, better consumer acceptance is possible.
Referring to FIGS. 1 and 2, a conventional SIM card 1 is shown to include a chip 119 mounted on or embedded in a polyvinyl chloride (PVC) plastic card 12. A surface of the plastic card 12 is provided with a metal foil 110 having eight contact pads C1-C8. The SIM card 1 is usually available in two sizes, i.e., an ISO-sized card and a plug-in sized card. The dimensions of an ISO-sized card are generally similar to those of credit cards. On the other hand, a plug-in sized card is generally 25 mm long, 15 mm wide and less than 0.9 mm thick.
When integrating the radio frequency identification (RFID) function of the mobile wallet module into a plug-in sized SIM card 1, there are several technical problems that should be addressed.
First, there is a need to provide an antenna on the SIM card 1. In view of the required length of the antenna, it is usually in the form of a looped or winding pattern. However, because of the small dimensions of the plug-in sized SIM card 1, not enough space is available to accommodate an effective RFID antenna. To solve this problem, a coil antenna is installed on a back casing of the handset, and two antenna signal feed points on the handset main board are electrically connected to the C4 contact pad 114 and the C8 contact pad 118 of the SIM card 1. Alternatively, the coil antenna may be formed on a flexible printed circuit (FPC) board, and the FPC board antenna is subsequently connected to the SIM card 1. However, according to the ISO-7816-12 specification, the C4 contact pad 114 and the C8 contact pad 118 are allocated for electrical connection to a universal serial bus (USB) port. In addition, most handsets currently available on the market are not built with contacts for electrical connection with the C4 contact pad 114 and the C8 contact pad 118. As a result, connection of the antenna to the C4 contact pad 114 and the C8 contact pad 118 is impractical at this time. Furthermore, the conventional SIM card 1 with the PVC plastic card 12 is not suited for fixing an FPC board thereon. While it is possible to mount the FPC board directly on the C4 contact pad 114 and the C8 contact pad 118 at the top surface of the SIM card 1, the mounting process is prone to errors. In addition, when mounting the FPC board on the SIM card 1, neither the mounting process nor the subsequent testing can be automated.
SUMMARY OF THE INVENTION
Therefore, one object of the present invention is to provide a SIM card that uses a printed circuit board instead of a metal foil to facilitate expansion of functions thereof.
Another object of the present invention is to provide a SIM card with RFID functionality so as to be suitable for mobile wallet applications.
According to one aspect of the present invention, a SIM card includes: a printed circuit board; a circuit provided on the printed circuit board and capable of executing SIM and RFID functions; a set of circuit contacts provided on the printed circuit board, electrically connected to the circuit via conductive paths, and conforming to a SIM specification; a set of antenna contacts provided on the printed circuit board and electrically connected to the circuit via conductive paths; and a protective plate fixed to the printed circuit board and conforming to standard dimensions of the SIM card.
Preferably, the printed circuit board has a first surface and a second surface opposite to the first surface. One of the first and second surfaces is provided with the circuit contacts that conform to the SIM specification and that are electrically connected to the circuit. The antenna contacts are provided on the other one of the first and second surfaces.
Preferably, the circuit is provided on the printed circuit board using chip-on-board techniques, and is covered by the protective plate.
According to another aspect of the present invention, an assembly includes a SIM card and a RFID antenna. The SIM card includes: a printed circuit board; a circuit provided on the printed circuit board and capable of executing SIM and RFID functions; a set of circuit contacts provided on the printed circuit board, electrically connected to the circuit via conductive paths, and conforming to a SIM specification; and a set of antenna contacts provided on the printed circuit board and electrically connected to the circuit via conductive paths. The RFID antenna is connected electrically to the antenna contacts.
In one embodiment, each of the antenna contacts is connected electrically to matching contacts of the RFID antenna via conductive paste.
In the present invention, a printed circuit board is used to replace the metal foil employed in the prior art. Since the printed circuit board can be formed with highly precise and complex conductive paths, it is suitable for electrical connection with different chips or for addition of antenna contacts for expanded functionality. As a result, the RFID antenna can be electrically connected to the added antenna contacts to achieve the RFID function so as to be suitable for mobile wallet applications. The printed circuit board makes it easy to add a compensation capacitor to the RFID antenna. This technique greatly enhances RFID reading performance.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features and advantages of the present invention will become apparent in the following detailed description of the preferred embodiment with reference to the accompanying drawings, of which:
FIG. 1 is a sectional view of a conventional SIM card;
FIG. 2 is a schematic view to illustrate eight contact pads of the conventional SIM card;
FIG. 3 is a sectional view of the preferred embodiment of a SIM card according to the present invention;
FIG. 4 is a schematic view to illustrate an integrated circuit, antenna contacts, a RFID antenna and a compensation capacitor of the preferred embodiment;
FIG. 5 is a schematic view to illustrate eight circuit contacts conforming to a SIM specification of the preferred embodiment;
FIG. 6 is a sectional view to illustrate an assembly of the SIM card of the preferred embodiment and a RFID antenna;
FIG. 7 is an exploded view to illustrate the manufacture of the SIM card of the preferred embodiment; and
FIG. 8 is a sectional view to illustrate relative positions of plate bodies used in the manufacture of the SIM card of the preferred embodiment.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Referring to FIGS. 3, 4 and 5, the preferred embodiment of a SIM card 61 according to the present invention is shown to include a printed circuit board (PCB) 2, a set of antenna contacts 3, an integrated circuit 4, and a protective plate 5. Integrated circuit 4 can be a traditional, single, monolithic IC, or may be implemented as a connected collection of ICs or other circuit elements.
The PCB 2 has a board body 20 with a first surface 21 and a second surface 22 opposite to the first surface 21. The second surface 22 is provided with a set of circuit contacts 231-238 (C1-C8) conforming to a SIM specification. Contact pads 24 are provided on the first surface 21. Some of the contact pads 24 are each connected to a respective one of the circuit contacts 231-238 via a corresponding conductive path 25. The antenna contacts 3 are provided on the first surface 21 and are each connected to a corresponding one of the contact pads 24 via a corresponding conductive path. It should be noted herein that the circuit contacts 231-238 conforming to the SIM specification and the antenna contacts 3 may be provided on the same surface of the board body 20 in other embodiments of this invention. In this embodiment, there are eight of the circuit contacts 231-238 and two of the antenna contacts 3. In addition, one end portion of the board body 20 adjacent to the antenna contacts 3 is indented inwardly at two lateral edges thereof, thereby configuring the board body 20 with wider and narrower end portions. However, in other embodiments of this invention, the board body may have a shape corresponding to that of a plug-in sized SIM card, and may be formed with a cut corner to guide proper insertion of the PCB into an electrical connector.
The integrated circuit 4 is provided on the PCB 2 but may be replaced by other circuits with the same functionality. The integrated circuit 4 includes a subscriber identity chip 41 and a RFID chip 42. The two chips 41, 42 are provided on the first surface 21 of the board body 20 using chip-on-board (COB) techniques, and are connected electrically to the contact pads 24 using bonding wires.
In the present invention, the PCB 2 is used to replace the metal foil employed in the prior art. Since the PCB 2 can be formed with highly precise and complex conductive paths 25, it is suitable for electrical connection with different chips or for addition of the antenna contacts 3 to expand functionality. In addition, since the antenna contacts 3 can be made larger than the circuit contacts 231-238 conforming to the SIM specification, connection to a RFID antenna can be facilitated. In other words, it is not necessary to use two of the circuit contacts 231-238 conforming to the SIM specification for connection to the RFID antenna. Therefore, assuming that the eight circuit contacts 231-238 conform to the ISO-7816-12 specification, two of the circuit contacts 234, 238 (C4, C8) could be used for electrical connection to a universal serial bus (USB) port.
The protective plate 5 is made of plastic and has a shape corresponding to that of a plug-in sized SIM card. The dimensions of the protective plate 5 conform to standard dimensions of the SIM card 61. The protective plate 5 has a cut corner 55 to guide proper insertion into an electrical connector. The protective plate 5 is mounted on the first surface 21 of the board body 20 to cover the chips 41, 42 while exposing the antenna contacts 3. The protective plate 5 is mounted on the PCB 2 by applying adhesive to the first surface 21 of the board body 20. When the SIM card 61 is installed in a mobile communications device, such as a GSM handset or a Personal Handy-phone System (PHS) handset, it can provide the mobile communications device with the basic functions of a conventional handset, such as call making, storage of phone numbers, etc.
Referring to FIG. 6, when it is desired to use a mobile wallet function that is integrated into the SIM card 61, it is required to use an assembly 60 of the SIM card 61 and a RFID antenna 62. First, the RFID antenna 62 that includes a flexible printed circuit board is mounted on the PCB 2 and is electrically connected to the antenna contacts 3. The assembly 60 of the SIM card 61 and the RFID antenna 62 is then installed in the casing of a mobile communications device such that the circuit contacts 231-238 (see FIG. 5) are connected to a main board in the casing, thereby enabling the mobile communications device to execute mobile wallet functions.
Two methods are available for mounting of the RFID antenna 62. In the first method, solder material 72 is employed to fix the RFID antenna 62 to the antenna contacts 3. In the second method, a conductive paste is applied to the contacts of the RFID antenna 62 and/or the antenna contacts 3, followed by a heat-pressing operation to bond adhesively the RFID antenna 62 to the antenna contacts 3.
Referring to FIG. 4, key design parameters of the RFID antenna 62 are disclosed, such as the size of the antenna coil pattern. A size of 22.3 mm by 38 mm is chosen so that the RFID antenna 62 can fit into most mobile handsets, which commonly have an area of 25 mm by 40 mm above the battery, sufficient to accommodate the RFID antenna 62. This is much smaller than the antenna used in typical RFID cards, and a smaller antenna size results in sub-optimal RFID reading performance. A key technique deployed in this invention is to place a compensation capacitor (Ca) on the PCB 2. In this embodiment, the compensation capacitor (Ca) is a 22 pF capacitor connected across the two antenna contacts 3. Optimal RFID reading parameters of 15 MHz resonant frequency and a Q factor of 35 are achieved with eight turns of the antenna coil pattern. In the manufacture of commonly used RFID or contactless smart cards, no capacitor is used because it is very difficult to embed one in the plastic card. Therefore, the ability to add the compensation capacitor (Ca) is another major advantage of this invention.
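As a rough numerical illustration (our own sketch, not part of the disclosure), the stated 15 MHz resonance and 22 pF compensation capacitor imply a particular antenna inductance through the standard LC resonance relation; the calculation below assumes the tank capacitance is dominated by Ca (ignoring the RFID chip's input capacitance) and models coil losses as a simple series resistance.

```python
import math

# Assumed values from the embodiment: 22 pF compensation capacitor,
# 15 MHz target resonant frequency (the RFID carrier itself is 13.56 MHz).
C_a = 22e-12   # farads
f_res = 15e6   # hertz

# LC resonance: f = 1 / (2*pi*sqrt(L*C))  =>  L = 1 / ((2*pi*f)^2 * C)
L = 1 / ((2 * math.pi * f_res) ** 2 * C_a)
print(f"implied antenna inductance: {L * 1e6:.2f} uH")  # ~5.12 uH

# A Q factor of 35 at 15 MHz implies an effective series resistance of
# R = 2*pi*f*L / Q (series-loss model, an assumption of this sketch).
R = 2 * math.pi * f_res * L / 35
print(f"implied series resistance: {R:.2f} ohm")  # ~13.8 ohm
```

Under these assumptions the eight-turn coil must realize roughly 5 µH, which is consistent with the stated coil geometry being achievable on a small FPC board.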
Referring to FIGS. 7 and 8, in the manufacture of the SIM card 61, a first plate body 501, a second plate body 502, and
a third plate body 503 are arranged in a stack to form a protective plate 5'. The protective plate 5' is a 0.8 mm-thick plastic card having dimensions corresponding to those of a standard credit card and larger than those of the PCB 2. The protective plate 5' is configured with a protective region corresponding to the PCB 2 in dimensions and to be disposed above the PCB 2. Each of the first and second plate bodies 501, 502 is formed with a closed tear line 53' that surrounds the protective region. The first and second plate bodies 501, 502 are further formed with aligned access holes 58 for exposing the antenna contacts 3 on the PCB 2. The second plate body 502 is further formed with a chip receiving space 56 in the protective region for receiving the chips 41, 42 (see FIG. 4). The third plate body 503 is formed with a board receiving hole 57 for receiving the PCB 2. After the plate bodies 501, 502, 503 are bonded together in a stack, the tear lines 53' in the plate bodies 501, 502 are superimposed one above the other, and the chip receiving space 56 is in spatial communication with the board receiving hole 57.
Subsequently, the PCB 2 provided with the antenna contacts 3 and the integrated circuit 4 (see FIG. 4) is mounted adhesively to the protective plate 5' such that the chips 41, 42 are received in the chip receiving space 56, such that the board body 20 of the PCB 2 is disposed in the board receiving hole 57, and such that the antenna contacts 3 are exposed by the access holes 58 in the first and second plate bodies 501, 502. In use, the protective plate 5' is torn at the tear lines 53' to result in the SIM card 61 shown in FIG. 3. The RFID antenna 62 is then mounted on the SIM card 61 if required, and the SIM card 61 is installed in a mobile communications device to permit use of the latter.
In this embodiment, each of the plate bodies 501, 502, 503 is made from polyethylene terephthalate (PET) having a melting point of 120° C. The plate bodies 501, 502, 503 are bonded together by heating at a temperature of more than 100° C. However, other plastic materials capable of withstanding high temperatures may be used for the plate bodies 501, 502, 503.
While the present invention has been described in connection with what is considered the most practical and preferred embodiment, it is understood that this invention is not limited to the disclosed embodiment but is intended to cover various arrangements included within the spirit and scope of the broadest interpretation so as to encompass all such modifications and equivalent arrangements.
What is claimed is:
1. A subscriber identity module (SIM) card comprising:
- a printed circuit board, having a first surface and a second surface opposite to said first surface;
- a circuit provided on said printed circuit board and capable of executing SIM and radio frequency identification (RFID) functions, said circuit provided on one of said first and second surfaces of said printed circuit board;
- a set of circuit contacts provided on said printed circuit board, electrically connected to said circuit via conductive paths, and conforming to a SIM specification, said circuit contacts being provided on the other one of said first and second surfaces of said printed circuit board; and
- a set of antenna contacts provided on said printed circuit board and electrically connected to said circuit via conductive paths, said antenna contacts being provided on said one of said first and second surfaces of said printed circuit board.
2. The SIM card as claimed in claim 1, wherein said circuit is an integrated circuit that includes a subscriber identity module and a RFID module, each of said subscriber identity module and said RFID module being implemented as one of an independent chip and a function module within a chip.
3. The SIM card as claimed in claim 2, further comprising a capacitor connected to said antenna contacts.
4. The SIM card as claimed in claim 3, wherein said circuit and said antenna contacts are provided on said first surface of said printed circuit board, said circuit contacts being provided on said second surface of said printed circuit board, said printed circuit board further having a plurality of contact pads provided on said first surface, each of said conductive paths connecting a respective one of said contact pads to a respective one of said circuit contacts and said antenna contacts, said subscriber identity module and said RFID module being electrically connected to corresponding ones of said contact pads.
5. The SIM card as claimed in claim 4, further comprising a protective plate fixed to said first surface of said printed circuit board to cover said circuit while exposing said antenna contacts.
6. The SIM card as claimed in claim 5, wherein said protective plate has dimensions conforming to standard dimensions of a SIM card.
7. The SIM card as claimed in claim 5, wherein said protective plate has dimensions larger than those of said printed circuit board, said protective plate being configured with a protective region corresponding to said printed circuit board in dimensions and to be disposed above said printed circuit board, said protective plate being formed with a tear line surrounding said protective region.
8. The SIM card as claimed in claim 7, wherein said protective plate includes a first plate body, a second plate body and a third plate body arranged in a stack, said tear line being formed in said first and second plate bodies to surround said protective region, said first and second plate bodies being further formed with aligned access holes for exposing said antenna contacts, said second plate body being further formed with a chip receiving space for receiving said circuit, said third plate body being formed with a board receiving hole for receiving said printed circuit board.
9. The SIM card as claimed in claim 8, wherein said protective plate is made of a plastic material.
10. The SIM card as claimed in claim 9, wherein the plastic material is polyethylene terephthalate.
11. The SIM card as claimed in claim 1, comprising eight of said circuit contacts and two of said antenna contacts.
12. The SIM card as claimed in claim 11, wherein two of said circuit contacts are adapted for electrical connection to a USB port.
13. An assembly of a subscriber identity module (SIM) card and a radio frequency identification (RFID) antenna, comprising:
- a SIM card including
- a printed circuit board having a first surface and a second surface opposite to said first surface,
- a circuit provided on said printed circuit board and capable of executing SIM and RFID functions, said circuit being provided on one of said first and second surfaces of said printed circuit board,
- a set of circuit contacts provided on said printed circuit board, electrically connected to said circuit via conductive paths, and conforming to a SIM specification, said circuit contacts being provided on the other one of said first and second surfaces of said printed circuit board, and
- a set of antenna contacts provided on said printed circuit board and electrically connected to said circuit via
conductive paths, said antenna contacts provided on said one of said first and second surfaces of said printed circuit board; and
a RFID antenna electrically connected to said antenna contacts on said printed circuit board.
14. The assembly as claimed in claim 13, wherein said RFID antenna includes a flexible printed circuit board.
15. The assembly as claimed in claim 13, further comprising solder material for fixing said RFID antenna to said antenna contacts.
16. The assembly as claimed in claim 13, further comprising conductive paste for fixing said RFID antenna to said antenna contacts.
17. The assembly as claimed in claim 13, wherein said circuit is an integrated circuit.
18. The assembly as claimed in claim 17, wherein said integrated circuit includes a subscriber identity chip and a RFID chip.
19. The assembly as claimed in claim 18, wherein said circuit and said antenna contacts are provided on said first surface of said printed circuit board, said circuit contacts being provided on said second surface of said printed circuit board, said printed circuit board further having a plurality of contact pads provided on said first surface, each of said conductive paths connecting a respective one of said contact pads to a respective one of said circuit contacts and said antenna contacts, said subscriber identity chip and said RFID chip being electrically connected to corresponding ones of said contact pads.
20. The assembly as claimed in claim 19, further comprising a protective plate fixed to said first surface of said printed circuit board to cover said circuit while exposing said antenna contacts.
21. The assembly as claimed in claim 20, wherein said protective plate is made of a plastic material.
22. The assembly as claimed in claim 13, wherein said SIM card includes eight of said circuit contacts and two of said antenna contacts.
Asymptotically scale-invariant occupancy of phase space makes the entropy $S_q$ extensive
Constantino Tsallis*†‡, Murray Gell-Mann*§, and Yuzuru Sato*
*Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501; and †Centro Brasileiro de Pesquisas Físicas, Rua Xavier Sigaud 150, 22290-180 Rio de Janeiro, Brazil
Contributed by Murray Gell-Mann, July 25, 2005
Phase space can be constructed for $N$ equal and distinguishable subsystems that could be probabilistically either weakly correlated or strongly correlated. If they are locally correlated, we expect the Boltzmann–Gibbs entropy $S_{BG} = -k \sum_i p_i \ln p_i$ to be extensive, i.e., $S_{BG}(N) \propto N$ for $N \to \infty$. In particular, if they are independent, $S_{BG}$ is strictly additive, i.e., $S_{BG}(N) = NS_{BG}(1)$, $\forall N$. However, if the subsystems are globally correlated, we expect, for a vast class of systems, the entropy $S_q = k[1 - \sum_i p_i^q]/(q - 1)$ (with $S_1 = S_{BG}$) for some special value of $q \neq 1$ to be the one which is extensive [i.e., $S_q(N) \propto N$ for $N \to \infty$]. Another concept which is relevant is strict or asymptotic scale-freedom (or scale-invariance), defined as the situation for which all marginal probabilities of the $N$-system coincide or asymptotically approach (for $N \to \infty$) the joint probabilities of the $(N - 1)$-system. If each subsystem is a binary one, scale-freedom is guaranteed by what we hereafter refer to as the Leibnitz rule, i.e., the sum of two successive joint probabilities of the $N$-system coincides or asymptotically approaches the corresponding joint probability of the $(N - 1)$-system. The kinds of interplay of these various concepts are illustrated in several examples. One of them justifies the title of this paper. We conjecture that these mechanisms are deeply related to the very frequent emergence, in natural and artificial complex systems, of scale-free structures and to their connections with nonextensive statistical mechanics. Summarizing, we have shown that, for asymptotically scale-invariant systems, the entropy which matches standard, Clausius-like prescriptions of classical thermodynamics is $S_q$ with $q \neq 1$, and not $S_{BG}$.
The entropy $S_q$ (1) is defined through
$$S_q \equiv k \frac{1 - \sum_{i=1}^{W} p_i^q}{q - 1} \quad (q \in \mathbb{R}; \quad S_1 = S_{BG} \equiv -k \sum_{i=1}^{W} p_i \ln p_i),$$
where $k$ is a positive constant ($k = 1$ from now on) and BG stands for Boltzmann–Gibbs. This expression is the basis of nonextensive statistical mechanics (16–18) (see http://tsallis.cat.cbpf.br/biblio.htm for a regularly updated bibliography), a current generalization of BG statistical mechanics. For $q \neq 1$, $S_q$ is nonadditive (hence nonextensive) in the sense that for a system composed of (probabilistically) independent subsystems, the total entropy differs from the sum of the entropies of the subsystems. However, the system may have special probability correlations between the subsystems such that extensivity is valid, not for $S_{BG}$, but for $S_q$ with a particular value of the index $q \neq 1$. In this paper, we address the case where the subsystems are all equal and distinguishable. Their correlations may exhibit a kind of scale-invariance. We may regard some of the situations of correlated probabilities as related to the remark (see refs. 19–23 and references therein) that $S_q$ for $q \neq 1$ can be appropriate for nonlinear dynamical systems that have phase space unevenly occupied. We return to this point later.
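For the reader's convenience (a standard step, made explicit here rather than taken from the paper), the statement $S_1 = S_{BG}$ follows by expanding $p_i^q$ around $q = 1$:

$$p_i^q = p_i e^{(q-1)\ln p_i} = p_i\left[1 + (q - 1)\ln p_i + O\big((q-1)^2\big)\right],$$

so that, using $\sum_i p_i = 1$,

$$\lim_{q \to 1} S_q = \lim_{q \to 1} \frac{1 - \sum_i p_i^q}{q - 1} = -\sum_i p_i \ln p_i = S_{BG}.$$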
We shall consider two types of models. The first one involves $N$ binary variables ($N = 1, 2, 3, \ldots$), and the second one involves $N$ continuous variables ($N = 1, 2, 3$). In both cases, certain correlations that are scale-invariant in a suitable limit can create an intrinsically inhomogeneous occupation of phase space. Such systems are strongly reminiscent of the so called scale-free networks (24, 25), with their hierarchically structured hubs and spokes and their nearly forbidden regions.
Discrete Models
Some Basic Concepts. The most general probabilistic sets for $N$ equal and distinguishable binary subsystems are given in Fig. 1 with
$$\sum_{n=0}^{N} \frac{N!}{(N-n)!\,n!} \, \pi_{N,n} = 1$$
$$(\pi_{N,n} \in [0, 1]; \quad N = 1, 2, 3, \ldots; \quad n = 0, 1, \ldots, N).$$
Let us from now on call Leibnitz rule the following recursive relation:
$$\pi_{N,n} + \pi_{N,n+1} = \pi_{N-1,n} \quad (n = 0, 1, \ldots, N-1; \quad N = 2, 3, \ldots).$$
This relation guarantees what we refer to as scale-invariance (or scale-freedom) in this article. Indeed, it guarantees that, for any value of $N$, the associated joint probabilities $\{\pi_{N,n}\}$ produce marginal probabilities which coincide with $\{\pi_{N-1,n}\}$. Assuming $\pi_{10} + \pi_{11} = 1$, and taking into account that the $N$th row has one more element than the $(N - 1)$th row, a particular model is characterized by giving one element for each row. We shall adopt the convention of specifying the set $\{\pi_{N,0}\}$ (with $\pi_{N,0} \in [0, 1]$, $\forall N$). Everything follows from it. There are many sets $\{\pi_{N,0}\}$ that satisfy Eq. 3. Let us illustrate with a few simple examples:
(i) $\pi_{N,0} = (2\pi_{10})^N/(N+1)$ $(0 \leq \pi_{10} \leq 1/2;\ N = 1, 2, 3, \ldots)$. We have that all $2^N$ states have nonzero probability if $0 < \pi_{10} \leq 1/2$.
---
*To whom correspondence may be addressed. E-mail: firstname.lastname@example.org or email@example.com.
§In the field of cybernetics and control theory, the form $S_q = \frac{2^{q-1}}{2^{q-1}-1}\left(1 - \sum_i p_i^q\right)$ was introduced in ref. 2, and was further discussed in ref. 3. With a different prefactor, it was rediscovered in ref. 4, and further commented on in ref. 5. More historical details can be found in refs. 6–8. This type of entropic form was rediscovered once again in 1988 (16–18) and was postulated as the basis of a possible generalization of Boltzmann–Gibbs statistical mechanics, nowadays known as nonextensive statistical mechanics.
¶Many entropic forms are related to $S_q$. A special mention is deserved by the Renyi entropy $S_q^R \equiv (\ln \sum_i p_i^q)/(1 - q) = \ln[1 + (1 - q)S_q]/(1 - q)$, and by the Landsberg–Vedral–Abe–Rajagopal entropy (or just normalized $S_q$ entropy) $S_q^{LVAR} \equiv S_q/\sum_i p_i^q = [1 - (\sum_i p_i^q)^{-1}]/(1 - q) = S_q/[1 + (1 - q)S_q]$. The Renyi entropy was, according to ref. 9, first introduced in ref. 10, and then in ref. 11. The Landsberg–Vedral–Abe–Rajagopal entropy was independently introduced in ref. 12 and in ref. 13. Both $S_q^R$ and $S_q^{LVAR}$ are monotonic functions of $S_q$; consequently, under identical constraints, they are all optimized by the same probability distribution. A two-parameter entropic form was introduced in ref. 14 which reproduces both $S_q$ and the Renyi entropy as particular cases. This scheme has been recently enlarged elegantly in ref. 15. $S_{BG}$ and $S_q$ (as well as a few other entropic forms that we do not address here) are concave and Lesche-stable for all $q > 0$, and provide a finite entropy production per unit time; $S_q^R$, $S_q^{LVAR}$, the Sharma–Mittal, and the Masi entropic forms (as well as others that we do not address here) violate all of these properties.
© 2005 by The National Academy of Sciences of the USA
The particular case $\pi_{10} = 1/2$ recovers the original Leibnitz triangle itself (26) (see Fig. 2).
(ii) $\pi_{N,0} = (\pi_{10})^{N^\alpha}$ ($\alpha \geq 0$; $N = 1, 2, 3, \ldots$). The $\alpha = 1$ instance corresponds to independent subsystems, i.e., $\pi_{N,n} = (\pi_{10})^{N-n} (1 - \pi_{10})^n$. If $0 < \pi_{10} < 1$, then all $2^N$ states have nonzero probability. The $\alpha = 0$ instance corresponds to $\pi_{N,0} = \pi_{10}$, $\pi_{N,n} = 0$ ($n = 1, 2, \ldots, N-1$) and $\pi_{N,N} = 1 - \pi_{10}$. If $0 < \pi_{10} < 1$, then only two among the $2^N$ states have nonzero probability, $\forall N$, namely the states associated with $\pi_{N,0}$ and $\pi_{N,N}$.
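As an illustrative aside (our own sketch, not from the paper), the following Python fragment builds the joint probabilities $\{\pi_{N,n}\}$ from a chosen first column $\{\pi_{N,0}\}$ via the Leibnitz rule of Eq. 3 and checks the normalization of Eq. 2; the function name and the cutoff `N_max` are ours.

```python
from math import comb

def leibnitz_triangle(pi_N0, N_max):
    """Build the joint probabilities pi[N][n] from a chosen first column
    pi_N0(N), using the Leibnitz rule pi[N][n+1] = pi[N-1][n] - pi[N][n]."""
    pi = {1: [pi_N0(1), 1.0 - pi_N0(1)]}
    for N in range(2, N_max + 1):
        row = [pi_N0(N)]
        for n in range(N):
            row.append(pi[N - 1][n] - row[n])
        pi[N] = row
    return pi

# Example (i) with pi_10 = 1/2: pi_{N,0} = (2 pi_10)^N/(N+1) = 1/(N+1),
# which reproduces the Leibnitz harmonic triangle p_{N,n} = 1/[(N+1) C(N,n)].
pi = leibnitz_triangle(lambda N: 1.0 / (N + 1), N_max=8)
for N, row in pi.items():
    # Normalization (Eq. 2) is preserved automatically by the rule.
    assert abs(sum(comb(N, n) * p for n, p in enumerate(row)) - 1.0) < 1e-12
print(pi[4])  # approximately [0.2, 0.05, 0.0333, 0.05, 0.2]
```

Note that normalization is preserved automatically: using Pascal's identity for the binomial coefficients, the Leibnitz rule maps a normalized $(N-1)$th row into a normalized $N$th row.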
We may relax the Leibnitz rule to some extent by considering those cases where the rule is satisfied only asymptotically, i.e.,
$$\lim_{N \to \infty} \frac{\pi_{N,n} + \pi_{N,n+1}}{\pi_{N-1,n}} = 1 \quad (n = 0, 1, 2, \ldots). \quad [4]$$
Such cases will be said to be not strictly but *asymptotically scale-invariant* (or *asymptotically scale-free*). This is, for a variety of reasons, the situation in which we are primarily interested. The main reason is that what vast classes of natural and artificial systems typically exhibit is not precisely power-laws, but behaviors *which only asymptotically become power-laws* (once we have corrected, of course, for any finite size effects). This is consistent with the fact that within nonextensive statistical mechanics $S_q$ is optimized by $q$-exponential functions (see ref. 1 and references therein and refs. 27 and 28), which only asymptotically yield power-laws. It is consistent also with a new central limit theorem that has been recently conjectured (29) for specially correlated random variables.
Let us now introduce a further concept, namely *q-describability*. A model constituted by $N$ equal and distinguishable subsystems will be called *q-describable* if a value of $q$ exists such that $S_q(N)$ is *extensive*, i.e., $\lim_{N \to \infty} S_q(N)/N < \infty$. If that special value of $q$ equals unity, this corresponds to the usual BG universality class. If that value of $q$ differs from unity, we will have nontrivial universality classes. If the subsystems $\{A_i\}$ are not necessarily equal, the system is *q-describable* if an entropic index $q$ exists such that $\lim_{N \to \infty} [S_q(A_1 + A_2 + \ldots + A_N)/\Sigma_{i=1}^N S_q(A_i)] < \infty$. It should be clear that we could equally well demand the extensivity of, say, $S_{2-q}$ [or even of $S_{Q(q)}$, where $Q(q)$ is some monotonically decreasing function of $q$ satisfying $Q(1) = 1$] instead of that of $S_q$. This would of course have the effect of yielding nontrivial solutions for $q > 1$ whenever we had solutions for $q < 1$ when the extensivity imposed was that of $S_q$.
Finally, let us point out that we might consider the subsystems of a probabilistic system to be either *strongly* (or *globally*) *correlated* or *weakly* (or “*locally*”) *correlated*. The trivial case of *independence*, i.e., when the subsystems are *uncorrelated*, is of course a particular case of weakly correlated. Let us make these notions more precise. A system is weakly correlated if for every generic (different from zero and from unity) joint probability $\pi_{i_1,i_2,\ldots,i_N}^{A_1+A_2+\cdots+A_N}$ a set of individual probabilities $\{\pi_{i_r}^{A_r}\}$ exists such that $\lim_{N \to \infty} (\pi_{i_1,i_2,\ldots,i_N}^{A_1+A_2+\cdots+A_N})/\Pi_{r=1}^N \pi_{i_r}^{A_r} = 1$. Otherwise, the system is said to be strongly correlated. The particular case of independence corresponds to
$$\pi_{i_r}^{A_r} = \Sigma_{j_1,j_2,\ldots,j_{r-1},j_{r+1},\ldots,j_N} \pi_{i_1,i_2,\ldots,i_N}^{A_1+A_2+\cdots+A_N} \quad (r = 1, 2, \ldots, N).$$
If the subsystems are equal and binary, this definition becomes as follows: a system is weakly correlated if, for generic $\pi_{N,n}$, a probability $p_0$ exists such that $\lim_{N \to \infty} \pi_{N,n}/[p_0^{N-n} (1 - p_0)^n] = 1$. Otherwise the system is said to be strongly correlated. The particular case of independence corresponds to $p_0 = \pi_{10}$. In the present sense, weakly correlated systems could also be thought of, and referred to, as *asymptotically uncorrelated*. The interplay of scale-invariance, $q$-describability, and global correlation is schematized in Fig. 3.
We have verified that all systems illustrated in $i$ and $ii$ above belong to the $q = 1$ class (see examples in Fig. 4). We next address $q \neq 1$ systems.
**A Discrete Model That Is Not Asymptotically Scale-Invariant.** Let us consider the probabilistic structure indicated in Fig. 5, where, for given $N$, only the $d + 1$ first elements are different from zero, with $d = 0, 1, 2, \ldots, N$.
As we see, $\pi_{N,n}^{(d)} = 0$ for $N \geq d + 1$ and $n = d + 1, d + 2, \ldots$.
---
‖On the basis of what we have called here the *Leibnitz rule*, L. G. Moyano, C.T., and M.G.-M. (44) obtained interesting preliminary numerical results based on the so-called $q$-product (30, 31) and its relation to a possible $q$-generalization of the central limit theorem. More precisely, imposing the Leibnitz rule with $\pi_{N,0}^{-1} = p^{-1} \otimes_q p^{-1} \otimes_q \cdots \otimes_q p^{-1} = [Np^{q-1} - (N-1)]^{1/(1-q)}$ (with $\pi_{N,0} = p^N$ for $q = 1$), one verifies for $p = 1/2$ that, as $N$ increases, the probability distribution appears to approach a $q$-generalized Gaussian: the centered and rescaled distribution gradually becomes (say for even $N$) proportional to $e_{q_{\rm exp}}^{-\beta r^2}$, where the exponent appears to satisfy $q_{\rm exp} = 2 - (1/q)$. This relation is obtained by applying the $q \to (2 - q)$ transformation after the $q \to 1/q$ transformation (notice that it can be rewritten as $q = 1/(2 - q_{\rm exp})$, which is the application of the same two transformations in the other possible order). The combinations of these two transformations define an interesting mathematical structure which might well be at the basis of the $q$-triplet conjectured in ref. 32 and recently confirmed (33) with data received from the spacecraft Voyager 1 in the distant heliosphere. The $q$-triplet observed in the solar wind is given by $q_{\rm sen} \simeq -0.6 \pm 0.2$, $q_{\rm rel} \simeq 3.8 \pm 0.3$, and $q_{\rm stat} \simeq 1.75 \pm 0.06$ (33). These values are consistent with $q_{\rm rel} + (1/q_{\rm sen}) = 2$ and $q_{\rm stat} + (1/q_{\rm rel}) = 2$, hence $1 - q_{\rm sen} = (1 - q_{\rm stat})/(3 - 2q_{\rm stat})$. Therefore, we expect only one $q$ of the triplet to be independent. The most precisely determined value in ref. 33 is $q_{\rm stat} = 1.75 = 7/4$. It immediately follows that $q_{\rm sen} = -1/2$ (nearly consistent with $-0.6 \pm 0.2$) and $q_{\rm rel} = 4$ (nearly consistent with $3.8 \pm 0.3$). There may be some difficulties with this approach, and efforts are being made to clear up the situation.
The total number of states is given by \( W(N) = 2^N \) (\( \forall d \)), but the number of states with nonzero probability is given by
\[
W_{\text{eff}}(N, d) = \sum_{k=0}^{d} \frac{N!}{(N-k)!k!},
\]
where eff stands for effective. For example, \( W_{\text{eff}}(N, 0) = 1 \), \( W_{\text{eff}}(N, 1) = N + 1 \), \( W_{\text{eff}}(N, 2) = \frac{1}{2}N(N+1) + 1 \), \( W_{\text{eff}}(N, 3) = \frac{1}{6}N(N^2 + 5) + 1 \), and so on. For fixed \( d \) and \( N \to \infty \) we have that
\[
W_{\text{eff}}(N, d) \sim \frac{N^d}{d!}.
\]
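This asymptotic form follows because, for fixed \(d\), the \(k = d\) term dominates the sum (a standard step we make explicit here):
\[
\binom{N}{d} = \frac{N(N-1)\cdots(N-d+1)}{d!} \sim \frac{N^d}{d!} \quad (N \to \infty),
\]
while all terms with \(k < d\) grow only as \(O(N^{d-1})\).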
Let us now make a simple choice for the nonzero probabilities, namely equal probabilities. In other words,
\[
\pi_{N,n}^{(d)} = \frac{1}{2^N} \quad \text{(if } N \leq d),
\]
\[
\pi_{N,n}^{(d)} = \frac{1}{W_{\text{eff}}(N, d)} \quad \text{(if } N > d \text{ and } n \leq d), \text{ and}
\]
\[
\pi_{N,n}^{(d)} = 0 \quad \text{(if } N > d \text{ and } n > d).
\]
See Fig. 6 for an illustration of this model.
The entropy for this model is given by
\[
S_q(N) = \ln_q W_{\text{eff}}(N, d) \equiv \frac{[W_{\text{eff}}(N, d)]^{1-q} - 1}{1 - q} \sim \frac{N^{d(1-q)}}{(1-q)(d!)^{1-q}},
\]
where we have now used Eq. 6. Consequently, \(S_q\) is extensive [i.e., \(S_q(N) \propto N\) for \(N \to \infty\)] if and only if
\[
q = 1 - \frac{1}{d}.
\]
Hence, if \( d = 1, 2, 3 \ldots \), the entropic index monotonically approaches the BG limit from below. We can immediately verify in Fig. 6 (and using Eq. 7) that this model violates the Leibnitz rule for all \( N \), including asymptotically when \( N \to \infty \). Consequently, it is neither strictly nor asymptotically scale-free. However, it is \( q \)-describable (see Fig. 3).
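A minimal numerical check (ours, not the paper's) of this extensivity condition: for \(d = 2\) and \(q = 1 - 1/d = 1/2\), the ratio \(S_q(N)/N\) approaches a finite constant, whereas the BG ratio \(\ln W_{\text{eff}}(N, d)/N\) would vanish.

```python
from math import comb

def W_eff(N, d):
    # Number of states with nonzero probability (Eq. 5)
    return sum(comb(N, k) for k in range(d + 1))

def S_q_equal(W, q):
    # S_q for W equally probable states, i.e., the q-logarithm ln_q W
    # (valid for q != 1; for q = 1 it reduces to ln W)
    return (W ** (1 - q) - 1) / (1 - q)

d = 2
q = 1 - 1 / d  # = 1/2
for N in (10, 100, 1000, 10000):
    print(N, S_q_equal(W_eff(N, d), q) / N)
# S_q(N)/N tends to a finite constant (~sqrt(2) for d = 2),
# confirming extensivity at q = 1 - 1/d.
```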
**An Asymptotically Scale-Invariant Discrete Model.** Starting with the Leibnitz harmonic triangle, we shall construct a heterogeneous distribution \( \pi_{N,n}^{(d)} \). The Leibnitz triangle is given in Fig. 2 and satisfies
\[
p_{N,n} = p_{N+1,n} + p_{N+1,n+1},
\]
\[
p_{N,n} = \frac{1}{(N+1)} \frac{(N-n)!n!}{N!}.
\]
We now define
\[
\pi_{N,n}^{(d)} \equiv \begin{cases}
p_{N,n} + t_{N,n}^{(d)} s_N^{(d)} & (n \leq d) \\
0 & (n > d),
\end{cases}
\]
where the excess probability \(s_N^{(d)}\) and the redistribution ratios \(t_{N,n}^{(d)}\) (with \(0 < \varepsilon < 1\)) are defined through
\[
s_N^{(d)} \equiv \sum_{k=d+1}^{N} \frac{N!}{(N-k)!\,k!} \, p_{N,k} = \frac{N-d}{N+1}
\]
and
\[
t_{N,n}^{(d)} \equiv \begin{cases}
1 - \varepsilon & (n = 0) \\
(1 - \varepsilon)\varepsilon^n \dfrac{(N-n)!\,n!}{N!} & (0 < n < d) \\
\varepsilon^d \dfrac{(N-d)!\,d!}{N!} & (n = d)
\end{cases}
\]
(see Fig. 7). One easily checks that \(\sum_{n=0}^{d} [N!/(N-n)!\,n!]\, t_{N,n}^{(d)} = (1-\varepsilon)(1 + \varepsilon + \cdots + \varepsilon^{d-1}) + \varepsilon^d = 1\), so the probabilities \(\{\pi_{N,n}^{(d)}\}\) remain normalized. We have verified for \(d = 1, 2, 3, 4\) and \(N \to \infty\) a result that we expect to be correct for all \(d < N/2\), namely that \(0 < \pi_{N,n+1} \ll \pi_{N,n} \sim \pi_{N-1,n} \ll 1\), hence
\[
\lim_{N \to \infty} \frac{\pi_{N-1,n}^{(d)}}{\pi_{N,n}^{(d)} + \pi_{N,n+1}^{(d)}} = 1,
\]
\[
\lim_{N \to \infty} \frac{\pi_{N-1,d}^{(d)}}{\pi_{N,d}^{(d)} + 0} = 1.
\]
In other words, the Leibnitz rule is asymptotically satisfied for the entire probability set \(\{\pi_{N,n}\}\), i.e., this system has asymptotic scale invariance. Its entropy is given by
\[
S_q(N,d) = \frac{1 - \sum_{k=0}^{d} [N!/(N-k)!k!] \left[ \pi_{N,k}^{(d)} \right]^q}{q - 1},
\]
and we verify that a value of \(q\) exists such that \(\lim_{N \to \infty} S_q(N,d)/N\) is finite. Our numerical results suggest that, for \(0 < \varepsilon < 1\) (see Fig. 8),
\[
q = 1 - \frac{1}{d}.
\]
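Again as an illustrative aside (our own sketch), this model can be checked numerically: the fragment below assembles \(\pi_{N,n}^{(d)}\) from the definitions above and shows \(S_q(N,d)/N\) settling to a finite value for \(q = 1 - 1/d\) (here \(d = 2\), \(\varepsilon = 0.5\), as in Fig. 8).

```python
from math import comb

def pi_asf(N, n, d, eps):
    """Joint probability of the cut Leibnitz triangle with the excess
    probability s_N redistributed via the ratios t_{N,n} (0 < eps < 1)."""
    p = 1.0 / ((N + 1) * comb(N, n))  # Leibnitz harmonic triangle
    s = (N - d) / (N + 1)             # excess probability
    if n == 0:
        t = 1 - eps
    elif n < d:
        t = (1 - eps) * eps ** n / comb(N, n)
    else:  # n == d
        t = eps ** d / comb(N, d)
    return p + t * s

def S_q(N, d, eps, q):
    total = sum(comb(N, n) * pi_asf(N, n, d, eps) ** q for n in range(d + 1))
    return (1 - total) / (q - 1)

d, eps = 2, 0.5
q = 1 - 1 / d
for N in (10, 100, 1000):
    print(N, S_q(N, d, eps, q) / N)  # approaches a finite constant
```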
For a description of a strictly scale-invariant discrete model and a continuous model, see Supporting Text and Figs. 9–17, which are published as supporting information on the PNAS web site.
**Final Remarks**
Let us now critically re-examine the physical entropy, a concept which is intended to measure the nature and amount of our ignorance of the state of the system. As we shall see, extensivity may act as a guiding principle. Let us start with the simple case of an isolated classical system with strongly chaotic nonlinear dynamics, i.e., at least one positive Lyapunov exponent. For almost all possible initial conditions, the system quickly visits the various admissible parts of a coarse-grained phase space in a virtually homogeneous manner. Then, when the system achieves thermodynamic equilibrium, our knowledge is as meager as possible (microcanonical ensemble), i.e., just the Lebesgue measure $W$ of the appropriate (hyper) volume in phase space (continuous degrees of freedom), or the number $W$ of possible states (discrete degrees of freedom). The entropy is given by $S_{BG}(N) = k \ln W(N)$ [Boltzmann principle (34)].** If we consider independent equal subsystems, we have $W(N) = [W(1)]^N$, hence $S_{BG}(N) = NS_{BG}(1)$. If the $N$ subsystems are only locally correlated, we expect $W(N) \sim \mu^N (\mu \geq 1)$, hence $\lim_{N \to \infty} S_{BG}(N)/N = k \ln \mu$, i.e., the entropy is extensive (i.e., asymptotically additive).
Consider now a strongly chaotic case for which we have more information, e.g., the set of probabilities \(\{p_i\} (i = 1, 2, \ldots, W)\) of the states of the system. The form \(S_{BG} = -k \sum_{i=1}^{W} p_i \ln p_i\) yields \(S_{BG}(A + B) = S_{BG}(A) + S_{BG}(B)\) in the case of independence \((p_{ij}^{A+B} = p_i^A p_j^B)\). This form, although more general than \(klnW\) (corresponding to equal probabilities), still satisfies additivity. It frequently happens, though, that we do not know the entire set \(\{p_i\}\), but only some constraints on this set, besides the trivial one \(\sum_{i=1}^{W} p_i = 1\). The typical case is Gibbs’ canonical ensemble (Hamiltonian system in longstanding contact with a thermal
---
**A. Einstein: “Usually \(W\) is set equal to the number of ways (complexions) in which a state, which is incompletely defined in the sense of a molecular theory (i.e. coarse grained), can be realized. To compute \(W\) one needs a complete theory (something like a complete molecular-mechanical theory) of the system. For that reason it appears to be doubtful whether Boltzmann’s principle alone, i.e. without a complete molecular-mechanical theory (Elementary theory) has any real meaning. The equation \(S = k \log W + \text{const.}\) appears [therefore] without an Elementary theory—or however one wants to say it—devoid of any meaning from a phenomenological point of view.” [translated by E. G. D. Cohen (34)]. A slightly different translation also is available: “[Usually \(W\) is put equal to the number of complexions. . . . In order to calculate \(W\), one needs a complete (molecular-mechanical) theory of the system under consideration. Therefore it is dubious whether the Boltzmann principle has any meaning without a complete molecular-mechanical theory or some other theory which describes the elementary processes. \(S = R/N \log W + \text{const.}\) seems without content, from a phenomenological point of view, without giving in addition such an Elementartheorie” (35)].
constraint leads to a substantial modification of the description of the states of the system, and the entropy form has to be consistently modified, as shown in any textbook. These expressions may be seen as further generalizations of $S_{BG}$, and the extremizing probabilities constitute, at the level of the one-particle states, generalizations of the just mentioned BG weight, recovered asymptotically at high temperatures. It is remarkable that, through these successive generalizations (and even more, since correlations due to local interactions might exist in addition to those connected with quantum statistics), the entropy remains extensive. Another subtle case is that of thermodynamic critical points, where correlations at all scales exist. There we can still refer to $S_{BG}$, but it exhibits singular behavior.
Finally, we address the completely different class of systems for which the condition of independence is severely violated (typically because the system is only weakly chaotic, i.e., its sensitivity to the initial conditions grows slowly with time, say as a power-law, with the maximal Lyapunov exponent vanishing). In such systems, long range correlations typically exist that unavoidably point toward generalizing the entropic functional, essentially because the effective number of visited states grows with $N$ as something like a power law instead of exponentially. We exhibited here such examples for which (either exact or asymptotic) scale-invariant correlations are present. There the entropy $S_q$ for a special value of $q \neq 1$ is extensive, whereas $S_{BG}$ is not.
Weak departures from independence make $S_{BG}$ lose strict additivity, but not extensivity. Something quite analogous is expected to occur for scale-invariance in the case of $S_q$ for $q \neq 1$. Amusingly enough, we have shown (see also refs. 29 and 38) that the “nonextensive” entropy $S_q$—indeed nonextensive for independent subsystems—acquires extensivity in the presence of suitable asymptotically scale-invariant correlations. Thus arguments presented in the literature that involve $S_q$ (with $q \neq 1$) concomitantly with the assumption of independence should be revisited. In contrast, those arguments based on extremizing $S_q$, without reference to the composition of probabilities, remain unaffected. Although reference to “nonextensive statistical mechanics” still makes sense, say for long-range interactions, we see that the usual generic labeling of the entropy $S_q$ for $q \neq 1$ as “nonextensive entropy” can be misleading.
The asymptotic scale invariance on which we focus appears to be connected with the asymptotically scale-free occupation of phase space that has been conjectured (1) to be dynamically generated by the complex systems addressed by nonextensive statistical mechanics (see also refs. 39 and 40). Extensivity—together with concavity, Lesche-stability (41–43), and finiteness of the entropy production per unit time—increases the suitability of the entropy $S_q$ for linking, with no major changes, statistical mechanics to thermodynamics.
Last but not least, the probability structure of our discrete cases is, interestingly enough, intimately related to both the Pascal and the Leibnitz triangles.
---
**Fig. 8.** Illustrations of the extensivity of $S_q$ for the $q \neq 1$ ASF model (with $\varepsilon = 0.5$): (a) $d = 1$; (b) $d = 2$; and (c) $d = 3$. Notice that the minimal value of $N$ equals $d - 1$. $\lim_{N \to \infty} S_q(N)/N$ vanishes (diverges) if $q > 1 - 1/d$ ($q < 1 - 1/d$), whereas it is finite for $q = 1 - 1/d$.
We are grateful to R. Hersh for pointing out to us that the joint-probability structure of one of our discrete models is analogous to that of the Leibnitz triangle. We have also benefited from very fruitful remarks by J. Marsh and L. G. Moyano. Y.S. was supported by the Postdoctoral Fellowship at Santa Fe Institute. Support from SI International and AFRL is acknowledged as well. Finally, the work of one of us (M.G.M.) was supported by the C.O.U.Q. Foundation and by Insight Venture Management. The generous help provided by these organizations is gratefully acknowledged.
---
††This is due, as well known, to the fractal structure of the correlation clusters existing at critical points. An instructive description in nonextensive terms of such special situations has been recently advanced in refs. 36 and 37.
1. Gell-Mann, M. & Tsallis, C., eds. (2004) *Nonextensive Entropy: Interdisciplinary Applications* (Oxford Univ. Press, New York), pp. 1–422.
2. Havrda, J. & Charvat, F. (1967) *Kybernetika* **3**, 30–35.
3. Vajda, I. (1968) *Kybernetika* **4**, 105–110 (in Czech).
4. Daroczy, Z. (1970) *Inf. Control* **16**, 36–51.
5. Wehrl, A. (1978) *Rev. Mod. Phys.* **50**, 221–260.
6. Tsallis, C. (1995) *Chaos Solitons Fractals* **6**, 539–559.
7. Abe, S. & Okamoto, Y., eds. (2001) *Nonextensive Statistical Mechanics and Its Applications, Lecture Notes in Physics* (Springer, Heidelberg).
8. Tsallis, C. (2002) *Chaos Solitons Fractals* **13**, 371–391.
9. Csiszar, I. (1974) in *Transactions of the Seventh Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, and the European Meeting of Statisticians, 1974* (Reidel, Dordrecht), pp. 73–86.
10. Schutzenberger, P.-M. (1954) *Publ. Inst. Statist. Univ. Paris* **3**, 3.
11. Renyi, A. (1961) *Proc. Fourth Berkeley Symp.* **1**, 547–561 (see also Renyi, A. (1970) *Probability Theory* (North-Holland, Amsterdam)).
12. Landsberg, P. T. & Vedral, V. (1998) *Phys. Lett. A* **247**, 211–217.
13. Rajagopal, A. K. & Abe, S. (1999) *Phys. Rev. Lett.* **83**, 1711–1714.
14. Sharma, B. D. & Mittal, D. P. (1975) *J. Math. Sci.* **10**, 28–40.
15. Masi, M. (2005) *Phys. Lett. A* **338**, 217–224.
16. Tsallis, C. (1988) *J. Stat. Phys.* **52**, 479–487.
17. Curado, E. M. F. & Tsallis, C. (1991) *J. Phys. A* **24**, L69–L72, and errata (1991) **24**, 3187 and (1992) **25**, 1019.
18. Tsallis, C., Mendes, R. S. & Plastino, A. R. (1998) *Physica A* **261**, 534–554.
19. Lyra, M. L. & Tsallis, C. (1998) *Phys. Rev. Lett.* **80**, 53–56.
20. Borges, E. P., Tsallis, C., Ananos, G. F. J. & Oliveira, P. M. C. (2002) *Phys. Rev. Lett.* **89**, 254103-1–254103-4.
21. Ananos, G. F. J. & Tsallis, C. (2004) *Phys. Rev. Lett.* **93**, 020601-1–20601-4.
22. Mayoral, E. & Robledo, A. (2005) *Phys. Rev. E* **72**, 026209-1–026209-7.
23. Casati, G., Tsallis, C. & Baldovin, F. (2005) *Europhys. Lett.*, in press.
24. Watts, D. J. & Strogatz, S. H. (1998) *Nature* **393**, 440–442.
25. Albert, R. & Barabasi, A.-L. (2002) *Rev. Mod. Phys.* **74**, 47–98.
26. Polya, G. (1962) *Mathematical Discovery* (Wiley, New York), Vol. 1, p. 88.
27. Plastino, A. R. & Plastino, A. (1995) *Physica A* **222**, 347–354.
28. Tsallis, C. & Bukman, D. J. (1996) *Phys. Rev. E* **54**, R2197–R2200.
29. Tsallis, C. (2005) *Milan J. Math.* **73**, in press.
30. Nivanen, L., Le Mehaute, A. & Wang, Q. A. (2003) *Rep. Math. Phys.* **52**, 437–444.
31. Borges, E. P. (2004) *Physica A* **340**, 95–101.
32. Tsallis, C. (2004) *Physica A* **340**, 1–10.
33. Burlaga, L. F. & Vinas, A. F. (2005) *Physica A* **356**, 375–384.
34. Cohen, E. G. D. (2005) *Pramana J. Phys.* **64**, 635–642.
35. Pais, A. (1982) *Subtle Is the Lord: The Science and the Life of Albert Einstein* (Oxford Univ. Press, New York).
36. Robledo, A. (2004) *Physica A* **344**, 631–636.
37. Robledo, A. (2005) *Mol. Phys.*, in press.
38. Tsallis, C. (2005) in *Complexity, Metastability and Nonextensivity*, eds. Beck, C., Benedek, G., Rapisarda, A. & Tsallis, C. (World Scientific, Singapore), pp. 13–32.
39. Soares, D. J. B., Tsallis, C., Mariz, A. M. & Silva, L. R. (2005) *Europhys. Lett.* **70**, 70–76.
40. Thurner, S. & Tsallis, C. (2005) *Europhys. Lett.* **72**, 197–203.
41. Lesche, B. (1982) *J. Stat. Phys.* **27**, 419–422.
42. Abe, S. (2002) *Phys. Rev. E* **66**, 046134-1–046134-6.
43. Lesche, B. (2004) *Phys. Rev. E* **70**, 017102-1–017102-4.
44. Moyano, L. G., Tsallis, C. & Gell-Mann, M. (2005) arXiv: cond-mat/0509229.
Bank of Baroda invites online applications for filling -219- vacancies of Full-Time Peon / Sweeper-cum-Peon in its branches/offices in the Greater Mumbai Zone, i.e., in Mumbai City district, Mumbai Suburban district and in Thane, Palghar and Raigad districts. Detailed information on the posts is as follows:
| Sr. No. | Name of Post | No. of Posts | Unreserved | Scheduled Tribe | Other Backward Class | Ex-Servicemen (**) | Persons with Disabilities (**) |
|---------|---------------------|----------------|------------|---------------------|------------------|----------------------|----------------------------------|
| 1 | Peon | 02 | 01 | - | 01 | - | - |
| 2 | Sweeper-cum-Peon | 217 | 139 | 19 | 59 | 53** | 3 Low Vision**, 3 Hearing Impaired**, 1 Orthopedically Challenged** |

** The -53- Ex-Servicemen, -3- Low Vision, -3- Hearing Impaired and -1- Orthopedically Challenged candidates may be recruited under any of the above categories.
Basic Pay: Rs. 9560 – Rs. 18545 per month (+ DA, HRA, etc., as applicable from time to time)
A. Eligibility Criteria:
I. Age (as on 01.11.2015)
Minimum age: 18 years; maximum age: 26 years (for the unreserved category), i.e., the candidate must not have been born earlier than 02.11.1989 and not later than 01.11.1997 (both dates inclusive).
Relaxation in the upper age limit:

| Sr. No. | Category | Age Relaxation |
|--------|-----------------|---------------|
| 1 | Scheduled Tribe | 5 years |
| 2 | Other Backward Class (non-creamy layer) | 3 years |
| 3 | Persons with Disabilities | 10 years |
| 4 | Ex-Servicemen / Disabled Ex-Servicemen | actual period of service rendered in the defence forces + 3 years, subject to a maximum age limit of 50 years |

Note: (i) The relaxation in the upper age limit for SC/ST/OBC candidates is allowed on a cumulative basis with only one of the remaining categories for which age relaxation is permitted, as per items 3 and 4 above.
(ii) The specified maximum age limit applies to candidates of the general category.
(iii) Candidates seeking age relaxation will be required to produce the original(s) and copies of the necessary certificate(s) at the time of the interview, and again later during the recruitment process as required by the Bank.
II. Educational Qualification:
Minimum: pass in Matriculation / SSC / 10th standard or an equivalent examination.
Candidates must possess a valid mark sheet / certificate of having passed the Matriculation / SSC / 10th standard examination at the time of registering the online application, and must indicate the percentage of marks obtained in each examination / course while registering online.
Knowledge of the Marathi language (speaking, reading and writing) is essential.
Ex-servicemen who do not possess the above civil examination qualification should be matriculate ex-servicemen who have obtained the Army Special Certificate of Education, or the corresponding certificate of the Navy or the Air Force, after having completed not less than 15 years of service in the Armed Forces of the Union. Such a certificate must be dated on or before 01.11.2015.
B. लिखित परीक्षा / साक्षात्कार:
उम्मीदवारों की संख्या के अनुसार आवश्यक होने पर बैंक लिखित परीक्षा और उसके बाद साक्षात्कार आयोजित कर सकता है। अन्यथा, केवल साक्षात्कार द्वारा चयन किया जाएगा।
दस्तावेजों की सूची, जिन्हें साक्षात्कार के समय प्रदूषित किया जाना है (जैसा कि लागू हो)
उम्मीदवार की पात्रता और पहचान को प्रमाणित करनेवाले निम्नलिखित दस्तावेजों की मूल प्रति तथा स्वतः प्रमाणित छायाप्रति साक्षात्कार के समय अनिवार्यतः प्रस्तुत करनी होगी। ऐसा न करने पर उम्मीदवार को साक्षात्कार देने से मना किया जा सकता है। साक्षात्कार के समय आवश्यक दस्तावेजों को प्रस्तुत न करने की वजह से आगे की भर्ती प्रक्रिया के लिए उसकी उम्मीदवारी वांछित होगी।
(i) साक्षात्कार कोल नंबर का वैश्य टिडिएटरड
(ii) ऑनलाइन आवेदन पत्र का वैश्य मिटम जनरेटेड प्रिंट आउट
(iii) जन्मतारीख का प्रमाण (नामम भुमिस्थित प्राधिकारियों या एनएएसप्लसी / 10 की कक्षा प्रमाणपत्र जन्मतारीख के माध्य)
(iv) विज्ञापन की मद की सी (रिजें) में निर्देशित फोटो पहचान प्रमाण
(v) सभी शैक्षणिक अहताओं इत्यादि के लिए मार्क शीट और प्रमाणपत्र
(vi) अनु. जाति / अनु. जनजाति / अन्य पिछड़े वर्ग के उम्मीदवारों के मामले में गश्त प्राधिकारी द्वारा जारी जाति प्रमाणपत्र, जो भारत सरकार द्वारा निर्धारित विनिर्दिष्ट नमूने में हो।
अन्य पिछड़े वर्ग के उम्मीदवारों के मामले में, प्रमाणपत्र में विशेष रूप से खंड शामिल हो कि उम्मीदवार क्रीमी लेयर वर्ग से नहीं है, जिन्हें भारत सरकार के तहत आनेवाले मिलियन, पोस्ट और सेवाओं में अन्य पिछड़े आतिशों के आरक्षण के फायदों से दूर रखा गया है। नान क्रीमी लेयर के खंड सहित अन्य पिछड़े वर्ग का प्रमाणपत्र साक्षात्कार , यदि बुलाया जाए तो, की तारीख पर वैश्य होना चाहिए (साक्षात्कार, यदि बुलाया जाए तो, की तारीख से पहले एक वर्ष के भीतर जारी किया होना चाहिए) प्रमाणपत्र में उल्लिखित जाति का नाम केंद्र सरकार की सूची / अधिसूचना के साथ अक्षरण में खानेवाला होना चाहिए।
अन्य पिछड़े वर्ग के उम्मीदवार, जो क्रीमी लेयर वर्ग में आते हैं और / अथवा उनकी जाति को केंद्र सरकार की सूची में शामिल नहीं किया गया है, जो अन्य पिछड़े वर्ग के आरक्षण का लाभ नहीं मिलेगा। ऑनलाइन आवेदन करने समय उन्हें अपनी श्रेणी का उल्लेख सामान्यवेधन के रूप में करना होगा।
(vii) विकलांग श्रेणी के व्यक्तियों के मामले में जिला मेंडिकल बोर्ड द्वारा निर्धारित प्रक्रम में जारी विकलांगता प्रमाणपत्र
(viii) सरकार / अर्थ सरकारी कार्यक्रमों / सार्वजनिक क्षेत्र के उपकरणों में (राष्ट्रीयकृत बैंक और वित्तीय संस्थाओं सहित) सेवारत उम्मीदवारों को साक्षात्कार के समय उनके नियोजन का "अनापति प्रमाणपत्र" प्रस्तुत करना आवश्यक है। ऐसा न करने पर उनकी उम्मीदवारी पर विचार नहीं किया जाएगा और यात्रा भना, यदि कोई अन्यथा दिया जाता हो, नहीं दिया जाएगा। ऐसा स्थान अनापति प्रमाणपत्र मान्य नहीं होगा और ऐसे उम्मीदवारों को साक्षात्कार में भाग लेने की अनुमति नहीं दी जाएगी / आगे की चयन प्रक्रिया में भी वे भाग नहीं ले पाएंगे।
(ix) अनुभव प्रमाणपत्र, यदि कोई है।
(x) पात्रता के समर्थन में अन्य संबंध दस्तावेज
नोट करें: - उम्मीदवार द्वारा उपरोलिखित पात्रता दस्तावेज प्रस्तुत न किए जाने की स्थिति में उन्हें साक्षात्कार देने की अनुमति नहीं दी जाएगी।
कोइ भी आवेदन/ दस्तावेज आवेदक द्वारा, परीक्षा/ साक्षात्कार से पेश की या उसके पश्चात, बैंक को ना भेजे जाए।
अनु, जाति / अनु, जनजाति / अन्य पिछड़े वर्ग / विकलांग व्यक्तियों को प्रभावपूर्ण जारी करने के लिए सक्षम प्राधिकारी निम्नतुल्य हैं:
( भारत सरकार द्वारा समय समय पर अधिसूचना के अनुसार):
अनु, जाति / अनु, जनजाति / अन्य पिछड़े वर्गों के लिए: (i) जिला देशाधिकारी / अतिरिक्त जिला देशाधिकारी / जिलाधीश / उपायुक्त / अतिरिक्त उपायुक्त / उप जिलाधीश / फस्टर क्लास स्टाइंडियरी मेजिस्ट्रेट / सिटी मेजिस्ट्रेट / सब डिविजनल मेजिस्ट्रेट (फस्टर क्लास स्टाइंडियरी मेजिस्ट्रेट की रैंक से नीचे न हो) / तानुका देशाधिकारी / कायाकारी देशाधिकारी / अतिरिक्त सहायक आयुक्त (ii) चीफ प्रेमिडीरी मेजिस्ट्रेट / अतिरिक्त चीफ प्रेमिडीरी मेजिस्ट्रेट / प्रेमिडीरी मेजिस्ट्रेट (iii) राजस्थान अधिकारी जो तहसिलदार की रैंक के नीचे का न हो (iv) उम्मीदवार और उनका परिवार सामान्यतः जहां रहता है, उस श्रेणी का सब डिविजनल अधिकारी
विकलांग व्यक्तियों के लिए: जिला स्तर पर गठित मेडिकल बोर्ड, जिसमें जिले के मुख्य मेडिकल अधिकारी, सब डिविजनल मेडिकल अधिकारी और अस्पत/ नेत्र विशेषज्ञ / ईएमडी मर्जिन शामिल होंगे।
अनु, जाति / अनु, जनजाति / विकलांग व्यक्तियों के प्रभावपूर्ण, के लिए निर्धारित प्रोफार्मा, प्रोफार्मा ए जैसा कि भूतपूर्व नीतिकों के लिए लागू है, को बैंक के वेबसाइट www.bankofbaroda.co.in से डाउनलोड किया जाए। इन संबंधों के उम्मीदवारों को साक्षात्कार के समय अनिवार्य: इन्हीं निर्धारित प्रारूपों में प्रभावपूर्ण प्रस्तुत करना आवश्यक होगा।
C. पहचान संलग्न:
परीक्षा / साक्षात्कार के समय, कॉल लेटर और उसके साथ उम्मीदवार के पहचान पत्र (जिसपर लिखा हुआ नाम और कॉल लेटर पर लिखा नाम हूँ हूँ मिलना चाहिए) जैसे – पैन कार्ड / पासपोर्ट / ड्राइविंग लाइसेंस / मददाता कार्ड / फोटो महिला बैंक पासबुक / राज्यपत्रित अधिकारी / लोक प्रतिनिधि द्वारा जारी फोटो पहचान पत्र, फोटो के साथ / मान्यताप्राप्त कॉलेज / विश्वविद्यालय द्वारा जारी पहचान पत्र / फोटो के साथ आधार कार्ड / कर्मचारी आईडी निरीक्षक को मत्यापन हेतु प्रस्तुत किया जाए।
कॉल लेटर, हाजिरी मूल्य और प्रस्तुत आवश्यक दस्तावेजों में शामिल उम्मीदवार के व्यवहार के संदर्भ में उम्मीदवार की पहचान का सत्यापन किया जाएगा। यदि उम्मीदवार की पहचान संदेहजनक लगे तो उने परीक्षा / साक्षात्कार में भाग लेने की अनुमति नहीं दी जाएगी।
राशन कार्ड और ई-आधार कार्ड को इस प्रोजेक्ट के लिए पहचान का वैश्व प्रमाण नहीं माना जाएगा। जिन उम्मीदवारों ने अपना नाम बदला है, उन्हें मूल राज्यपत्र अधिसूचना / अपना मूल विवाह प्रमाणपत्र / मूल शपथ पत्र प्रस्तुत किया जाने पर ही अनुमति दी जाएगी।
नोट करें: उम्मीदवारों ने ऑनलाइन आवेदन पत्र में / कॉल लेटर में जो नाम उल्लिखित किया है, हूँ हूँ उसी नाम को दशनिवाला फोटो पहचान पत्र को मूल रूप में प्रस्तुत करना आवश्यक है, इसी फोटो पहचान पत्र की छायाप्रति साक्षात्कार के कॉल लेटर के साथ साक्षात्कार में शामिल होने के समय प्रस्तुत करनी होगी। इसे प्रस्तुत ना किए जाने पर साक्षात्कार में शामिल नहीं होने दिया जाएगा।
D. आवेदन कैसे करें
आवेदन 26.11.2015 से 15.12.2015 तक नेवेल ऑनलाइन आवेदन करें। किसी अन्य माध्यम से किए गए आवेदन को स्वीकार नहीं किया जाएगा।
ऑनलाइन आवेदन करने के लिए आवश्यकताएं
(i) उम्मीदवार अपना फोटो और हस्ताक्षर स्कैन करें, फोटो या हस्ताक्षर का साइज 200kb से ज्यादा न हो।
(ii) कैपिटल लेटर्स में किए गए हस्ताक्षर स्वीकार नहीं किए जाएंगे।
(iii) बैंक परीक्षा / साक्षात्कार के लिए कॉल लेटर डाउनलोड करने के लिए मुख्या, वेबसाइट पर शोधित करने और एसएमएस भेजने के अतिरिक्त, ई मेल द्वारा भेज सकता है। अतः उम्मीदवारों को सूचित किया जाता है कि उनके पास वेब ई मेल आईडी होना चाहिए।
ई मेल आईडी देना अनिवार्य नहीं है, पर बेहतर संपर्क के लिए ईमेल आईडी देने की सलाह दी जाती है।
ऑनलाइन आवेदन करने की कार्यपद्धति
(1) ऑनलाइन आवेदन पार्श्व खोलने के लिए "ऑनलाइन आवेदन करने के लिए यहाँ क्लिक करें CLICK HERE TO APPLY ONLINE" पर क्लिक करें।
(2) उम्मीदवारों से अनुरोध है कि ऑनलाइन आवेदन स्वयं एवं सावधानी से भरे क्योंकि ऑनलाइन आवेदन में एक बार भरे गए डाटा में बाद में कोई परिवर्तन सम्भव नहीं है / इसकी अनुमति नहीं है। ऑनलाइन आवेदन प्रस्तुत करने से पहले उम्मीदवारों में अनुरोध है कि वे "Verify" सुविधा का प्रयोग करते हुए ऑनलाइन आवेदन में दी गई सूचना की जांच करें और आवश्यकतानुसार इसमें सुधार करें। सबमिट बटन "Submit" पर क्लिक करने के बाद कोई परिवर्तन सम्भव नहीं होगा। दूसरी स्थितियों उम्मीदवारों की जिम्मेदारी है कि वे ऑनलाइन आवेदन में भरी गई जानकारी के सत्यपन के लिए जिम्मेदार हैं। अतः वे ऑनलाइन आवेदन में भरी गई सूचना की समर्पित जांच करें क्योंकि सबमिशन के बाद कोई परिवर्तन सम्भव नहीं होगा।
कृपया नोट करें कि ऑनलाइन आवेदन में भरी गई सभी तत्वों की सूचना, उम्मीदवार का नाम, संबंध, जन्मतारीख, पता, मोबाइलनंबर, डैं बेंक आईडी, परीक्षा का केंद्र, इत्यादि को अंतिम समझा जाएगा और ऑनलाइन आवेदन पार्श्व के सबमिशन के बाद कोई परिवर्तन सम्भव नहीं होगा / इसकी अनुमति नहीं दी जाएगी। अतः उम्मीदवारों को सूचित किया जाता है कि फार्म भरने में अस्तित्विक सावधानी लें क्योंकि सूचना में परिवर्तन के निर्देश भी अनुरोध / प्रस्तुत पर विचार नहीं किया जाएगा। आवेदन पत्र में गलत और असूची विचारण या आवश्यक जानकारी न दिए जाने के किसी भी संभाव्य परिणाम के लिए बैंक जिम्मेदार नहीं होगा।
जो ऑनलाइन आवेदन किसी भी दृष्टि से अधूरा, जैसे ऑनलाइन आवेदन में उचित पासपोर्ट साईज का फोटो और हस्ताक्षर अपलोड न किया जाना होगा, उसे वैश्य नहीं माना जाएगा।
उम्मीदवारों को उनके अपने हित में सूचित किया जाता है कि अंतिम तारीख से पहले ऑनलाइन आवेदन भरे और अंतिम तारीख की प्रतिस्पर्धा नहीं करें, जिससे कोन्स्टेंट न मिलने / इंटरनेट पर भारी लोड / वेबसाइट जेम के बजाय से बैंक के वेबसाइट को लोग इन करने में असमर्थता / असफलता की स्थितियों का सामना न करना पड़े।
उम्मीदवार यदि उपरोक्त कारणों से या किसी अन्य कारण से जिनपर बैंक का नियंत्रण नहीं है, अंतिम तारीख से पहले अपना आवेदन प्रस्तुत नहीं कर पाता, तो इसकी कोई जिम्मेदारी बैंक पर नहीं होगी। कृपया नोट करें कि आवेदन करने की उपरोक्त कार्यपद्धति ही एकमात्र वैश्य कार्यपद्धति है। किसी अन्य विधा से या अधूरे चरणों में किया गया आवेदन स्वीकार नहीं होगा और पेसे आवेदनों को अस्तीत्वित किया जाएगा।
आवेदक द्वारा अपने आवेदन में दी गई सूचना आवेदक पर व्यक्तित्व: जानकारी रहेगी और उनके द्वारा प्रस्तुत कोई भी सूचना बाद में किसी भी समय गलत पाई जाने की स्थिति में कानूनी कार्यवाही / स्थिति परिणामों का दायित्व उम्मीदवार का होगा।
E. सामान्य निर्देश
1. उम्मीदवारों को सावधानीकर के समय अनिवार्यतः आवश्यक दस्तावेज, जैसे कॉल लेटर, फोटो पहचान पत्र, जिसपर नाम उसी तरीके से लिखा गया हो, जैसा कि ऑनलाइन प्रस्तुत किए गए फार्म में प्रदर्शित है, उपनब्य और प्रस्तुत करना होगा।
2. आवेदन करने से पूर्व, उम्मीदवारों को यह सुनिश्चित करना होगा कि वे विज्ञापन में उल्लिखित पात्रता और अन्य मानदंड पूरे कर रहे हैं। अतः उम्मीदवारों को सूचित किया जाता है कि विज्ञापन ध्यान से पढ़ें और ऑनलाइन आवेदन प्रस्तुत करने के संबंध में दिए गए सभी निर्देशों का अनुपालन करें।
3. स्थिति परिस्थिति / सावधानी / दोनों तथा बाद की प्रक्रिया के लिए उम्मीदवारों को संक्षिप्त सूची में चयन अनिवार्य ही होगा। उम्मीदवार को कॉल लेटर जारी किया है, सिर्फ इसी तथ्य के आधार पर यह तर्क लगाना सही नहीं होगा कि उसकी उम्मीदवारी अंतिम रूप से स्थीतिक की गई है। यदि किसी भी चरण में यह पता लगे कि उम्मीदवार ने पात्रता मानदंड पूरे नहीं किए हैं या उन्होंने नाराज / झूठी जानकारी / प्रमाणपत्र / दस्तावेज प्रस्तुत किए हैं या किसी भौतिक तथ्य को छुपाया है, तो किसी भी आवेदन को, चयन प्रक्रिया के किसी भी चरण में अस्वीकार करने, उसकी उम्मीदवारी निरस्त करने की बैंक को स्वतंत्रता है। यदि बैंक में नियुक्ति के बाद भी कोई दृष्टि / त्रुटियाँ दिखाई देती हैं, तो उसकी सेवा पूरी तरह से समाप्त की जा सकती हैं।
4. उम्मीदवार की पात्रता, किन चरणों पर पात्रता की द्वारा नीति की जानी है, अहिंसा और अन्य पात्रता मानदंड, साक्षात्कार के आयोजन के लिए प्रस्तुत किए जानेवाले कानूनात्मक, सत्यपान आदि से संबंधित मामले तथा भर्ती प्रक्रिया से संबंधित सभी मामलों में बैंक का निर्णय अंतिम होगा और उम्मीदवार के लिए बाध्यकारी होगा। इस दिशा में कोई पत्राचार या व्यक्तिगत पृष्ठतात्त्व पर बैंक विचार नहीं करेगा।
5. एक उम्मीदवार द्वारा केवल एक आवेदन प्रस्तुत किया जाए। यदि आवेदन एक से अधिक होंगे तो अंतिम वैश्य (पूरी तरह से भरा हुआ) आवेदन रखा जाएगा।
6. एक बार पंजीकृत आवेदन को वापस नहीं लिया जा सकेगा।
7. विज्ञापन से उत्पन्न भर्ती प्रक्रिया सहित कोई भी विवाद पूरी तरह से मुंबई में स्थित न्यायालय के अधिकार क्षेत्र में आएगा।
8. किसी भी प्रकार से कैनवर्सिंग का परिणाम अनहित होगा।
9. ऑनलाइन आवेदन में दिए गए पत्र, विवरण में परिवर्तन के किसी भी अनुरोध को स्वीकार नहीं किया जाएगा।
10. इस विज्ञापन के अंग्रेजी के अलावा किसी भी संस्करण के खंडों की व्याख्या / प्रतिपादन में विवाद उत्पन्न होने की स्थिति में बैंक की वेबसाइट पर प्रदर्शित अंग्रेजी संस्करण मान्य समझा जाएगा।
11. उम्मीदवार को यह सुनिश्चित करना चाहिए कि उसके द्वारा अलग अलग स्थानों पर यात्री, अप्रै कॉल लेटर में, हाजिरी सूची में आदि तथा बैंक के साथ किसी भी भावी पत्राचार में जोड़े जानेवाले हस्ताक्षर एकसमान हों और इनमें किसी भी प्रकार की भिन्नता ना हो।
12. उम्मीदवार ऑनलाइन आवेदन फॉर्म में उम्मीदवार द्वारा नया, पहचानने योग्य छायाचित्र अपलोड किया जाए और उम्मीदवार यह सुनिश्चित करें कि प्रक्रिया के विभिन्न चरणों में इस्तेमाल के लिए वे इसी फोटो की प्रतियाँ तैयार रखें। उम्मीदवारों को यह भी सुनिश्चित किया जाता है कि प्रक्रिया पूरी होने तक वे अपना हुलिया न बदलें। विभिन्न चरणों में एक जैसी फोटो प्रस्तुत न करते या पहचान के लिए संदेह पैदा होने की स्थिति में अप्रत्यक्ष समझा जा सकता है।
13. उम्मीदवारों को साक्षात्कार में आने के खर्चें स्वयं करने होंगे। तथापि, बाहरी अनु. जाति / अनु. जनजाति / विकलांग व्यक्ति संबंधों के पात्र उम्मीदवारों को साक्षात्कार में बुलाए जाने पर उन्हें लघुत्तम मार्ग से रेल / बस का तृतीय श्रेणी का आने जाने का कार्यान्वयन या वास्तविक खर्च, दोनों में से जो कम हो, यात्रा का प्रमाण (रेल / बस टिकट इत्यादि) प्रस्तुत किए जाने पर अदा किया जाएगा। यह रियायत उन अनु. जाति / अनु. जनजाति / विकलांग व्यक्ति संबंधों के उम्मीदवारों को नहीं दिया जाएगा, जो अभी केंद्र / राज्य सरकार, निगमों, सार्वजनिक उपक्रमों / स्थानीय योग्य कार्यक्रम, संस्थायों और पंचायतों इत्यादि में सेवारत हैं।
14. उम्मीदवारों की निरूपित, बैंक की किसी अन्य आवश्यकताओं के अनुसार उनके मेडिकल फिट घोषित किए जाने और बैंक के सेवा व आचरण संबंधी नियमों के अधीन होगी।
15. कोई भी मानदंड, चयन की पद्धति में परिवर्तन करने (रद्द करने / संशोधन करने / जोड़ने) का बैंक को अधिकार रहेगा।
16. सूचनाएं ई मेल और / अथवा एसएमएस द्वारा ही ऑनलाइन आवेदन फॉर्म में पंजीकृत ई मेल आईडी और मोबाइल नंबर पर बेजी जाएंगी।
मोबाइल नंबर, ई मेल के पते में परिवर्तन, तकनीकी शुद्धि या अन्य कारण से, जो बैंक के नियंत्रण में नहीं है, यदि सुचना / निर्देश उम्मीदवार को नहीं मिलते, तो उसके लिए बैंक जिम्मेदार नहीं होगा और उम्मीदवारों को सूचित किया जाता है कि अच्छतन जानकारी प्राप्त करने के लिए बैंक की अधिकृत वेबसाइट www.bankofbaroda.co.in पर ध्यान रखें।
17. किसी भी प्रश्न/ शंका के बारे में उम्मीदवार हमें firstname.lastname@example.org पर ई मेल भेज सकते हैं। फोन पर की गई पृष्ठतात्त्व को स्वीकार नहीं किया जाएगा।
F. कॉल लेटर्स
साधारणतः का केंद्र, स्थान का पता, तारीख और समय की सूचना संबंधित कॉल लेटर में दी जाएगी। पात्र उम्मीदवार अपना कॉल लेटर बैंक की वेबसाइट www.bankofbaroda.co.in से अपना विवरण – यानी आवेदन नंबर / जन्मतारीख डालकर डाउनलोड करें। कॉल लेटर / सूचनाके हेडआउट इस्पात की हार्ड कॉपी डाक / कूरियर द्वारा नहीं भेजी जाएगी।
सूचनाएं ई मेल और / अथवा एसएमएस द्वारा ही भरी के लिए ऑनलाइन आवेदन फार्म में पंजीकृत ई मेल आईडी और मोबाइल नंबर पर भेजी जाएगी। मोबाइल नंबर, ई मेल के पते में परिवर्तन, तकनीकी तुरंत या अन्य कारण से, जो बैंक के नियमण में नहीं है, यदि ई मेल द्वारा / एसएमएस द्वारा भेजे गए सूचना / निर्देश उम्मीदवार को देख से मिलते हैं / नहीं मिलते हैं, तो उसके लिए बैंक जिम्मेदार नहीं होगा अतः उम्मीदवारों को सूचित किया जाता है कि प्रक्रिया अवधि के दौरान बैंक की अधिकृत वेबसाइट www.bankofbaroda.co.in नियमित रूप से बेहों, साथ ही अपने पंजीकृत ई मेल खाते की भी नियमित जांच करें, ताकि वेबसाइट पर डाले गए / ई मेल पर भेजे गए चयन, अथवा जानकारी या ऐसी किसी सूचना को तुरंत प्राप्त कर सकें।
G. घोषणा:
इस प्रक्रिया से संबंधित सभी भारी घोषणाएं / चयन समय समय पर बैंक की अधिकृत वेबसाइट www.bankofbaroda.co.in पर ही प्रकाशित / उपलब्ध कराई जाएगी।
H. घोषणा-पत्र:
उम्मीदवार द्वारा गलत जानकारी दी जाने और / अथवा प्रक्रिया के उल्लंघन की घटना का किसी भी चयन / प्रक्रिया के किसी भी चरण में पता लगाने पर उम्मीदवार को चयन प्रक्रिया के लिए अपार समझ जाएगा और उसे उसके बाद भरी की निर्णीत भी प्रक्रिया में भाग लेने नहीं दिया जाएगा। यदि ऐसी बातें जारू चयन प्रक्रिया के दौरान उजागर नहीं हो जाती हैं परन्तु बाद में उसके बारे में पता चल जाए तो सूचितर्त प्रभाव से अपारता के परिणाम प्रभावी होंगे। सबस्टाफ की भरी की प्रक्रिया के विषय में बैंक द्वारा दिए गए स्टाटसक्रप / निर्णय अंतिम और बाध्यकारी होंगे।
संबंध
दिनांक : 26.11.2015
बैंक ऑफ बरौदा
Bank of Baroda invites online applications for filling up -219- vacancies for **Full Time Subordinate Staff (Peon)** and **Full Time Sweeper-cum-Peon** in Subordinate Cadre at Branches/ Offices in Greater Mumbai Zone i.e. branches/ Offices in the districts of Mumbai, Mumbai Suburban, Thane, Palghar and Raigad. The details of the vacancies are as under:
| SN | Name of Post | Total No. of Vacancies | Unreserved | ST | OBC | EX-SM (**) | VI(**) HI(**) OH(**) |
|----|-----------------------|------------------------|------------|----|-----|-------------|----------------------|
| 1 | Peon | 02 | 01 | - | 01 | - | - |
| 2 | Sweeper Cum Peon | 217 | 139 | 19 | 59 | 53** | 3 VI** 3 HI** 1 OH** |

** The -53- Ex-Servicemen, -3- Visually Impaired (VI), -3- Hearing Impaired (HI) and -1- Orthopaedically Handicapped (OH) candidates may be recruited against any of the above categories.
**Basic Pay Rs. 9560 – Rs. 18545 p.m. (Plus DA, HRA etc. as applicable from time to time)**
A. **ELIGIBILITY CRITERIA:**
I. **AGE (As on 01.11.2015)**
Minimum: 18 years. Maximum: 26 years (For Unreserved Category)
i.e. a candidate must have been born not earlier than 02.11.1989 and not later than 01.11.1997 (both dates inclusive).
**Relaxation of Upper Age Limit**
| SN. | Category | Age relaxation |
|-----|-----------------------------------------------|---------------------------------------|
| 1 | Scheduled Tribe | 5 years |
| 2 | Other Backward Classes (Non-Creamy Layer) | 3 years |
| 3 | Persons with Disabilities | 10 years |
| 4 | Ex-Servicemen/ Disabled Ex-Servicemen | actual period of service rendered in the defence forces + 3 years subject to a maximum age limit of 50 years |
**NOTE:**
(i) The relaxation in upper age limit to SC/ST/OBC candidates is allowed on cumulative basis with only one of the remaining categories for which age relaxation is permitted as mentioned above in Point No. I (3) and I (4).
(ii) The maximum age limit specified is applicable to General Category candidates.
(iii) Candidates seeking age relaxation will be required to submit necessary certificate(s) in original/ copies at the time of Interview and at any subsequent stage of the recruitment process as required by the Bank.
II. **EDUCATIONAL QUALIFICATION:**
Minimum: Matriculation / 10th Standard pass or equivalent.
The candidate must possess a valid mark-sheet/ certificate showing that he/ she is a matriculate/ 10th standard pass on the day he/ she registers, and must indicate the percentage of marks obtained in each exam/ course while registering online.
Knowledge of Marathi language (Speak, Read and Write) is essential.
Ex-Servicemen who do not possess the above civil examination qualifications should be matriculate Ex-Servicemen who have obtained the Army Special Certificate of Education or corresponding certificate in the Navy or Air Force after having completed not less than 15 years of service in the Armed Forces of the Union. Such certificates should be dated on or before 01.11.2015.
B. WRITTEN TEST/ INTERVIEW:
The Bank may, if necessary, conduct a written test depending upon the number of candidates, followed by interview. Otherwise, the mode of selection would be interview only.
List of Documents to be produced at the time of interview (as applicable)
The following documents in original and self attested photocopies in support of the candidate’s eligibility and identity are to be invariably submitted at the time of interview failing which the candidate may not be permitted to appear for the interview. Non submission of requisite documents by the candidate at the time of interview will debar his candidature from further participation in the recruitment process.
(i) Printout of the valid Interview Call Letter
(ii) Valid system generated printout of the online application form
(iii) Proof of Date of Birth (Birth Certificate issued by the Competent Municipal Authorities or SSLC/ Std. X Certificate with DOB)
(iv) Photo Identity Proof as indicated in Point C (below) of the advertisement
(v) Mark-sheets & certificates for all educational qualifications etc.
(vi) Caste Certificate issued by the competent authority in the prescribed format as stipulated by Government of India in the case of SC / ST / OBC category candidates.
In case of candidates belonging to OBC category, certificate should specifically contain a clause that the candidate does not belong to creamy layer section excluded from the benefits of reservation for Other Backward Classes in Civil post & services under Government of India. OBC caste certificate containing the Non-creamy layer clause should be valid as on the date of interview, if called for (issued within one year prior to the date of interview, if called for). Caste Name mentioned in certificate should tally letter by letter with Central Government list / notification.
Candidates belonging to OBC category but coming under creamy layer and/or if their caste does not find place in the Central List are not entitled to OBC reservation. They should indicate their category as General in the online application form.
(vii) Disability certificate in prescribed format issued by the District Medical Board in case of Persons With Disability category
(viii) Candidates serving in Government / quasi govt offices/ Public Sector Undertakings (including Nationalised Banks and Financial Institutions) are required to produce a “No Objection Certificate” from their employer at the time of interview, in the absence of which their candidature will not be considered and travelling expenses, if any, otherwise admissible, will not be paid. Production of such conditional NOCs at the time of interview will not be considered and such candidates will not be permitted to participate in interview/will not be considered for further selection process.
(ix) Experience Certificate, if any.
(x) Any other relevant documents in support of eligibility.
Note:- Candidates will not be allowed to appear for the interview if he/she fails to produce the relevant eligibility documents as mentioned above.
No application/documents shall be directly sent to the Bank by candidates before or after the interview.
The Competent Authority for the issue of the certificate to SC / ST / OBC / PERSONS WITH DISABILITIES is as under (as notified by GOI from time to time):
For Scheduled Castes / Scheduled Tribes / Other Backward Classes: (i) District Magistrate / Additional District Magistrate / Collector / Deputy Commissioner / Additional Deputy Commissioner / Deputy Collector / First Class Stipendiary Magistrate / City Magistrate / Sub-Divisional Magistrate (not below the rank of First Class Stipendiary Magistrate) / Taluk Magistrate / Executive Magistrate / Extra Assistant Commissioner (ii) Chief Presidency Magistrate/ Additional Chief Presidency Magistrate/ Presidency Magistrate (iii) Revenue Officer not below the rank of Tehsildar (iv) Sub-divisional officer of the Area where the candidate and or his family normally resides.
For Persons with Disabilities: Authorised certifying authority will be the Medical Board at the District level consisting of Chief Medical Officer, Sub-Divisional Medical Officer in the District and an Orthopaedic / Ophthalmic / ENT Surgeon.
Prescribed Formats of SC, ST, OBC, PWD certificates, Proforma A as applicable for Ex-Servicemen can be downloaded from the Bank’s website www.bankofbaroda.co.in. Candidates belonging to these categories are required to produce the certificates strictly in these formats only at the time of interview.
C. IDENTITY VERIFICATION:
At the time of written test/ interview, the call letter along with a photocopy of the candidate’s photo identity (bearing exactly the same name as it appears on the call letter) such as PAN Card/ Passport/ Driving Licence/ Voter’s Card/ Bank Passbook with photograph/ Photo identity proof issued by a Gazetted Officer/ People’s Representative along with a photograph / Identity Card issued by a recognised college/ university/ Aadhar card with a photograph/ Employee ID should be submitted to the invigilator for verification.
The candidate’s identity will be verified with respect to his/her details on the call letter, in the Attendance List and requisite documents submitted. If identity of the candidate is in doubt the candidate may not be allowed to appear for the Examination/ interview.
Ration Card and E-Aadhar card will not be accepted as valid ID proof for this project. Candidates who have changed their name will be allowed only if they produce the original Gazette notification/ their original marriage certificate/ affidavit in original.
Note: Candidates have to produce, in original, the same photo identity proof bearing the name as it appears on the online application form/ call letter and submit photocopy of the photo identity proof along with the Interview Call Letter while appearing for the interview, without which they will not be allowed to appear for the interview.
D. HOW TO APPLY
Candidates can apply online only from 26.11.2015 to 15.12.2015 and no other mode of application will be accepted.
Pre-Requisites for Applying Online
Before applying online, candidates should-
(i) Scan their photograph and signature, ensuring that the size of neither the photograph file nor the signature file exceeds 200 KB.
(ii) Signature in CAPITAL LETTERS will NOT be accepted.
(iii) The Bank may send intimations for downloading call letters for the examination/ interview etc. by email, apart from announcing them on the website and sending SMS. Candidates are therefore advised to have a valid email ID. Providing an email ID is not mandatory; however, for the convenience of candidates, it is advisable to furnish one for better communication.
Procedure for applying Online
(1) Click on the link “CLICK HERE TO APPLY ONLINE” to open the Online Application Form.
(2) Candidates are advised to carefully fill in the online application themselves, as no change in any of the data filled in the online application will be possible/ entertained. Prior to submission of the online application, candidates are advised to use the “Verify” facility to check the details in the online application form and modify them if required. No change is permitted after clicking on the “SUBMIT” button. Visually impaired candidates are responsible for carefully verifying, or getting verified, the details filled in the online application form and for ensuring that these are correct prior to submission, as no change is possible after submission.
Please note that all the particulars mentioned in the online application including Name of the Candidate, Category, Date of Birth, Address, Mobile Number, Email ID, Centre of Examination, registration of preferences for Participating Organisations etc. will be considered as final and no change/modifications will be allowed after submission of the online application form. Candidates are hence advised to fill in the online application form with the utmost care as no correspondence regarding change of details will be entertained. Bank will not be responsible for any consequences arising out of furnishing of incorrect and incomplete details in the application or omission to provide the required details in the application form.
An online application which is incomplete in any respect, such as one without a proper passport-size photograph and signature uploaded, will not be considered valid. Candidates are advised in their own interest to apply online well before the closing date and not to wait until the last date, so as to avoid the possibility of disconnection/ inability/ failure to log on to the Bank’s website on account of heavy load on the internet/ website jam.
Bank does not assume any responsibility for the candidates not being able to submit their applications within the last date on account of the aforesaid reasons or for any other reason beyond the control of the Bank.
Please note that the above procedure is the only valid procedure for applying. No other mode of application or incomplete steps would be accepted and such applications would be rejected.
Any information submitted by an applicant in his/ her application shall be binding on the candidate personally and he/she shall be liable for prosecution/ civil consequences in case the information/ details furnished by him/ her are found to be false at a later stage.
E. GENERAL INSTRUCTIONS
1. Candidates will have to invariably produce and submit the requisite documents such as valid call letter, a photocopy of photo-identity proof bearing the same name as it appears on the online submitted application form etc. at the time of interview.
2. Before applying, the candidates should ensure that they fulfil the eligibility and other norms mentioned in this advertisement. Candidates are therefore advised to carefully read this advertisement and follow all the instructions given for submitting online application.
3. A candidate’s shortlisting for the written test/ interview/ both and subsequent processes is strictly provisional. The mere fact that a call letter has been issued to the candidate does not imply that his/ her candidature has been finally cleared by the Bank. The Bank is free to reject any application and cancel a candidate’s candidature at any stage of the process if it is detected that the candidate does not fulfil the eligibility norms and/or has furnished any incorrect/ false information/ certificates/ documents or has suppressed any material fact(s). If any such shortcoming is detected after appointment in the Bank, his/ her services are liable to be summarily terminated.
4. Decision of the Bank in all matters regarding eligibility of the candidate, the stages at which such scrutiny of eligibility is to be undertaken, qualifications and other eligibility norms, the documents to be produced for the purpose of the conduct of interview, verification etc. and any other matter relating to the recruitment process will be final and binding on the candidate. No correspondence or personal enquiries shall be entertained by the Bank in this behalf.
5. Not more than one application should be submitted by any candidate. In case of multiple Applications only the latest valid (completed) application will be retained.
6. Online applications once registered will not be allowed to be withdrawn.
7. Any dispute arising out of this advertisement including the recruitment process shall be subject to the sole jurisdiction of the Courts situated at Mumbai.
8. Canvassing in any form will be a disqualification.
9. Any request for change of address, details mentioned in the online application form will not be entertained.
10. In case any dispute arises on account of interpretation of clauses in any version of this advertisement other than English, the English version available on Bank’s website shall prevail.
11. A candidate should ensure that the signatures appended by him/her in all the places viz. in his/her call letter, attendance sheet etc., and in all correspondence with the Bank in future should be identical and there should be no variation of any kind.
12. A recent, recognizable photograph should be uploaded by the candidate in the online application form and the candidate should ensure that copies of the same are retained for use at various stages of the process. Candidates are also advised not to change their appearance till the process is completed. Failure to produce the same photograph at various stages of the process or doubt about identity at any stage could lead to disqualification.
13. Candidates will have to appear for the interview at their own expense. However, eligible outstation SC/ST/Persons with Disabilities category candidates called for interview will be paid II class to & fro railway/ bus fare or actual expenses incurred, whichever is less, by shortest route on production of proof of travel (rail/ bus ticket etc.). The above concession will not be admissible to SC/ST/Persons with Disabilities category candidates who are already in service in Central / State Government, Corporations, Public Undertakings / Local Government, Institutions and Panchayats etc.
14. Appointment of candidates is subject to his/her being declared medically fit, as per any other requirements of the Bank and subject to service and conduct rules of the Bank.
15. Bank reserves the right to change (cancel/ modify/ add) any of the criteria, method of selection.
16. Intimations will be sent by email and/or sms only to the email ID and mobile number registered in the online application form.
Bank shall not be responsible if the information/intimations do not reach candidates in case of change in the mobile number, email address, technical fault or otherwise, beyond the control of the Bank and candidates are advised to keep a close watch on the authorised Bank’s website www.bankofbaroda.co.in for latest updates.
17. With regard to any query the candidate may email us at email@example.com. No telephone query will be entertained.
F. CALL LETTERS
The Centre, venue address, date and time for examination and/or interview shall be intimated in the respective Call Letter.
An eligible candidate should download his/her call letter from the Bank’s website www.bankofbaroda.co.in by entering his/her details i.e. Application Number and Password/Date of Birth. No hard copy of the call letter/ Information Handout etc. will be sent by post/courier.
Intimations will be sent by email and/or sms to the email ID and mobile number registered in the online application form for the recruitment. Bank will not take responsibility for late receipt/non-receipt of any communication e-mailed/sent via sms to the candidate due to change in the mobile number, email address, technical fault or otherwise beyond the control of the Bank. Candidates are hence advised to regularly keep in touch with the authorised Bank website www.bankofbaroda.co.in for details, updates and any information which may be posted for further guidance as well as to check their registered e-mail account from time to time during the recruitment process.
G. **ANNOUNCEMENTS:**
All further announcements/ details pertaining to this process will only be published/ provided on Bank’s authorised website [www.bankofbaroda.co.in](http://www.bankofbaroda.co.in) from time to time.
H. **DISCLAIMER:**
Instances of a candidate providing incorrect information and/or violating the process, detected at any stage of the selection process, will lead to disqualification of the candidate from the selection process, and he/she will not be allowed to appear in any recruitment process of the Bank in the future. If such instances go undetected during the current selection process but are detected subsequently, such disqualification will take effect retrospectively. Clarifications/ decisions given, or to be given, by the Bank regarding the process for recruitment of Sub-staff shall be final and binding.
Mumbai
Date : 26.11.2015
Bank of Baroda
A Guide to Conic Optimisation and its Applications
Adam N. Letchford* Andrew J. Parkes†
To Appear in RAIRO-OR
Abstract
Most OR academics and practitioners are familiar with linear programming (LP) and its applications. Many are however unaware of conic optimisation, which is a powerful generalisation of LP, with a prodigious array of important real-life applications. In this invited paper, we give a gentle introduction to conic optimisation, followed by a survey of applications in OR and related areas. Along the way, we try to help the reader develop insight into the strengths and limitations of conic optimisation as a tool for solving real-life problems.
Key Words: conic optimisation, second-order cone programming, semidefinite programming.
1 Introduction
Most OR students, academics and practitioners are familiar with linear programming (LP). For many problems arising in industry and elsewhere, LP is an attractive option, due to its simplicity, the ease of doing sensitivity analysis, the existence of effective algorithms, and, perhaps most importantly, the availability of good software. With modern software packages such as CPLEX, Gurobi or Xpress, it is now possible to routinely solve LP instances with thousands of variables and/or constraints to proven optimality. These software packages can also cope with integer-constrained variables.
Of course, in real-world applications, one often encounters problems that have non-linear aspects. Sometimes, one can construct good linear approximations to such problems, perhaps with the help of binary variables, and thereby make them amenable to solution with the above-mentioned packages. In other cases, however, the nonlinearity is significant and must be faced head-on. For such cases, various non-linear programming (NLP) algorithms have been developed (see, e.g., [13, 14]).
*Department of Management Science, Lancaster University, United Kingdom.
E-mail: firstname.lastname@example.org
†School of Computer Science, University of Nottingham, United Kingdom.
E-mail: email@example.com
The purpose of this paper is to introduce a very important special case of NLP called *conic optimisation* (CO). Despite having a very special structure, CO is remarkably powerful. It has a prodigious array of important real-life applications, not only in OR, but also in related areas, such as statistics, computer science, engineering and finance. Moreover, CO inherits some of the nice features enjoyed by LP, such as the existence of efficient (polynomial-time) algorithms, a well-developed duality theory, and the availability of good software.
Apart from LP, the most important special cases of CO are *second order cone programming* (SOCP) and *semidefinite programming* (SDP). There already exist several good surveys on both SOCP (e.g., [2, 53, 62]) and SDP (e.g., [43, 93, 94, 96]). There also exists an excellent monograph on CO in general (Nesterov & Nemirovsky [73]). It is fair to say, however, that most of these works assume that the reader is a nonlinear programming expert. This can make them rather inaccessible to the OR generalist.
In this work, we assume only minimal knowledge of LP, geometry and linear algebra. Moreover, we place an emphasis on applications in OR and related areas, covering, e.g., inventory control, facility location, portfolio optimisation, problems involving binary variables, and various methods for optimising under uncertainty, such as mean-variance, chance-constrained and robust optimisation. Along the way, we try to help the reader develop insight into the strengths and limitations of CO as a tool for modelling and solving real-life problems.
The paper is structured as follows. In Sect. 2, we define CO and present some basic theory. In Sect. 3, we cover algorithms and complexity. In Sect. 4, we review the main applications of SOCP in OR and related fields. In Sect. 5, we do the same for SDP. In Sect. 6, we list some of the available software packages, and in Sect. 7, we make some concluding remarks.
Throughout the paper, $\mathbb{R}_+$, $\mathbb{R}_+^n$ and $\mathcal{S}^n$ denote the non-negative reals, the real vectors with $n$ non-negative components, and the real symmetric matrices of order $n$, respectively. Moreover, given a vector $x \in \mathbb{R}^n$, we let $||x||_1$ and $||x||_2$ denote $\sum_{i=1}^{n} |x_i|$ and $\sqrt{\sum_{i=1}^{n} x_i^2}$, respectively.
## 2 Definitions and Basic Theory
In this section, we define CO formally and present the minimal amount of theory and notation needed to understand how it works. We start by defining *cones* in Subsect. 2.1. We define CO itself in Subsect. 2.2. We then present elementary duality theory in Subsect. 2.3.
2.1 Cones
If a non-mathematician hears the word *cone*, the thing most likely to come to mind is either an ice-cream cone or a traffic cone. An idealised version of an ice-cream cone is displayed in Fig. 1. The idealised version is supposed to extend upward forever. One can check that it corresponds to the following set:
\[
\left\{ x \in \mathbb{R}^3 : x_3 \geq \sqrt{x_1^2 + x_2^2} \right\}.
\]
The ancient Greek astronomer Apollonius of Perga (c. 262 BC – c. 190 BC) discovered that, if one “slices” the idealised ice-cream cone at various angles, one can obtain interesting convex shapes; see Fig. 2. This already gives a hint as to why cones could be of relevance to optimisation: whereas LP forces us to optimise over polyhedra, we can optimise over various non-polyhedral convex sets by “slicing” cones in various ways. This enables us to model and solve a variety of important nonlinear problems, as we will see in Sections 4 and 5.
Mathematically speaking, a cone is a set of points (in some underlying space, such as Euclidean space) with the following property: if a point $x$ belongs to the cone, then so does the point $\lambda x$ for any $\lambda \in \mathbb{R}_+$. Note that cones can be extremely complicated. Indeed, consider an *arbitrary* set $S \subset \mathbb{R}^n$. The set
\[
C(S) = \left\{ (x, \lambda) \in \mathbb{R}^n \times \mathbb{R}_+ : x = \lambda x' \text{ for some } x' \in S \right\}
\]
is a cone, and it is “just as complicated” as $S$ itself. Indeed, if we intersect $C(S)$ with the hyperplane defined by the equation $\lambda = 1$, the resulting “slice” is $S$ (or, more precisely, an embedding of $S$ into a space of higher dimension).
If one wishes to say anything useful about cones, then, one must restrict attention to cones with a special structure. In the optimisation context, one
is typically interested in so-called *proper* cones. A cone $C$ is called proper if it is:
- convex: if $x^1, x^2 \in C$, then $\mu x^1 + (1 - \mu) x^2 \in C$ for all $\mu \in [0, 1]$;
- closed: $C$ contains all of its limit points;
- full-dimensional: there is no hyperplane containing $C$;
- pointed: if $x \in C$, then $-x \notin C$.
One can check that the idealised ice-cream cone is proper. The following five proper cones arise frequently in various contexts:
- the *non-negative* cone, which is simply $\mathbb{R}_+$;
- the *second-order cone* of order $n$, which is
\[
\{(x, t) \in \mathbb{R}^n \times \mathbb{R}_+ : t \geq ||x||_2\};
\]
- the *positive semidefinite* (psd) cone of order $n$, which is
\[
\left\{X \in \mathcal{S}^n : v^T X v \geq 0 \ (\forall v \in \mathbb{R}^n)\right\}; \tag{1}
\]
- the *copositive* cone of order $n$ (first defined by Motzkin [71]), which is
\[
\left\{X \in \mathcal{S}^n : v^T X v \geq 0 \ (\forall v \in \mathbb{R}^n_+)\right\};
\]
- the *completely positive* cone of order $n$ (first defined by Hall [39]), which is
\[
\left\{X \in \mathcal{S}^n : X = A^T A \text{ for some real non-negative matrix } A\right\}.
\]
(There are several other cones of interest to optimisation, such as the correlation cone [26], the $p$th-order cone [32], the exponential cone [31] and the relative entropy cone [21], but we do not give details, for the sake of brevity.)
The second-order cone is a natural generalisation of the ice-cream cone to higher dimensions. It is sometimes called the *Lorentz* cone, after the Dutch physicist Hendrik Lorentz (1853–1928). (Indeed, for those familiar with special relativity, the second-order cone with $n = 3$ is the forward light cone of the origin, where $x$ represents space and $t$ represents time.)
The psd cone of order $n$ is usually denoted by $\mathcal{S}_+^n$. It can be defined in many different ways. It is known (see, e.g., Horn [47]) that a matrix $X \in \mathcal{S}^n$ belongs to $\mathcal{S}_+^n$ if and only if any of the following (equivalent) conditions hold:
- the quadratic function $f(v) = v^T X v$ is non-negative for all $v \in \mathbb{R}^n$ (this is just definition (1));
- the same function $f(v)$ is convex;
- the region $\{v \in \mathbb{R}^n : v^T X v \leq 1\}$ is an ellipsoid;
- all eigenvalues of $X$ are non-negative;
- all principal submatrices of $X$ have non-negative determinants;
- there exists a lower-triangular matrix $A \in \mathbb{R}^{n \times n}$ such that $X = A^T A$ (Cholesky factorisation);
- there exist vectors $u^1, \ldots u^n \in \mathbb{R}^n$ such that $X_{ij} = u^i \cdot u^j$ for all $i, j$ (Gram representation);
- $X$ can be written as a non-negative linear combination of symmetric rank-1 real matrices of the form $vv^T$.
From these definitions, one can see that, for a given value of $n$, the completely positive cone is contained in $\mathcal{S}_+^n$, which in turn is contained in the copositive cone.
To make the above ideas more concrete, we give some examples. The matrix $\begin{pmatrix} 5 & 1 \\ 1 & 1 \end{pmatrix}$ is completely positive, since it has the factorisation
$$\begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}.$$
The matrix $\begin{pmatrix} 5 & -1 \\ -1 & 1 \end{pmatrix}$ is psd, since it has the factorisation:
$$\begin{pmatrix} 2 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix}.$$
It is not however completely positive, since it contains negative entries. Finally, the matrix \( X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \) is copositive, since \( v^T X v = 2v_1 v_2 \geq 0 \) for all \( v \in \mathbb{R}_+^2 \). It is however not psd, since
\[
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix},
\]
and therefore it has \(-1\) as an eigenvalue.
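These claims are easy to check numerically. The following sketch is ours, not part of the original text; it assumes NumPy is installed and verifies the eigenvalue criterion for the three matrices above.

```python
# A minimal numerical check of the three example matrices (assumes NumPy).
import numpy as np

A = np.array([[5.0, 1.0], [1.0, 1.0]])    # completely positive (hence psd)
B = np.array([[5.0, -1.0], [-1.0, 1.0]])  # psd, but not completely positive
C = np.array([[0.0, 1.0], [1.0, 0.0]])    # copositive, but not psd

for name, M in [("A", A), ("B", B), ("C", C)]:
    # eigvalsh is intended for symmetric matrices; psd <=> all eigenvalues >= 0
    print(name, np.linalg.eigvalsh(M))
# A and B have only non-negative eigenvalues; C has -1 as an eigenvalue.
```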
### 2.2 Conic optimisation
A *conic programme* or *conic optimisation problem* is a problem that can be modelled as the problem of optimising a linear function over the intersection of a hyperplane and a proper cone. That is, a CO problem can be written in the form
\[
\sup \left\{ c^T x : Ax = b, \ x \in C \right\}, \tag{2}
\]
where \( c \in \mathbb{Z}^n, \ b \in \mathbb{Z}^m, \ A \in \mathbb{Z}^{m \times n} \) and \( C \subset \mathbb{R}^n \) is a proper cone. (We have to use supremum rather than maximum here, for technical reasons. See the last example in this subsection.)
Actually, as it stands, the definition (2) is too general, since it includes \( NP \)-hard problems as special cases (see the next section). Nesterov & Nemirovsky [73] showed that particularly effective solution methods could be devised for CO when the proper cones in question are *symmetric*. For brevity, we do not define symmetric cones here. Instead, we just remark that most symmetric cones of interest can be constructed using three basic building blocks; namely, the non-negative, second order and psd cones mentioned in the previous subsection. More specifically:
- The product of \( n \) non-negative cones is \( \mathbb{R}_+^n \). This is a symmetric cone, and it is the cone used in LP.
- The product of a finite number of second-order cones is symmetric. Cones of this type are used in *second-order cone programming* or SOCP (sometimes also called *conic quadratic programming* or CQP).
- The product of a finite number of psd cones is also symmetric. Cones of this type are used in *semidefinite programming* (SDP).
It might seem at first that restricting attention to products of cones is a big limitation, since it implies that each cone must involve a different set of variables. However, this apparent limitation can be overcome by “splitting” variables into two or more copies. For example, if we wish to impose the second-order conic constraints \( t \geq ||(x, y)||_2 \) and \( s \geq ||(x, z)||_2 \) simultaneously, where the vector \( x \) is involved in both constraints, we replace the vector $x$ with two vectors, say $x'$ and $x''$, add the constraints $t \geq ||(x', y)||_2$ and $s \geq ||(x'', z)||_2$, and add the (linear) constraint $x' = x''$.
It turns out that SDP generalises SOCP, which in turn generalises LP. Indeed, the second-order conic constraint $t \geq ||x||_2$ is equivalent to
$$\begin{pmatrix} t & x^T \\ x & tI_n \end{pmatrix} \in S^{n+1}_+,$$
where $I_n$ is the identity matrix of order $n$; and the non-negativity constraint $t \geq 0$ is implied by $t \geq ||x||_2$ together with $x_i = 0$ for $i = 1, \ldots n$. (Geometrically speaking, the non-negative cone is a “slice” of the second-order cone, which is in turn a “slice” of the psd cone.)
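One way to verify the first claim (a standard Schur-complement argument, added here for completeness) is as follows: for $t > 0$,

$$\begin{pmatrix} t & x^T \\ x & tI_n \end{pmatrix} \in \mathcal{S}^{n+1}_+ \iff tI_n - \tfrac{1}{t}\, xx^T \in \mathcal{S}^n_+ \iff t^2 \geq \|x\|_2^2 \iff t \geq \|x\|_2,$$

since the smallest eigenvalue of $tI_n - xx^T/t$ is $t - \|x\|_2^2/t$. (For $t = 0$, both formulations force $x = 0$.)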
In the case of LP and SOCP, it is usual to view the variables as being arranged in a vector, which is usually denoted by $x$. In the case of SDP, however, it is usual to view them as being arranged in a square symmetric matrix, which is usually denoted by $X$. Accordingly, people sometimes refer to *vector variables* and *matrix variables*. When $X$ is a matrix variable, the cone $C$ is to be thought of as a subset of $S^n$ rather than $\mathbb{R}^n$.
To help the reader, we now give a couple of examples.
**Example 1:** Consider the following SOCP:
$$\sup \left\{ x_1 : x_1 - x_2 = 0, \ x_3 + x_4 = 2, \ x_3 \geq \sqrt{x_1^2 + x_2^2}, \ x_4 \geq 0 \right\}.$$
Since $x_4 \geq 0$, the largest value that $x_3$ can take is 2, which means that $x_1^2 + x_2^2$ cannot exceed $x_3^2 = 4$. Since $x_1 = x_2$, it follows that $x_1 \leq \sqrt{2}$. The optimal solution is therefore $x_1^* = x_2^* = \sqrt{2}$, $x_3^* = 2$ and $x_4^* = 0$, with a profit of $\sqrt{2}$. □
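As an illustration (ours, not the authors'), Example 1 can be solved numerically with an off-the-shelf conic modelling package. The sketch below uses the Python package CVXPY, assuming it and a bundled conic solver are installed; it should return a value close to $\sqrt{2} \approx 1.41421$.

```python
# A minimal sketch of Example 1 in CVXPY (assumed installed with a conic solver).
import cvxpy as cp

x = cp.Variable(4)
constraints = [
    x[0] - x[1] == 0,                             # x1 - x2 = 0
    x[2] + x[3] == 2,                             # x3 + x4 = 2
    cp.norm(cp.hstack([x[0], x[1]]), 2) <= x[2],  # x3 >= ||(x1, x2)||_2
    x[3] >= 0,
]
prob = cp.Problem(cp.Maximize(x[0]), constraints)
prob.solve()
print(prob.value)  # approximately 1.41421
```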
**Example 2:** Consider the following SDP:
$$\sup \left\{ X_{12} + X_{21} : X_{11} = 1, \ X_{22} = 3, \ X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} \in S^2_+ \right\}.$$
Note that the determinant of $X$ is $X_{11}X_{22} - X_{12}X_{21}$, and this must be non-negative. Since $X_{11}X_{22}$ must equal 3, and $X_{12}$ must equal $X_{21}$, the profit is maximised by setting $X_{12}$ and $X_{21}$ to $\sqrt{3}$. This yields a profit of $2\sqrt{3}$. □
The above examples show that an SOCP or SDP can have a unique optimal solution in which one or more variables take *irrational* values. The following example shows that it is also possible for the supremum not to be attainable in an SOCP.
**Example 3:** Consider the following SOCP:
$$\sup \left\{ x_1 - x_3 : x_2 = 1, \ x_3 \geq \sqrt{x_1^2 + x_2^2} \right\}.$$
This is equivalent to:
\[
\sup \left\{ x_1 - \sqrt{x_1^2 + 1} : x_1 \in \mathbb{R} \right\}.
\]
We can bring the profit arbitrarily close to 0 (by making \(x_1\) arbitrarily large), but we cannot actually reach 0. □
In a similar way, it can be shown that the supremum may not be attainable in an SDP.
### 2.3 Duality
There is an elegant duality theory for CO, which can be viewed as a generalisation of LP duality. First, assume that we are working with a vector variable \(x \in \mathbb{R}^n\). Associated with any cone \(C \subset \mathbb{R}^n\) we can define the *dual* (a.k.a. *polar*) cone
\[
C^* := \{ x \in \mathbb{R}^n : y^T x \geq 0 \ (\forall y \in C) \}.
\]
If we are dealing with a matrix variable \(X \in \mathcal{S}^n\) instead, and a cone \(C \subset \mathcal{S}^n\), then the definition of the polar cone must be modified slightly to:
\[
C^* := \{ X \in \mathcal{S}^n : X \bullet Y \geq 0 \ (\forall Y \in C) \},
\]
where \(X \bullet Y\) denotes the matrix inner product \(\sum_{i=1}^{n} \sum_{j=1}^{n} X_{ij} Y_{ij}\), which is also equal to \(\text{tr}(Y^T X)\).
It is easy to show that (a) the dual of a proper cone is proper, (b) if a cone is proper, then it is the dual of its dual, (c) the dual of the completely positive cone is the copositive cone (and vice-versa), and (d) the non-negative, second-order and psd cones are *self-dual*. (This is a property shared by all symmetric cones.)
The following fact can be shown, e.g., by an application of Lagrangian relaxation:
**Theorem 1 (Weak Duality for CO)** For any proper cone \(C \subset \mathbb{R}^n\), and any \(c \in \mathbb{R}^n\), \(b \in \mathbb{R}^m\) and \(A \in \mathbb{R}^{m \times n}\), we have:
\[
\sup \left\{ c^T x : Ax = b, \ x \in C \right\} \leq \inf \left\{ b^T y : A^T y - c \in C^*, \ y \in \mathbb{R}^m \right\}.
\]
Specialised to the case of LP, Theorem 1 reduces to the well-known fact that
\[
\max \left\{ c^T x : Ax = b, \ x \in \mathbb{R}_+^n \right\} \leq \min \left\{ b^T y : A^T y \geq c, \ y \in \mathbb{R}^m \right\}.
\]
In that case, we have strong duality, i.e., the inequality can be changed to an equation. For CO in general, however, strong duality is not guaranteed. Fortunately, it is guaranteed to hold under certain conditions that commonly
occur in practice. For example, it holds if the *Slater condition* is satisfied, i.e., if there is a feasible $x$ that lies in the interior of $C$ (see, e.g., [14, 73]).
Even when duality is strong, however, some care must be taken when interpreting the dual. For example, although the optimal dual solution $y^*$ provides meaningful dual prices, the components of the vector $A^T y^* - c$ are not always meaningful “reduced costs” (see, e.g., Helmberg [42]). Moreover, concepts such as degeneracy, dual degeneracy and complementary slackness must be handled carefully (e.g., Alizadeh et al. [3]).
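In software, dual information for conic programmes is typically exposed much as for LP. As an illustration (ours; it assumes CVXPY with an SDP-capable solver is installed, and sign conventions for dual values vary between front-ends and solvers), the dual values attached to the equality constraints of Example 2 can be recovered as follows:

```python
# A minimal sketch: inspect dual values for the SDP of Example 2
# (assumes CVXPY with an SDP-capable solver such as SCS).
import cvxpy as cp

X = cp.Variable((2, 2), symmetric=True)
c1 = X[0, 0] == 1
c2 = X[1, 1] == 3
prob = cp.Problem(cp.Maximize(X[0, 1] + X[1, 0]), [c1, c2, X >> 0])
prob.solve()
# Each constraint object carries a dual value after solving; these play the
# role of the "prices" y in Theorem 1 (up to the front-end's sign convention).
print(c1.dual_value, c2.dual_value)
```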
## 3 Algorithms and Complexity
Now we turn our attention to algorithms for solving CO problems. To begin, it is helpful to recall the following facts about LP:
1. The *simplex method* (Dantzig [22, 23]) is typically very fast in practice, but takes exponential time in the worst case.
2. The *ellipsoid method* (Khachiyan [50]) runs in polynomial time in the worst case, but is too slow to be of practical use.
3. There exist various *interior-point methods* (IPMs) that run in polynomial time in the worst case, and tend to be reasonably fast in practice (see, e.g., Gondzio [36]).
The situation for CO is a bit more complicated:
1. There is no known effective analogue of the simplex method for general CO, nor indeed for SOCP or SDP in particular.
2. The ellipsoid method can solve CO in polynomial time (Grötschel et al. [38]), but only to *fixed precision*, and only when two technical conditions are met: (i) it must be possible to test in polynomial time whether a given rational point lies in the given cone, and (ii) the feasible region must be explicitly bounded (e.g., by the addition of a constraint which forces the solution to lie inside a ball of known centre and radius). Unfortunately, as in the case of LP, this result is of little use in practice.
3. There exist IPMs that can solve SOCPs and SDPs in polynomial time, but again, only to fixed precision and only when the feasible region is bounded (Nesterov & Nemirovsky [73]). The best of these IPMs are now fast enough to be of practical use.
To see why COs can only be solved to fixed precision, recall from Subsect. 2.2 that even SOCPs and SDPs can have irrational solutions. Such solutions cannot be represented exactly with a finite number of bits. (For a discussion
on how one might represent solutions exactly as *algebraic numbers*, see Nie *et al.* [75].) One consequence of this limited precision is that even testing feasibility of an SOCP or SDP is a non-trivial problem, if one wants a precise answer. In fact, it is not known whether it can be done in polynomial time; see Ramana [84] for a discussion.
The other technical condition mentioned above is that the feasible region must be explicitly bounded. To see why, consider the following constraints:
\[ x_3 - x_2 = 1, \quad x_4 - 2x_2 = 1, \quad x_3 \geq \sqrt{x_1^2 + x_2^2}. \]
Squaring the conic constraint and substituting \( x_3 = x_2 + 1 \) gives \( 2x_2 + 1 \geq x_1^2 \), i.e., \( x_4 \geq x_1^2 \). Chaining such constraints together, we can get \( x_7 \geq x_4^2, x_{10} \geq x_7^2, \) and so on. At the end, we get \( x_{3p+1} \geq x_1^{(2^p)} \). If we add the constraint \( x_1 = 2 \), we find that \( x_{3p+1} \geq 2^{(2^p)} \). So there exist SOCPs such that it takes an exponential number of bits to represent any feasible solution. An analogous example for SDPs is given in Alizadeh [1]. Requiring the feasible region to lie inside a ball eliminates such “pathological” SOCP and SDP instances.
As for the question of testing membership of a cone, testing whether a given \( x^* \in \mathbb{Q} \) lies in the non-negative cone is trivial, and so is testing whether a given rational point \((x^*, t^*)\) lies in the second-order cone. Less obviously, one can also check whether a given rational matrix \( X^* \in S^n \) belongs to \( S^n_+ \) in polynomial time, via a modified form of Gaussian elimination [38]. (In practice, one can just compute the minimum eigenvalue of \( X^* \), to some desired precision, and check whether it is non-negative.)
On the other hand, testing whether a rational matrix is completely positive is \( NP \)-hard [72]. (By duality, testing copositivity is co-\( NP \)-hard.) In fact, Burer [16] showed that any mixed 0-1 linear or quadratic program can be transformed into a completely positive program, i.e., a conic optimisation problem over the completely positive cone. As a result, completely positive programming is \( NP \)-hard in the strong sense. (By duality, copositive programming is co-\( NP \)-hard in the strong sense.) For a survey of results on completely positive and copositive programming, see Burer [17].
At the end of Subsect. 2.3, we mentioned that dual information for SOCP or SDP, such as reduced costs and dual prices, must be interpreted with care. The lack of a simplex method leads to some other issues. For example:
- It can be hard to exploit sparsity in the constraint matrix (e.g., Benson *et al.* [10]).
- It is not trivial to re-optimise SOCPs and SDPs efficiently after, e.g., adding a constraint or variable (e.g., [69, 92]). In other words, it is difficult to do ‘warm starts’.
Work on these issues is ongoing within the CO community.
We now wish to mention two other remarkable papers. Ben-Tal & Nemirovski [12] proved that one can “simulate” an SOCP with $n$ variables to arbitrary precision $\epsilon$ using an LP with $O(n \log(1/\epsilon))$ variables. This means that one can solve an SOCP approximately by solving a single LP of reasonable size. Conversely, Braun et al. [15] proved that one cannot simulate SDPs by LPs or SOCPs in an analogous way, thereby proving that SDP is, in some sense, fundamentally more powerful than LP or SOCP.
Finally, we mention that there exist several other algorithms for solving SDPs, such as the spectral bundle method (Helmberg & Rendl [45]), the boundary point method (Povh et al. [82]), augmented Lagrangian methods [67, 98], cutting-plane methods (Krishnan & Mitchell [52]), and a method that works directly with the Cholesky factorisation (Burer & Monteiro [19]).
4 Applications of SOCP
In this section, we give some examples of the kinds of problems that can be tackled using SOCP. For additional examples, see, e.g., [2, 53, 62, 73].
### 4.1 Problems involving convex quadratic functions
SOCP includes as a special case all convex nonlinear programs involving quadratic functions. For brevity, we give just two examples, one from finance and one from statistics.
In the famous portfolio selection model of Markowitz [68], there are $n$ stocks, for which we have a vector $r \in \mathbb{R}^n$ of expected returns and a psd matrix $Q \in \mathbb{R}^{n \times n}$ of covariances. We have a vector $x \in \mathbb{R}_+^n$ of decision variables, where $x_i$ represents the proportion invested in stock $i$. The expected return of a portfolio $x$ is then $r^T x$ and the variance (a measure of risk) is $x^T Q x$. Minimising risk subject to a lower bound $L$ on the expected return is then equivalent to the following convex quadratic program:
$$\min \left\{ x^T Q x : r^T x \geq L, ||x||_1 = 1, \ x \in \mathbb{R}_+^n \right\}.$$
If we compute the Cholesky factorisation $Q = A^T A$, then the objective function is equivalent to $||Ax||_2^2$. So we can reformulate the problem as:
$$\min \left\{ t : t \geq ||y||_2, \ y = Ax, \ r^T x \geq L, \ ||x||_1 = 1, \ x \in \mathbb{R}_+^n, \ y \in \mathbb{R}^n, \ t \in \mathbb{R}_+ \right\}.$$
This is an SOCP. (Note that minimising $t \geq \|Ax\|_2$ also minimises $\|Ax\|_2^2 = x^T Q x$, since the square function is monotone on $\mathbb{R}_+$.) Similarly, the problem of maximising expected return subject to an upper bound $U$ on the variance can be modelled as the SOCP:
$$\max \left\{ r^T x : t \geq ||y||_2, \ y = Ax, \ t \leq \sqrt{U}, \ ||x||_1 = 1, \ x \in \mathbb{R}_+^n, \ y \in \mathbb{R}^n, \ t \in \mathbb{R}_+ \right\}.$$
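To make this concrete, here is a minimal sketch of the risk-minimisation SOCP in CVXPY (assuming CVXPY 1.x; any of the modelling layers of Sect. 6 would do). The data are made up for illustration.

```python
import numpy as np
import cvxpy as cp

np.random.seed(0)
n = 5
B = np.random.randn(n, n)
Q = B.T @ B                    # a (generically positive definite) covariance matrix
A = np.linalg.cholesky(Q).T    # Q = A^T A
r = np.random.rand(n)          # expected returns
L = 0.3                        # lower bound on expected return

x = cp.Variable(n, nonneg=True)
t = cp.Variable(nonneg=True)
constraints = [cp.norm(A @ x, 2) <= t,   # second-order cone: t >= ||Ax||_2
               r @ x >= L,               # expected-return floor
               cp.sum(x) == 1]           # ||x||_1 = 1 since x >= 0
cp.Problem(cp.Minimize(t), constraints).solve()
print(x.value, t.value)        # t.value is the minimal standard deviation
```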
In statistical estimation, one often encounters problems of the form:
\[
\min \left\{ \sum_{j=1}^{m} w_j \| y^j - x \|_2^2 : x \in S \right\},
\]
where
- \(y^1, \ldots, y^m\) are observations of a random \(n\)-vector with unknown mean;
- \(x\) is an estimate of the mean (to be determined);
- \(S \subset \mathbb{R}^n\) is a convex set representing some prior information on \(x\);
- \(w_j\) is the weight (importance) given to the \(j\)th observation;
- the objective is to minimise a weighted sum of squared residuals.
By introducing a new variable \(z \in \mathbb{R}_+\), a new vector of variables \(u \in \mathbb{R}_+^m\), and new vectors \(t^1, \ldots, t^m \in \mathbb{R}^n\), we can reformulate the estimation problem as:
\[
\begin{align*}
\min & \quad z \\
\text{s.t.} & \quad z \geq \| u \|_2 \\
& \quad u_j \geq \| t^j \|_2 \quad (j = 1, \ldots, m) \\
& \quad t^j = \sqrt{w_j} (y^j - x) \quad (j = 1, \ldots, m) \\
& \quad x \in S \\
& \quad z \in \mathbb{R}_+, \ u \in \mathbb{R}_+^m, \ t^j \in \mathbb{R}^n \quad (j = 1, \ldots, m).
\end{align*}
\]
If \(S\) is a polyhedron, the reformulated problem is an SOCP. If \(S\) is defined by a mixture of linear and convex quadratic constraints, the problem can be easily converted into an SOCP.
### 4.2 Problems involving hyperbolic functions
Now recall from Subsect. 2.1 that the *hyperbola* is a cross-section of the ice-cream cone. More precisely, the convex set
\[
\{(x, t) \in \mathbb{R}^2 \times \mathbb{R} : t \geq \| x \|_2, \ x_2 = 1\}
\]
is easily shown to be an affine image of the convex set
\[
\{(x, y) \in \mathbb{R}_+^2 : y \geq 1/x\},
\]
the boundary of which is a branch of a hyperbola. For this reason, one can easily impose constraints of hyperbolic type via SOCP.
Note that the constraint \(y \geq 1/x\), for \(x > 0\), is equivalent to \(xy \geq 1\). More generally, a *hyperbolic constraint* is any constraint of the form \(t_1 t_2 \geq \| Ax \|_2^2\), where \(t_1, t_2 \in \mathbb{R}_+\) and \(x \in \mathbb{R}^n\) are variables, and \(A\) is a real matrix with \(n\) columns. It is shown in [62] that any convex nonlinear program involving a combination of linear, convex quadratic and hyperbolic functions can be converted to an SOCP. (The feasible region of a hyperbolic constraint is sometimes called a *hyperboloid*.)
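For completeness, the standard identity behind this conversion (it appears in, e.g., [62]) is that, for $t_1, t_2 \in \mathbb{R}_+$:
\[
t_1 t_2 \geq \|Ax\|_2^2 \quad \Longleftrightarrow \quad \left\| \begin{pmatrix} 2Ax \\ t_1 - t_2 \end{pmatrix} \right\|_2 \leq t_1 + t_2,
\]
and the right-hand side is a second-order cone constraint in $(x, t_1, t_2)$.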
Here is a simple example of an OR problem that involves hyperbolic functions. The classic inventory control model of Harris [40] is
$$\min \left\{ hx/2 + cd/x : x \in \mathbb{R}_+ \right\},$$
where $x$ is the order quantity (to be determined), $h$ is the annual cost of holding one unit in stock, $c$ is the charge for a delivery, and $d$ is the annual demand. Harris solved this problem by calculus, and the optimal value of $x$ is the well-known *economic order quantity* or EOQ. Ziegler [99] considered the following multi-item extension of this model:
$$\begin{align*}
\min & \quad \sum_{i=1}^{n} (h_i x_i / 2 + c_i d_i / x_i) \\
\text{s.t.} & \quad \sum_{i=1}^{n} b_i x_i \leq b_0 \\
& \quad \ell_i \leq x_i \leq u_i \quad (i = 1, \ldots, n).
\end{align*}$$
Here, for a given product $i$, $x_i$ represents the order quantity, $h_i$ is the annual holding cost, $c_i$ is the delivery charge, $d_i$ is the annual demand, $b_i$ is the space occupied by one unit, and $\ell_i$ and $u_i$ are lower and upper bounds on the order quantity. The constant $b_0$ represents the storage space available.
Kuo & Mittelmann [53] showed that this problem can be transformed into an SOCP as follows. Define new variables $s_i$ and $t_i$ satisfying:
$$\begin{align*}
x_i &= (s_i - t_i)/2 \\
1/x_i &= (s_i + t_i)/2.
\end{align*}$$
Then, since $x_i \cdot (1/x_i) = (s_i^2 - t_i^2)/4 = 1$, relax this equation to $s_i^2 - t_i^2 \geq 4$ (the relaxation is tight at optimality, since $c_i d_i > 0$), and model the problem as:
$$\begin{align*}
\min & \quad \sum_{i=1}^{n} ((h_i/2 + c_i d_i)s_i + (c_i d_i - h_i/2)t_i) \\
\text{s.t.} & \quad \sum_{i=1}^{n} b_i(s_i - t_i) \leq 2b_0 \\
& \quad 2\ell_i \leq s_i - t_i \leq 2u_i \quad (i = 1, \ldots, n) \\
& \quad s_i \geq \sqrt{t_i^2 + 4} \quad (i = 1, \ldots, n) \\
& \quad s, t \in \mathbb{R}^n.
\end{align*}$$
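Here is a minimal CVXPY sketch of the transformed problem, with made-up data; the cone constraint $s_i \geq \sqrt{t_i^2 + 4}$ is written as $\|(t_i, 2)\|_2 \leq s_i$, and the objective equals twice the original cost, which does not affect the minimiser.

```python
import numpy as np
import cvxpy as cp

h = np.array([1.0, 2.0, 1.5])       # annual holding costs
c = np.array([10.0, 5.0, 8.0])      # delivery charges
d = np.array([100.0, 80.0, 120.0])  # annual demands
b = np.array([1.0, 2.0, 1.0])       # unit storage space
b0 = 60.0                           # total storage space
lo, up = np.full(3, 1.0), np.full(3, 40.0)  # bounds on order quantities

s, t = cp.Variable(3), cp.Variable(3)
cons = [b @ (s - t) <= 2 * b0,
        2 * lo <= s - t, s - t <= 2 * up]
cons += [cp.norm(cp.hstack([t[i], 2.0])) <= s[i] for i in range(3)]
obj = cp.Minimize((h / 2 + c * d) @ s + (c * d - h / 2) @ t)
cp.Problem(obj, cons).solve()
x = (s.value - t.value) / 2         # recover the order quantities
```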
### 4.3 Problems involving norms
SOCP can also be used to model problems involving Euclidean or other norms. For brevity, we give three examples.
The first example concerns facility location. Suppose we have $n$ facilities in the plane and wish to locate a new facility so as to minimise the sum of the weighted distances from the existing facilities. This can be modelled as:
\[
\begin{align*}
\min & \quad \sum_{i=1}^{n} w_i \sqrt{(x_i - \tilde{x})^2 + (y_i - \tilde{y})^2} \\
\text{s.t.} & \quad (\tilde{x}, \tilde{y}) \in \mathbb{R}^2,
\end{align*}
\]
where $(x_i, y_i)$ are the co-ordinates of the $i$th existing facility, and $(\tilde{x}, \tilde{y})$ are the co-ordinates of the new facility. (This problem is sometimes called the *Fermat-Weber* problem.) Defining new variables $d_1, \ldots, d_n$, representing the Euclidean distance from the new facility to each of the existing facilities, the problem can be converted into the following SOCP:
\[
\begin{align*}
\min & \quad \sum_{i=1}^{n} w_i d_i \\
\text{s.t.} & \quad d_i \geq \sqrt{u_i^2 + v_i^2} \quad (i = 1, \ldots, n) \\
& \quad u_i = x_i - \tilde{x} \quad (i = 1, \ldots, n) \\
& \quad v_i = y_i - \tilde{y} \quad (i = 1, \ldots, n) \\
& \quad (\tilde{x}, \tilde{y}) \in \mathbb{R}^2 \\
& \quad u, v, d \in \mathbb{R}^n.
\end{align*}
\]
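A minimal CVXPY sketch of this SOCP, with made-up facility data:

```python
import numpy as np
import cvxpy as cp

pts = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])  # existing facilities
w = np.array([1.0, 2.0, 1.0])                          # weights
n = len(pts)

p = cp.Variable(2)   # location of the new facility
d = cp.Variable(n)   # upper bounds on the Euclidean distances
cons = [cp.norm(pts[i] - p, 2) <= d[i] for i in range(n)]
cp.Problem(cp.Minimize(w @ d), cons).solve()
print(p.value)       # the optimal (Fermat-Weber) location
```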
Moreover, SOCP can also easily handle extensions of this problem in which there are linear, convex quadratic and/or hyperbolic constraints on the coordinates $\tilde{x}$ and $\tilde{y}$ and/or the distances $d_i$.
Our second example is a modified version of the Markowitz portfolio selection model (see Subsect. 4.1). Suppose we wish to maximise expected return subject to a *chance constraint*, which states that the probability of the return not exceeding some quantity $\alpha \in \mathbb{R}$ must be less than some small quantity $\beta > 0$. (For example, we may require that there is only a 0.1% chance of losing one million euros or more.) The chance constraint can be written as:
\[
r^T x + \Phi^{-1}(\beta) \sqrt{x^T Q x} \geq \alpha,
\]
where $\Phi$ is the cumulative distribution function of the standard Normal distribution. Using again the Cholesky factorisation $Q = A^T A$, the constraint can be written as:
\[
r^T x + \Phi^{-1}(\beta) \|Ax\|_2 \geq \alpha.
\]
To handle this with SOCP, we just add new variables $y \in \mathbb{R}^n$ and $z \in \mathbb{R}$, and add the constraints:
\[
\begin{align*}
y &= Ax \\
z &\geq \|y\|_2 \\
r^T x + \Phi^{-1}(\beta) z &\geq \alpha.
\end{align*}
\]
(Actually, this transformation only works when $\Phi^{-1}(\beta) < 0$. Fortunately, this is always the case in practice, since $\beta$ is always less than $1/2$.)
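A minimal CVXPY sketch of the chance-constrained model, with made-up data ($\Phi^{-1}$ is taken from SciPy):

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

np.random.seed(0)
n = 4
B = np.random.randn(n, n)
Q = B.T @ B                  # covariance matrix
A = np.linalg.cholesky(Q).T  # Q = A^T A
r = np.random.rand(n)        # expected returns
alpha, beta = -0.05, 0.001
phi = norm.ppf(beta)         # Phi^{-1}(beta), negative since beta < 1/2

x = cp.Variable(n, nonneg=True)
cons = [cp.sum(x) == 1,
        r @ x + phi * cp.norm(A @ x, 2) >= alpha]  # the chance constraint
cp.Problem(cp.Maximize(r @ x), cons).solve()
```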
Our third and final example comes from robust optimisation. Consider the LP
$$\min \left\{ c^T x : Ax \leq b, \ x \in \mathbb{R}_+^n \right\},$$
and suppose that the precise values of the components of the matrix $A$ are uncertain. Ben-Tal & Nemirovski [11] suggest writing the LP as:
$$\min \left\{ c^T x : a_i^T x \leq b_i \ (i = 1, \ldots, m), \ x \in \mathbb{R}_+^n \right\},$$
and then considering the case in which, for $i = 1, \ldots, m$, the vector $a_i$ is known to lie inside the ellipsoid
$$\left\{ \hat{a}_i + Q_i u : ||u||_2 \leq 1 \right\},$$
where the vectors $\hat{a}_i$ and the matrices $Q_i$ are known. They then show that the problem of minimising the worst-case cost, the so-called robust counterpart of the LP, can be formulated as:
$$\min \left\{ c^T x : \hat{a}_i^T x + ||Q_i x||_2 \leq b_i \ (i = 1, \ldots, m), \ x \in \mathbb{R}_+^n \right\}.$$
This is again easy to handle via SOCP.
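A minimal CVXPY sketch of the robust counterpart, on a made-up instance:

```python
import numpy as np
import cvxpy as cp

np.random.seed(1)
m, n = 2, 3
c = np.array([-1.0, -2.0, -0.5])          # costs (negative, so x = 0 is not optimal)
a_hat = np.random.rand(m, n)              # nominal constraint vectors
Qs = [0.1 * np.eye(n) for _ in range(m)]  # ellipsoid shape matrices
b = np.array([1.0, 1.5])

x = cp.Variable(n, nonneg=True)
cons = [a_hat[i] @ x + cp.norm(Qs[i] @ x, 2) <= b[i] for i in range(m)]
cp.Problem(cp.Minimize(c @ x), cons).solve()
```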
5 Applications of SDP
Although SOCP is a useful modelling tool, SDP is much more powerful. In this section, we give just a few examples of the kinds of problems that can be tackled using SDP. For additional examples, see, e.g., [4, 43, 73, 93, 94, 96, 97].
### 5.1 Problems involving special types of matrices
Covariance and correlation matrices play a fundamental role in statistics, probability and (as we saw in Subsect. 4.1 and 4.3) finance. It is well known that a real symmetric matrix is a covariance matrix if and only if it is psd, and a correlation matrix if and only if, in addition, it has 1s on the main diagonal. This means that one can use SDP to solve various optimisation problems involving such matrices, such as:
- The positive semidefinite matrix completion problem: given a matrix with missing entries, check if it can be completed to a covariance (or correlation) matrix (see, e.g., Johnson [49]).
- The nearest correlation matrix problem: given a matrix that is not a correlation matrix, find a correlation matrix that is as close as possible, where “close” is measured according to, e.g., an $L_1$, $L_2$ or $L_\infty$ norm (see, e.g., Higham [46]).
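As an illustration of the second problem above, here is a minimal CVXPY sketch of the nearest correlation matrix problem in the Frobenius ($L_2$) norm, on a made-up matrix:

```python
import numpy as np
import cvxpy as cp

M = np.array([[1.0, 0.9, 0.6],
              [0.9, 1.0, -0.9],
              [0.6, -0.9, 1.0]])   # symmetric, unit diagonal, but not psd

X = cp.Variable((3, 3), symmetric=True)
cons = [X >> 0, cp.diag(X) == 1]   # psd with 1s on the main diagonal
cp.Problem(cp.Minimize(cp.norm(X - M, 'fro')), cons).solve()
print(X.value)                     # the nearest correlation matrix
```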
Somewhat less well known are *Euclidean distance* (ED) matrices. A matrix $M \in S^n$ is an ED matrix if and only if there exist points $x^1, \ldots, x^n \in \mathbb{R}^n$ such that, for $i, j = 1, \ldots, n$, $M_{ij} = \|x^i - x^j\|_2^2$, i.e., the square of the Euclidean distance between $x^i$ and $x^j$. A classic result of Schoenberg [87] states that a given matrix $M \in S^n$ is an ED matrix if and only if the symmetric matrix $M'$ belongs to $S^{n-1}_+$, where:
$$M'_{ii} = M_{i,n} \quad (i = 1, \ldots, n-1)$$
$$M'_{ij} = \frac{1}{2}(M_{in} + M_{jn} - M_{ij}) \quad (1 \leq i < j \leq n-1)$$
Laurent [55] observes that one can therefore also use SDP to solve various optimisation problems involving ED matrices. This includes problems in, e.g., computational biology (such as molecular conformation problems) and engineering (such as wireless sensor network localisation problems); see Liberti et al. [61] for a survey.
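A small numerical check of Schoenberg's criterion (a sketch; the function name is ours):

```python
import numpy as np

def is_ed_matrix(M, tol=1e-9):
    """Schoenberg's test: M is a Euclidean distance matrix iff M' is psd."""
    n = M.shape[0]
    i, j = np.meshgrid(np.arange(n - 1), np.arange(n - 1), indexing='ij')
    Mp = 0.5 * (M[i, n - 1] + M[j, n - 1] - M[i, j])  # reduces to M'_ii = M_{i,n} on the diagonal
    return np.linalg.eigvalsh(Mp).min() >= -tol

x = np.random.randn(5, 3)                               # five random points
M = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=2)  # squared distances
print(is_ed_matrix(M))                                  # True
```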
### 5.2 Combinatorial optimisation
SDP has proven to be a remarkably useful tool for constructing strong bounds for various *combinatorial optimisation* problems. For brevity, we consider just two examples: the *stable set* and *max-cut* problems. For other examples, see, e.g., [1, 33, 34, 60, 64, 86, 96].
Let $G = (V, E)$ be an undirected graph. A set $S \subset V$ is called *stable* if no two nodes in $S$ are adjacent in $G$. The *stability number*, denoted by $\alpha(G)$, is the maximum cardinality of a stable set in $G$. The *stable set problem* calls for a stable set of maximum cardinality. It is not only NP-hard in the strong sense, but hard to approximate [41].
A simple 0-1 LP formulation of the stable set problem is:
\begin{align}
\text{max} & \quad \sum_{i \in V} x_i \notag \\
\text{s.t.} & \quad x_i + x_j \leq 1 \quad (\{i, j\} \in E) \tag{3} \\
& \quad x \in \{0, 1\}^n. \notag
\end{align}
Unfortunately, the LP relaxation of this formulation yields an extremely weak upper bound, since one can just set every variable to 1/2. Padberg [77] noted that one can strengthen the LP relaxation by replacing the constraints (3) with *clique inequalities* of the form $\sum_{i \in C} x_i \leq 1$, where $C$ is a maximal clique (set of pairwise-adjacent nodes) in $G$. Unfortunately, the number of cliques is in general exponential in $|V|$. Even worse, the *separation problem* for the clique inequalities (i.e., the problem of detecting when an LP solution violates a clique inequality) is NP-hard (e.g., Grötschel et al. [38]).
In his seminal paper, Lovász [63] defined a new upper bound for the stable set problem, which he called $\theta(G)$. Grötschel et al. [37, 38] showed that $\theta(G)$ can be computed by solving an SDP. One way to do it is as follows.
We begin by formulating the stable set problem as the following continuous quadratic optimisation problem:
\[
\begin{align*}
\max & \quad \sum_{i \in V} x_i \\
\text{s.t.} & \quad x_i^2 - x_i = 0 \quad (i \in V) \tag{4} \\
& \quad x_i x_j = 0 \quad (\{i, j\} \in E) \tag{5} \\
& \quad x \in \mathbb{R}^{|V|}.
\end{align*}
\]
Now we introduce the matrix
\[
Y = \begin{pmatrix} 1 \\ x \end{pmatrix} \begin{pmatrix} 1 \\ x \end{pmatrix}^T = \begin{pmatrix} 1 & x^T \\ x & xx^T \end{pmatrix},
\]
and note that $Y$ should be psd and have rank 1. We then replace $xx^T$ with a matrix variable $X$, and replace the quadratic terms in (4) and (5) with the corresponding entries in $X$. This yields the following alternative formulation of the stable set problem:
\[
\begin{align*}
\max & \quad \sum_{i \in V} x_i \\
\text{s.t.} & \quad X_{ii} - x_i = 0 \quad (i \in V) \tag{6} \\
& \quad X_{ij} = 0 \quad (\{i, j\} \in E) \tag{7} \\
& \quad Y = \begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \in S_+^{n+1} \tag{8} \\
& \quad \text{rank}(Y) = 1.
\end{align*}
\]
Dropping the rank constraint, which is non-convex, we obtain the desired SDP relaxation. The corresponding upper bound is $\theta(G)$.
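A minimal CVXPY sketch of this SDP on the 5-cycle $C_5$, for which Lovász [63] showed $\theta(C_5) = \sqrt{5} \approx 2.236$ (while $\alpha(C_5) = 2$):

```python
import cvxpy as cp

V = 5
E = [(i, (i + 1) % V) for i in range(V)]   # the 5-cycle

Y = cp.Variable((V + 1, V + 1), symmetric=True)
x = Y[0, 1:]                               # the vector part of Y
cons = [Y >> 0, Y[0, 0] == 1]
cons += [Y[i + 1, i + 1] == x[i] for i in range(V)]  # constraints (6)
cons += [Y[i + 1, j + 1] == 0 for (i, j) in E]       # constraints (7)
prob = cp.Problem(cp.Maximize(cp.sum(x)), cons)
prob.solve()
print(prob.value)                          # approx. 2.236
```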
The following result is due to Grötschel et al. [38]. To help the reader, we give a short proof here.
**Proposition 1** If $(x, X) \in \mathbb{R}^n \times S^n$ satisfies (6)–(8), then $x$ satisfies all clique inequalities.
**Proof.** Given any clique $C \subseteq V$, let $v(C) \in \{0, 1\}^n$ be a vector with a “1” in position $i$ if and only if node $i$ belongs to $C$. From constraint (8) and the first definition of psd-ness in Subsect. 2.1, we have
\[
\begin{pmatrix} -1 \\ v(C) \end{pmatrix}^T \begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \begin{pmatrix} -1 \\ v(C) \end{pmatrix} \geq 0,
\]
or, equivalently,
\[
2 \sum_{i \in C} x_i - \sum_{i \in C} \sum_{j \in C} X_{ij} \leq 1. \tag{9}
\]
Now, due to constraints (6) and (7), we have $X_{ii} = x_i$ for all $i \in C$ and $X_{ij} = 0$ for all $\{i, j\} \subseteq C$. Thus, (9) reduces to $2 \sum_{i \in C} x_i - \sum_{i \in C} x_i \leq 1$, which is equivalent to the clique inequality on $C$. \qed
Although easy to prove, this result is remarkable, given the above-mentioned fact that clique separation is $\mathcal{NP}$-hard. Indeed, it is a good illustration of the power of SDP relative to LP. Intuitively, the extra power comes from the fact that imposing psd-ness is equivalent to imposing an infinite number of linear inequalities.
Now we move on to max-cut. Given a graph $G = (V, E)$ and an arbitrary set $S \subseteq V$, the set
$$\left\{ \{i, j\} \in E : i \in S, j \in V \setminus S \right\}$$
is called an edge cutset or simply cut. Suppose we are also given a vector $w \in \mathbb{R}^{|E|}$ of edge weights. The max-cut problem calls for a cut of maximum total weight. The problem is $\mathcal{NP}$-hard in the strong sense (Garey et al. [30]).
A simple 0-1 LP formulation of max-cut is:
\begin{align*}
\text{max} & \quad \sum_{e \in E} w_e x_e \\
\text{s.t.} & \quad x_{ij} + x_{ik} + x_{jk} \leq 2 \quad (\{i, j, k\} \subset V) \tag{10} \\
& \quad x_{ik} + x_{jk} \geq x_{ij} \quad (\{i, j\} \subset V, k \in V \setminus \{i, j\}) \tag{11} \\
& \quad x_{ij} \in \{0, 1\} \quad (\{i, j\} \subset V).
\end{align*}
Unfortunately, the LP relaxation can be rather weak. Poljak & Tuza [81] showed that, even when $w \geq 0$, the upper bound from the relaxation can approach twice the weight of the optimal cut.
An SDP relaxation of the max-cut problem was proposed by Schrijver (unpublished), and then studied in, e.g., [25, 35, 57, 79]. For each $i \in V$, let $z_i$ be a variable taking the value 1 if $i \in S$, and $-1$ otherwise. Then the max-cut problem can be formulated as:
\begin{align*}
\text{max} & \quad \frac{1}{2} \sum_{\{i, j\} \in E} (1 - z_i z_j) \\
\text{s.t.} & \quad z_i^2 = 1 \quad (i \in V) \\
& \quad z \in \mathbb{R}^{|V|}.
\end{align*}
The corresponding SDP relaxation is:
\begin{align*}
\text{max} & \quad \frac{1}{2} \sum_{\{i, j\} \in E} (1 - Z_{ij}) \\
\text{s.t.} & \quad Z_{ii} = 1 \quad (i \in V) \\
& \quad Z \in \mathcal{S}_+^n.
\end{align*}
In a major breakthrough, Goemans & Williamson [35] proved that, when $w \geq 0$, the upper bound from the SDP is no more than $1/0.878 \approx 1.14$ times the weight of the optimal cut. (They did this by showing how to compute a cut whose weight is at least 0.878 times the SDP bound.)
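A minimal sketch of the SDP relaxation and the Goemans-Williamson rounding step in CVXPY, on the 5-cycle with unit weights (maximum cut weight 4):

```python
import numpy as np
import cvxpy as cp

V = 5
E = [(i, (i + 1) % V) for i in range(V)]

Z = cp.Variable((V, V), symmetric=True)
cons = [Z >> 0, cp.diag(Z) == 1]
prob = cp.Problem(cp.Maximize(0.5 * sum(1 - Z[i, j] for (i, j) in E)), cons)
prob.solve()                                 # SDP bound for C_5: approx. 4.52

# Rounding: factor Z = U U^T (rows of U are unit vectors), then cut with
# a random hyperplane through the origin.
U = np.linalg.cholesky(Z.value + 1e-8 * np.eye(V))
z = np.where(U @ np.random.randn(V) >= 0, 1.0, -1.0)
cut = 0.5 * sum(1 - z[i] * z[j] for (i, j) in E)
```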
Observe that the feasible region to the SDP can be projected into $x$-space via the identities $Z_{ij} = 1 - 2x_{ij}$. It turns out that the projection “wraps” rather closely around the convex hull of cut vectors. For example, from a consideration of 3-by-3 principal submatrices of $Z$, it can be shown that a point in the projection cannot violate any of the triangle inequalities (10), (11) by more than 1/4. This provides a partial explanation for the strength of the SDP bound; see, e.g., [8, 29, 57, 58] for more details.
In a similar way, one can construct SDP relaxations of any problem that can be formulated as a 0-1 LP. In fact, it is possible to construct entire hierarchies of SDP relaxations; see, e.g., [56, 59, 65, 85, 89]. Moreover, SDP has been successfully applied to many 0-1 quadratic programs; see [44, 60, 80, 91] for early work on the subject, and [51, 95] for some recent applications.
### 5.3 Non-convex quadratic (and polynomial) optimisation
Finally, we consider general quadratic optimisation problems, and their natural generalisation, polynomial optimisation problems.
A quadratically constrained quadratic program (QCQP) is a problem of the form:
\[
\begin{align*}
\inf & \quad x^T Q^0 x + c^0 \cdot x \\
\text{s.t.} & \quad x^T Q^k x + c^k \cdot x \leq b_k \quad (k = 1, \ldots, m) \\
& \quad x \in \mathbb{R}^n,
\end{align*}
\]
where $Q^k \in S^n$, $c^k \in \mathbb{R}^n$ and $b_k \in \mathbb{R}$ for $k = 0, \ldots, m$. If $Q^0, \ldots, Q^m$ are all psd, then the QCQP is convex and can be converted into an SOCP (see Subsect. 4.1). In general, however, QCQP is NP-hard in the strong sense. (Indeed, this follows from the fact that the stable set and max-cut problems can be formulated as QCQPs; see the previous subsection). Moreover, non-convex QCQPs arise in many contexts besides OR, including economics and finance (Horst et al. [48]) and signal processing (Luo et al. [66]).
Ramana [83] proposed the following natural SDP relaxation of QCQP:
\[
\begin{align*}
\inf & \quad Q^0 \bullet X + c^0 \cdot x \\
\text{s.t.} & \quad Q^k \bullet X + c^k \cdot x \leq b_k \quad (k = 1, \ldots, m) \\
& \quad \begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \in S_+^{n+1}.
\end{align*}
\]
The derivation of this relaxation is similar to the one for the stable set problem described in the previous subsection. (Earlier, Shor [90] derived essentially the same relaxation, but in dual form.)
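As a small sketch of this relaxation in CVXPY, on a toy nonconvex instance where it happens to be tight:

```python
import numpy as np
import cvxpy as cp

# Toy QCQP: min -||x||^2  s.t.  ||x||^2 <= 1  (nonconvex; true optimum is -1).
n = 2
Q0 = -np.eye(n)                  # objective matrix, not psd
Q1, b1 = np.eye(n), 1.0          # constraint data

Y = cp.Variable((n + 1, n + 1), symmetric=True)  # Y = [[1, x^T], [x, X]]
X = Y[1:, 1:]
cons = [Y >> 0, Y[0, 0] == 1, cp.trace(Q1 @ X) <= b1]   # Q^1 . X <= b_1
prob = cp.Problem(cp.Minimize(cp.trace(Q0 @ X)), cons)  # Q^0 . X (c^k = 0 here)
prob.solve()
print(prob.value)                # -1.0: the relaxation is tight here
```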
A hierarchy of SDP relaxations for QCQP, similar to the one of Lovász & Schrijver for 0-1 LPs, was proposed by Fujie & Kojima [28]. Some results on
the quality of the SDP bound, similar to the result of Goemans & Williamson [35], are surveyed in Nesterov et al. [74].
In practice, the vector $x$ is usually constrained to be non-negative, or even to lie in a hypercube. This fact can be exploited to derive stronger SDP relaxations; see, e.g., [5, 7, 9, 18, 27]. The best such relaxations are shown by Anstreicher [6] to dominate many other known relaxations. There is also evidence that SDP-based algorithms for QCQP can be effective in practice; e.g., [5, 20, 66].
A natural generalisation of QCQP is polynomial optimisation, in which the objective and constraint functions can be arbitrary polynomials. Polynomial optimisation is a fascinating inter-disciplinary field, with relevance not only to OR, but also to statistics, computer science, engineering, and branches of pure mathematics such as algebraic geometry, commutative algebra and moment theory [4, 24]. Interesting and powerful SDP relaxations and hierarchies for polynomial optimisation have been proposed by, e.g., Lasserre [54] and Parrilo [78]. However, the complexity of solving the SDPs in the hierarchies is still not fully settled; see O’Donnell [76].
6 Software
Finally, we mention some of the available software systems. We cover modelling interfaces in Subsect. 6.1 and solvers in Subsect. 6.2. For brevity, we do not cover all systems, and refer the reader to Mittelmann [70] for a more comprehensive survey. (Readers may also find the “Decision Tree for Optimization Software”\footnote{http://plato.asu.edu/guide.html} useful.)
### 6.1 Modelling interfaces
We are aware of the following five modelling interface systems:
- **CVX**\footnote{http://cvxr.com/} is an influential and long-established system for conic programming based on MatLab. The CVX website states “CVX is a popular modeling framework for disciplined convex programming that... turns MatLab into a modeling language, allowing constraints and objectives to be specified using standard MatLab syntax.”
- **CVXOPT**\footnote{http://cvxopt.org/} is similar to CVX but based on Python. It can be used within the open-source mathematics software system SageMath.
- **JuliaOpt**\footnote{http://www.juliaopt.org} now contains a modelling language called JuMP.
- **YALMIP**\textsuperscript{5} is also based on MatLab, and acts as a convenient front-end to many solvers.
- **PICOS**\textsuperscript{6} provides a front-end to many solvers and is based on Python. It also does some automatic reformulation (e.g., it has a function that “replaces quadratic constraints by equivalent second order cone constraints”).
We remark that, although using modelling languages makes it much easier to model and solve problems, there is usually a computational overhead, in terms of both time and memory.
### 6.2 CO solvers
Now we list some of the commonly-used solvers. (A more comprehensive list can be found on the YALMIP site\textsuperscript{7}.)
- CSDP\textsuperscript{8} is a C library for solving SDPs using an interior-point method. It is now part of COIN-OR.\textsuperscript{9}
- ECOS\textsuperscript{10} is an SOCP solver designed for embedded systems. It is written using less than a thousand lines of C code, and it can also handle constraints involving the exponential cone.
- MOSEK\textsuperscript{11} has supported both SOCP and SDP since version 7.0. A recent blog entry\textsuperscript{12} reports on the solution of an SOCP with over 0.5 million variables (on a powerful parallel processor).
- PENNON\textsuperscript{13} is an implementation of a generalized augmented Lagrangian algorithm for SDPs. It can solve SDPs with general convex objective and constraint functions.
- SDPA\textsuperscript{14} is a collection of C++ routines for SDP, based on a primal-dual interior-point method. It is designed to exploit sparsity in the constraints.
\textsuperscript{5}\url{http://yalmip.github.io/allsolvers/}
\textsuperscript{6}\url{http://picos.zib.de}
\textsuperscript{7}\url{https://yalmip.github.io/allsolvers/}
\textsuperscript{8}\url{https://projects.coin-or.org/Csdp/wiki/CSDPUsed}
\textsuperscript{9}\url{https://projects.coin-or.org/}
\textsuperscript{10}\url{https://www.embotech.com/ECOS}
\textsuperscript{11}\url{http://www.mosek.com/}
\textsuperscript{12}\url{http://blog.mosek.com/2017/05/biggest-conic-quadratic-problem-solved.html}
\textsuperscript{13}\url{http://web.mat.bham.ac.uk/kocvara/pennon/}
\textsuperscript{14}\url{http://sdpa.sourceforge.net/}
- SDPLR\textsuperscript{15} is a C library for SDP, based on the augmented Lagrangian method.
- SDPT3\textsuperscript{16} is a library of MatLab routines that can solve SOCPs, SDPs, and various other problems. An infeasible path-following algorithm is used.
- SeDuMi\textsuperscript{17} is another useful MatLab toolbox, based on the concept of “self-dual embedding”.
Finally, we mention two MatLab packages that are specifically designed for solving polynomial optimisation problems: GloptiPoly\textsuperscript{18} and SOSTOOLS\textsuperscript{19}.
7 Conclusions
In this guide, we hope to have convinced the reader that Conic Optimisation is an elegant and powerful generalisation of standard linear programming, which allows one to capture many forms of non-linearity that arise in practical problems. In particular, both SOCP and SDP enable one to model many \textit{convex} non-linearities arising in practice (such as convex quadratic and hyperbolic functions, and functions involving norms or eigenvalues); and SDP also provides good bounds and approximation algorithms for many \textit{non-convex} (and NP-hard) problems, including a wide range of combinatorial and global optimisation problems.
Moreover, software for SOCP and SDP, and CO in general, is developing rapidly. This includes not only solvers, but also modelling languages and procedures for automatic reformulation. Hence, although CO is more difficult to master than LP, we believe that it will soon become a standard technique for both practitioners and researchers in optimisation, just as LP is at present.
References
[1] F. Alizadeh (1995) Interior point methods in semidefinite programming with applications to combinatorial optimization. \textit{SIAM J. Optim.}, 5, 13–51.
[2] F. Alizadeh & D. Goldfarb (2003) Second-order cone programming. \textit{Math. Program.}, 95, 3–51.
\textsuperscript{15}\url{http://sburer.github.io/projects.html}
\textsuperscript{16}\url{http://www.math.cmu.edu/~reha/sdpt3.html}
\textsuperscript{17}\url{http://sedumi.ie.lehigh.edu/}
\textsuperscript{18}\url{http://homepages.laas.fr/henrion/software/gloptipoly3/}
\textsuperscript{19}\url{http://www.cds.caltech.edu/sostools/}
[3] F. Alizadeh, J.-P.A. Haeberly & M.L. Overton (1997) Complementarity and nondegeneracy in semidefinite programming. *Math. Program.*, 77, 111–128.
[4] M. Anjos & J.B. Lasserre (eds.) (2012) *Handbook on Semidefinite, Conic and Polynomial Optimization*. International Series in OR/MS, vol. 166. New York: Springer.
[5] K.M. Anstreicher (2009) Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming. *J. Glob. Optim.*, 43, 471–484.
[6] K.M. Anstreicher (2012) On convex relaxations for quadratically constrained quadratic programming. *Math. Program.*, 136, 233–251.
[7] K.M. Anstreicher & S. Burer (2010) Computable representations for convex hulls of low-dimensional quadratic forms. *Math. Program.*, 124, 33–43.
[8] D. Avis & J. Umemoto (2003) Stronger linear programming relaxations of max-cut. *Math. Program.*, 97, 451–469.
[9] X. Bao, N.V. Sahinidis & M. Tawarmalani (2011) Semidefinite relaxations for quadratically constrained quadratic programming: a review and comparisons. *Math. Program.*, 129, 129–157.
[10] S.J. Benson, Y. Ye & X. Zhang (2000) Solving large-scale sparse semidefinite programs for combinatorial optimization. *SIAM J. Optim.*, 10, 443–461.
[11] A. Ben-Tal & A. Nemirovski (1999) Robust solutions of uncertain linear programs. *Oper. Res. Lett.*, 25, 1–13.
[12] A. Ben-Tal & A. Nemirovski (2001) On polyhedral approximations of the second order cone. *Math. Oper. Res.*, 26, 193–205.
[13] D. Bertsekas (2016) *Nonlinear Programming* (3rd edn.) Belmont, MA: Athena Scientific.
[14] S. Boyd & L. Vandenberghe (2004) *Convex Optimization*. Cambridge: Cambridge University Press.
[15] G. Braun, S. Fiorini, S. Pokutta & D. Steurer (2015) Approximation limits of linear programs (beyond hierarchies). *Math. Oper. Res.*, 40, 756–772.
[16] S. Burer (2009) On the copositive representation of binary and continuous nonconvex quadratic programs. *Math. Program.*, 120, 479–495.
[17] S. Burer (2012) Copositive programming. In M. Anjos & J.B. Lasserre (eds.) *op. cit.*, 201–218.
[18] S. Burer & A.N. Letchford (2009) On non-convex quadratic programming with box constraints. *SIAM J. Optim.*, 20, 1073–1089.
[19] S. Burer & R.D.C. Monteiro (2005) Local minima and convergence in low-rank semidefinite programming. *Math. Program.*, 103, 427–444.
[20] S.A. Burer & D. Vandenbussche (2008) A finite branch-and-bound algorithm for nonconvex quadratic programming via semidefinite relaxations. *Math. Program.*, 113, 259–282.
[21] V. Chandrasekaran & P. Shah (2017) Relative entropy optimization and its applications. *Math. Program.*, 161, 1–32.
[22] G.B. Dantzig (1951) Maximization of a linear function of variables subject to linear inequalities. In T.C. Koopmans (ed.) *Activity Analysis of Production and Allocation*, pp. 339–347. New York: Wiley.
[23] G.B. Dantzig (1963) *Linear Programming and Extensions*. Princeton, NJ: Princeton University Press.
[24] J.A. De Loera, R. Hemmecke & M. Köppe (2013) *Algebraic and Geometric Ideas in the Theory of Discrete Optimization*. SIAM-MOS Series on Optimization, vol. 14. Philadelphia, PA: SIAM.
[25] C. Delorme & S. Poljak (1993) Combinatorial properties and the complexity of an eigenvalue approximation of the max-cut problem. *Eur. J. Combin.*, 14, 313–333.
[26] M.M. Deza & M. Laurent (1997) *Geometry of Cuts and Metrics*. Berlin: Springer-Verlag.
[27] M. Dür (2010) Copositive programming: a survey. In M. Diehl *et al.* (eds.) *Recent Advances in Optimization and its Applications in Engineering*, pp. 3–20. Berlin: Springer.
[28] T. Fujie & M. Kojima (1997) Semidefinite programming relaxation for nonconvex quadratic programs. *J. Glob. Optim.*, 10, 367–380.
[29] L. Galli, K. Kaparis & A.N. Letchford (2012) Complexity results for the gap inequalities for the max-cut problem. *Oper. Res. Lett.*, 40, 149–152.
[30] M.R. Garey, D.S. Johnson & L. Stockmeyer (1976) Some simplified $NP$-complete graph problems. *Theor. Comput. Sci.*, 1, 237–267.
[31] F. Glineur (2000) An extended conic formulation for geometric optimization. *Found. Comput. Decis. Sci.*, 25, 161–174.
[32] F. Glineur & T. Terlaky (2004) Conic formulation for $\ell_p$-norm optimization. *J. Optim. Th. Appl.*, 122, 285–307.
[33] M.X. Goemans (1997) Semidefinite programming in combinatorial optimization. *Math. Program.*, 79, 143–161.
[34] M.X. Goemans & F. Rendl (2000) Combinatorial optimization. In H. Wolkowicz *et al.* (eds.) *op. cit.*, pp. 343–360.
[35] M.X. Goemans & D. Williamson (1995) Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. *J. of the ACM*, 42, 1115–1145.
[36] J. Gondzio (2012). Interior point methods 25 years later. *Eur. J. Oper. Res.*, 218, 587–601.
[37] M. Grötschel, L. Lovász & A.J. Schrijver (1981) The ellipsoid method and its consequences in combinatorial optimization. *Combinatorica*, 1, 169–197.
[38] M. Grötschel, L. Lovász & A.J. Schrijver (1988) *Geometric Algorithms and Combinatorial Optimization*. New York: Wiley.
[39] M. Hall Jr. (1962) Discrete problems. In J. Todd (ed.) *A Survey of Numerical Analysis*, pp. 518–542. New York: McGraw-Hill.
[40] F.W. Harris (1913) How many parts to make at once. *Factory, The Magazine of Management*, 10, 135–136, 152.
[41] J. Håstad (1999) Clique is hard to approximate within $n^{1-\epsilon}$. *Acta Mathematica*, 182, 105–142.
[42] C. Helmberg (2000) Fixing variables in semidefinite relaxations. *SIAM J. Matrix Anal. & Appl.*, 21, 952–969.
[43] C. Helmberg (2002) Semidefinite programming. *Eur. J. Oper. Res.*, 137, 461–482.
[44] C. Helmberg & F. Rendl (1998) Solving quadratic (0,1)-programs by semidefinite programs and cutting planes. *Math. Program.*, 82, 291–315.
[45] C. Helmberg & F. Rendl (1999) A spectral bundle method for semidefinite programming. *SIAM J. Optim.*, 10, 673–696.
[46] N. Higham (2002) Computing the nearest correlation matrix—A problem from finance. *IMA J. Numer. Anal.*, 22, 329–343.
[47] R.A. Horn (2012) *Matrix Analysis* (2nd edn). Cambridge: Cambridge University Press.
[48] R. Horst, P.M. Pardalos & N. Thoai (2000) *Introduction to Global Optimization* (2nd edn.). Dordrecht: Kluwer.
[49] C.R. Johnson (1990) Matrix completion problems: a survey. In C.R. Johnson (ed.) *Matrix Theory and Applications*, pp. 171–198. Providence, RI: Amer. Math. Soc.
[50] L.G. Khachiyan (1979) A polynomial algorithm in linear programming. *Soviet Math. Doklady*, 20, 191–194.
[51] E. de Klerk, R. Sotirov & U. Truetsch (2015) A new semidefinite programming relaxation for the quadratic assignment problem and its computational perspectives. *INFORMS J. Comput.*, 27, 378–391.
[52] K. Krishnan & J.E. Mitchell (2006) A unifying framework for several cutting plane methods for semidefinite programming. *Optim. Meth. & Soft.*, 21, 57–74.
[53] Y.-J. Kuo & H.D. Mittelmann (2004) Interior-point methods for second-order cone programming and OR applications. *Comput. Optim. & Appl.*, 28, 255–285.
[54] J.B. Lasserre (2001) Global optimization with polynomials and the problem of moments. *SIAM J. Optim.*, 11, 796–817.
[55] M. Laurent (1998) A connection between positive semidefinite and Euclidean distance matrix completion problems. *Lin. Alg. & Appl.*, 273, 9–22.
[56] M. Laurent (2003) A comparison of the Sherali-Adams, Lovász-Schrijver, and Lasserre relaxations for 0–1 programming. *Math. Oper. Res.*, 28, 470–496.
[57] M. Laurent & S. Poljak (1995) On a positive semidefinite relaxation of the cut polytope. *Lin. Alg. Appl.*, 223/224, 439–461.
[58] M. Laurent & S. Poljak (1996) Gap inequalities for the cut polytope. *Eur. J. Combin.*, 17, 233–254.
[59] M. Laurent & F. Rendl (2005) Semidefinite programming and integer programming. In K. Aardal *et al.* (eds.) *Handbook on Discrete Optimization*, pp. 393–514. Amsterdam: Elsevier.
[60] C. Lemaréchal & F. Oustry (2001) SDP relaxations in combinatorial optimization from a Lagrangian viewpoint. In N. Hadjisavvas & P.M. Pardalos (eds.) *Advances in Convex Analysis and Global Optimization*, pp. 119–134. Dordrecht: Kluwer.
[61] L. Liberti, C. Lavor, N. Maculan & A. Mucherino (2014) Euclidean distance geometry and applications. *SIAM Review*, 56, 3–69.
[62] M.S. Lobo, L. Vandenberghe, S. Boyd & H. Lebret (1998) Applications of second-order cone programming. *Lin. Alg. Appl.*, 284, 193–228.
[63] L. Lovász (1979) On the Shannon capacity of a graph. *IEEE Trans. Inform. Th.*, IT-25, 1–7.
[64] L. Lovász (2003) Semidefinite programs and combinatorial optimization. In B. Reed & C.L. Sales (eds.) *Recent Advances in Algorithms and Combinatorics*, pp. 137–194. New York: Springer.
[65] L. Lovász & A.J. Schrijver (1991) Cones of matrices and set-functions and 0–1 optimization. *SIAM J. Optim.*, 1, 166–190.
[66] Z.-Q. Luo, W.-K. Ma, A.M.-C. So, Y. Ye & S. Zhang (2010) Semidefinite relaxation of quadratic optimization problems. *IEEE Signal Processing Magazine*, 27, 20–34.
[67] J. Malick, J. Povh, F. Rendl & A. Wiegele (2009) Regularization methods for semidefinite programming. *SIAM J. Optim.*, 20, 336–356.
[68] H.M. Markowitz (1952) Portfolio selection. *J. Finance*, 7, 77–91.
[69] J.E. Mitchell (2001) Restarting after branching in the SDP approach to MAX-CUT and similar combinatorial optimization problems. *J. Combin. Optim.*, 5, 151–166.
[70] H.D. Mittelmann (2012) The state-of-the-art in conic optimization software. In M. Anjos & J.B. Lasserre (eds.), *op. cit.*, pp. 671–686.
[71] T.S. Motzkin (1952) Copositive quadratic forms. *Nat. Bur. Stand. Rep. 1818*, 11–22.
[72] K.G. Murty & S.N. Kabadi (1987) Some $\mathcal{NP}$-complete problems in quadratic and nonlinear programming. *Math. Program.*, 39, 117–129.
[73] Y. Nesterov & A. Nemirovsky (1994) *Interior Point Methods in Convex Programming: Theory and Applications*. Philadelphia, PA: SIAM Press.
[74] Y. Nesterov, H. Wolkowicz & Y. Ye (2000) Semidefinite programming relaxations of nonconvex quadratic optimization. In H. Wolkowicz *et al.* (eds.) *op. cit.*, pp. 361–419.
[75] J. Nie, K. Ranestad & B. Sturmfels (2010) The algebraic degree of semidefinite programming. *Math. Program.*, 122, 379–405.
[76] R. O’Donnell (2016) SOS is not obviously automatizable, even approximately. ECCC Report No. 141.
[77] M.W. Padberg (1973) On the facial structure of set packing polyhedra. *Math. Program.*, 5, 199–215.
[78] P. Parrilo (2003) Semidefinite programming relaxations for semialgebraic problems. *Math. Program.*, 96, 293–320.
[79] S. Poljak & F. Rendl (1995) Non-polyhedral relaxations of graph-bisection problems. *SIAM J. Optim.*, 5, 467–487.
[80] S. Poljak, F. Rendl & H. Wolkowicz (1995) A recipe for semidefinite relaxation for \((0,1)\)-quadratic programming. *J. Glob. Optim.*, 7, 51–73.
[81] S. Poljak & Zs. Tuza (1994) The expected relative error of the polyhedral approximation of the max-cut problem. *Oper. Res. Lett.*, 16, 191–198.
[82] J. Povh, F. Rendl & A. Wiegele (2009) A boundary point method to solve semidefinite programs. *Computing*, 78, 277–286.
[83] M. Ramana (1993) *An Algorithmic Analysis of Multiquadratic and Semidefinite Programming Problems*. PhD thesis, Johns Hopkins University, Baltimore, MD.
[84] M.V. Ramana (1997) An exact duality theory for semidefinite programming and its complexity implications. *Math. Program.*, 77, 129–162.
[85] F. Rendl (2010) Semidefinite relaxations for integer programming. In M. Jünger et al. (eds.) *50 Years of Integer Programming 1958-2008*, pp. 687–726. Heidelberg: Springer.
[86] F. Rendl (2012) Semidefinite relaxations for partitioning, assignment and ordering problems. *4OR*, 10, 321–346.
[87] I.J. Schoenberg (1938) Metric spaces and positive definite functions. *Trans. Amer. Math. Soc.*, 44, 522–536.
[88] C. Seligman (1993) *Online Astronomy Text*. Available at: http://cseligman.com/text/history/ellipses.htm
[89] H.D. Sherali & W.P. Adams (1990) A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. *SIAM J. Discr. Math.*, 3, 411–430.
[90] N.Z. Shor (1987) Quadratic optimization problems. *Sov. J. Comput. Syst. Sci.*, 25, 1–11.
[91] N.Z. Shor (1990) Dual quadratic estimates in polynomial and Boolean programming. *Ann. Oper. Res.*, 25, 163–168.
[92] A. Skajaa, E.D. Andersen & Y. Ye (2013) Warmstarting the homogeneous and self-dual interior point method for linear and conic quadratic problems. *Math. Program. Comput.*, 5, 1–25.
[93] M.J. Todd (2001) Semidefinite optimization. *Acta Numerica*, 10, 515–560.
[94] L. Vandenberghe & S. Boyd (1996) Semidefinite programming. *SIAM Review*, 38, 49–95.
[95] P. Wang, C. Shen, A. van den Hengel & P.H.S. Torr (2017) Large-scale binary quadratic optimization using semidefinite relaxation and applications. *IEEE Trans. Patt. Anal. & Mach. Intel.*, 39, 470–485.
[96] H. Wolkowicz & M.F. Anjos (2002) Semidefinite programming for discrete optimization and matrix completion problems. *Discr. Appl. Math.*, 123, 513–577.
[97] H. Wolkowicz, R. Saigal & L. Vandenberghe (eds.) (2000) *Handbook of Semidefinite Programming*. International Series in OR/MS, vol. 27. New York: Springer.
[98] Z. Wen, D. Goldfarb & W. Yin (2010) Alternating direction augmented Lagrangian methods for semidefinite programming. *Math. Program. Comput.*, 2, 203–230.
[99] H. Ziegler (1982) Solving certain singly constrained convex optimization problems in production planning. *Oper. Res. Lett.*, 1, 246–252.
1 Supplementary Information
### 1.1 The Payoff Matrix
The payoff matrix of a Prisoners’ Dilemma (PD) has the form:
\[
\begin{array}{c|c|c}
& C & D \\
\hline
C & R & S \\
D & T & P \\
\end{array}
\]
(1)
where \(C\) accounts for cooperation, \(D\) for defection, and \(T > R > P > S\). This payoff matrix makes defection the dominant strategy: no matter which option the other player chooses, it is always more profitable to defect. Evidence shows that subjects often interpret the PD game as a Coordination Game [1]. One explanation for this perception is that people have an inequity aversion that reduces the off-diagonal elements of the matrix [2]. Depending on the parameters of this model, the PD payoff matrix can turn into a Coordination Game payoff matrix, where the most profitable action depends on the action of the other player. We follow this path and use the following payoff matrix:
\[
P = \begin{array}{c|c|c}
& C & D \\
\hline
C & 1.3 & 0 \\
D & 1.1 & 0.4 \\
\end{array}
\]
(2)
We keep the \(C\) and \(D\) notation. While it is true that not all subjects interpret the PD as a Coordination Game, by using this payoff matrix we endow our model with agents of the more cooperative kind: the ones that are willing to cooperate if the other player will also do so. Precisely for this reason, the agents that we use are the ones that are most resilient to the biases that we study.
With the payoff matrix of Eq. (2) we can compute the action with the highest expected reward given the likelihood of defection of the other player:
\[
E(R|C) = P_{CC} + (P_{CD} - P_{CC})\hat{\theta}
\]
\[
E(R|D) = P_{DC} + (P_{DD} - P_{DC})\hat{\theta}
\]
where \(E(R|C)\) and \(E(R|D)\) are the expected rewards of cooperating and of defecting, respectively, and \(\hat{\theta}\) is the probability that the other player chooses to defect. There is a value \(\hat{\theta}^* = \frac{1}{3}\) at which these two expected rewards are equal. If the estimate \(\hat{\theta}\) is greater than \(\hat{\theta}^*\) then \(E(R|D) > E(R|C)\), and if it is smaller then \(E(R|D) < E(R|C)\).
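Explicitly, with the entries of the payoff matrix in Eq. (2):
\[
E(R|C) = 1.3 - 1.3\,\hat{\theta}, \qquad E(R|D) = 1.1 - 0.7\,\hat{\theta},
\]
and setting the two equal gives \(0.2 = 0.6\,\hat{\theta}\), i.e., \(\hat{\theta}^* = \frac{1}{3}\).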
### 1.2 The Beta Distribution
The functional form of the probability density function of the beta distribution for \(0 \leq x \leq 1\) is:
\[
\beta(x; a, b) = \frac{x^{a-1}(1-x)^{b-1}}{B(a, b)}
\]
where $B$ stands for the beta function (not the beta distribution), which acts as a normalization constant, and $a > 0$ and $b > 0$.
If both parameters satisfy $a > 1$ and $b > 1$, then the distribution has a bell shape, as can be seen in Fig. 5 of the main text. The most important property of the beta distribution for this work is that it is the conjugate prior of the Bernoulli distribution. This implies that using Bayes' theorem to compute a posterior with a Bernoulli likelihood and a Beta prior yields a Beta posterior, allowing the use of the posterior as a new prior in an iterative fashion.
### 1.3 The Bayesian Updating Rule
To estimate the future actions of the other agents, each agent assumes that the other agent will defect with probability $\theta$. This assumption amounts to thinking that the other agent chooses her options randomly using a Bernoulli distribution with parameter $\theta$. To estimate the parameter of a Bernoulli distribution, we use a *Beta* distribution. This choice is natural because the *Beta* distribution is the conjugate prior for the Bernoulli distribution, which means that the posterior distribution follows the same parametric form as the prior [3, 4]. The *Beta* distribution has two parameters $a$ and $b$, which we associate with the number of times an agent experienced that another agent defected or cooperated against him. For example, if an agent has a prior distribution $Beta_\theta(5,7)$, then it will estimate that the probability that the other agent defects on him is $\frac{5}{12}$, that is, the number of defections observed divided by the total number of observations. If, when playing, the other agent defects against him, he will update his prior using Bayes' theorem:
$$P(\theta|D) \propto Bi_\theta Beta_\theta(5,7),$$
where $Bi_\theta$ is the Bernoulli distribution, which in this case is the likelihood of observing a defection, and $P(\theta|D)$ is the conditional probability of $\theta$ after observing a defection. It turns out that $P(\theta|D) = Beta_\theta(6,7)$; thus the Bayesian way of updating the prior after an observation is to increment the parameter $a$ or $b$ by one unit, depending on whether the agent observed a defection or a cooperation, respectively.
A problem that arises with this model is that the agents would have infinite memory, because each time they play they update their prior. This is a problem for two reasons: first, the beta distribution approaches a delta function and numerical problems arise; second, the memory that people use in this game is not infinite [5]. To solve this problem, we limit the number of observations to 10 in a First In First Out fashion: when a new observation is made, the oldest one is deleted. To avoid improper priors, we also add one $C$ and one $D$ observation, in such a way that the parameters of the *Beta* distribution always sum to 12.
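A minimal Python sketch of this bounded-memory updating rule (the class and its names are illustrative, not the code used for the simulations):

```python
from collections import deque

class BeliefEstimator:
    """Beta-Bernoulli estimate of the defection probability theta, with a
    FIFO memory of 10 observations plus one C and one D pseudo-observation."""
    def __init__(self, memory=10):
        self.obs = deque(maxlen=memory)  # True = defection, False = cooperation

    def observe(self, defected):
        self.obs.append(defected)        # the oldest observation drops out

    def theta_hat(self):
        a = 1 + sum(self.obs)                   # defections + pseudo-count
        b = 1 + len(self.obs) - sum(self.obs)   # cooperations + pseudo-count
        return a / (a + b)                      # posterior mean; a + b <= 12
```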
### 1.4 The Empirical Value of Projection
In [6] we performed an experiment where subjects, playing a game among each other, reported their belief about the other choosing the most selfish action (an action that is similar to a defection in a PD game). When the subjects were also allowed to choose a selfish action against the other player, their belief about the other changed significantly, by 0.2. This means that acting in a selfish manner (or defecting) changes one's beliefs about others. The subjects that were allowed to behave more selfishly performed on average 4.47 more selfish actions than the control group, which leads to an average difference of 0.045 per selfish action. Given that we use a Beta distribution to describe the belief of agents regarding the probability of others defecting against them, we should compute what change in the parameters of the Beta (whose parameters always sum to 12) leads to a variation of 0.045 in the mean value. That is:
\[
\int_0^1 \theta \text{Beta}_\theta(a + \text{Projection}, b) d\theta = \int_0^1 \theta \text{Beta}_\theta(a, b) d\theta + 0.045
\]
\[
\frac{a + \text{Projection}}{a + b} = \frac{a}{a + b} + 0.045
\]
\[
\frac{\text{Projection}}{12} = 0.045
\]
\[
\text{Projection} = 0.54
\]
### 1.5 Properties of the Erdős-Rényi Network
To build the network we used the NetworkX python library [7]. The Erdős-Rényi network was built with $10^5$ nodes and a $4 \times 10^{-5}$ probability of edge creation, resulting in an average degree of 4. After this, we removed the self-edges and kept only the largest connected component. The final network has 97964 nodes and an average degree of 4.07.
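A minimal sketch of this construction (assuming NetworkX 2.x; the edge probability $4 \times 10^{-5}$ is the value consistent with the reported average degree):

```python
import networkx as nx

G = nx.fast_gnp_random_graph(100_000, 4e-5, seed=0)  # Erdos-Renyi G(n, p)
G.remove_edges_from(nx.selfloop_edges(G))            # drop self-edges, if any
giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()
avg_deg = 2 * giant.number_of_edges() / giant.number_of_nodes()
print(giant.number_of_nodes(), avg_deg)              # approx. 98000 and 4.07
```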
### 1.6 Computation of the Cascade Condition
Given that the estimated probability of defection of another agent, $\hat{\theta}$, is the mean value of a beta-distributed random variable, its value is computed in the following way:
\[
\hat{\theta} = \frac{a}{a + b}
\]
If we add the Projection bias, the value of $a$ is replaced by
\[
a \leftarrow \left( a + \text{Projection} \frac{a_{own}}{k} \right) \frac{12}{12 + \text{Projection} \frac{a_{own}}{k}}
\]
\[
b \leftarrow b \frac{12}{12 + \text{Projection} \frac{a_{own}}{k}}
\]
where $a_{own}$ is the number of defection actions done by the agent herself, and $k$ is her number of neighbors. The fraction, $\frac{12}{12 + \text{Projection} \frac{a_{own}}{k}}$, is a normalization factor to keep $a + b = 12$. Then, the biased estimated probability of defection is computed as:
\[
\hat{\theta}^m = \frac{a + \text{Projection} \frac{a_{\text{own}}}{k}}{12 + \text{Projection} \frac{a_{\text{own}}}{k}}
\]
Finally, setting \(a = 1\), which is the best-case scenario, and \(a_{\text{own}} = 12\), which is the value reached after enough interactions with an ALLD agent, this yields:
\[
\hat{\theta}^m = \frac{1 + \frac{12}{k} \text{Projection}}{12 + \frac{12}{k} \text{Projection}}
\] (3)
The requirement for the agents to choose to defect, that is, to change their initial decision to cooperate, is that \(\hat{\theta}^m > \frac{1}{3}\). This expression arises from the values of the payoff matrix of the game (see SI 1.1). Then, the condition for an agent to change its behavior toward other \(BA\) agents is:
\[
k < \frac{8}{3} \text{Projection}
\]
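Explicitly, substituting Eq. (3) into the condition \(\hat{\theta}^m > \frac{1}{3}\) and rearranging:
\[
\frac{1 + \frac{12}{k} \text{Projection}}{12 + \frac{12}{k} \text{Projection}} > \frac{1}{3}
\;\Longleftrightarrow\;
3 + \frac{36}{k} \text{Projection} > 12 + \frac{12}{k} \text{Projection}
\;\Longleftrightarrow\;
k < \frac{8}{3} \text{Projection}.
\]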
The agents satisfying this condition are called vulnerable, because a single ALLD agent in their neighborhood will change their behavior. Given that the agents are on a random network, it is possible to calculate what proportion of them are in this vulnerable condition. If the vulnerable agents percolate the network (that is, they form a finite connected fraction of it), cascades are possible. In our model, this happens if:
\[
\text{Projection} > 1.5
\] (4)
If we assume a symmetric \textit{Projection}, its value does not change the vulnerability of the agents. Nevertheless, if the value of the \textit{Paranoia} bias is greater than zero, the computational simulations show similar system behavior.
### 1.7 Analysis of the Model with Symmetrical Projection
Based on empirical results, we propose a model where selfish actions make people think that others are selfish too. However, the experimental setting could affect these results, so in this section we analyze the situation where altruistic actions also change the agents' beliefs, so that cooperating makes them think that others are more likely to cooperate. We repeat our analysis but with this symmetric \textit{Projection} bias.
First, we study what happens when only one ALLD agent is introduced. As can be seen in Fig. 1, if the \textit{Paranoia} bias is set to 0, then the effect of the \textit{Projection} bias is not present. However, if the \textit{Paranoia} bias is 0.36 the effect appears. This pattern is due to the interaction effect that is also present in the original model.
Following the same analysis as in the main text, we now study the effect of introducing a fraction of ALLD agents. Given that the \textit{Projection} bias does not produce any effect by itself, but only in combination with \textit{Paranoia}, we study the case where $Paranoia = 0.22$. Fig. 2 shows a pattern similar to those of Figs. 3 and 4 in the main text, with two kinds of phase transitions.
Finally, and following the analysis of the main text, we simulate the evolution of the system for sixteen combinations of parameters and classify them into the same three categories as in the main text. The results can be seen in Fig. 3. This map is similar to the map in Fig. 6 of the main text, but the regions are shifted towards cooperation, as expected due to the symmetry of this new model. Even though cooperative actions also shift the beliefs of the agents towards cooperation, there are regions of high defection because of the interaction of the $Projection$ bias with the $Paranoia$ bias.
### 1.8 Percolation Threshold Without Biases
If the agents in the network do not have any bias ($Projection = 0$ and $Paranoia = 0$), the percolation threshold $f_{c1}$ can be derived analytically. The problem maps onto a standard percolation process, for which the percolation threshold of an Erdős-Rényi network is:
$$f_c = \frac{1}{\langle k \rangle}$$
where $\langle k \rangle$ is the average degree of the nodes in the network.
Our system is different because two ALLD agents can be part of the same cluster of defection even if they are not linked by an edge in the network. For example, if two ALLD agents are connected to the same regular agent of the network, they will be part of the same cluster, because the three agents will be defecting against each other. Since, on average, each ALLD agent has $\langle k \rangle$ neighbors, any other ALLD agent has $\langle k \rangle + 1$ possible ways (the neighbors and the agent itself) of forming a cluster with it. The fraction $f$ of ALLD agents should therefore be scaled by the factor $\langle k \rangle + 1$, which yields the critical value:
$$f_{c1} = \frac{1}{(\langle k \rangle + 1) \langle k \rangle}$$
Figure 2: (a) Heat map of the size of the giant component as Projection and $f$ vary. The heat map shows two different regions: when Projection < 0.7, the value of $S_{gc}$ increases continuously with $f$; if Projection ≥ 0.7, the value of $S_{gc}$ changes discontinuously at a critical value $f_{c2}$ of $f$. (b) Examples of $f$ vs. $S_{gc}$ for three values of Projection. The arrows show the point $f_{c1}$ at which $S_{gc}$ changes from zero to positive values, and the dashed line indicates the discontinuous transition at $f_{c2}$.
Figure 3: Classification of the network according to its stability for 16 different parameters. The two insets show $S_a$ and $S_{gc}$ as a function of the fraction of ALLD agents. The dashed line in the left inset shows the expected $S_a$ due only to the presence of the ALLD agents. It can be seen that the stable system does not have a sharp transition, while the bistable ones do, and the unstable one has large $S_a$ and $S_{gc}$ for any non-zero value of the fraction of ALLD agents.
Given that in our simulations $\langle k \rangle = 4$, the analytical value of the threshold is $f_{c1} = 0.05$.
### 1.9 Sensitivity to Initial Value of $\hat{\theta}^m$
In the main text, the simulations were performed with a specific initial value of $\hat{\theta}^m$. As can be seen in Fig. 5 of the main text, the value used there is the maximum $\hat{\theta}^m$ that leads to cooperative behavior. This choice was made to maximize the number of comparisons, but in this section we show that our conclusions hold even if this value changes.
We explore the results when the initial conditions are such that the initial value of $\hat{\theta}^m$ is lower than the threshold of cooperation. We investigate the evolution of the system when the initial value of $\hat{\theta}^m$ is $\frac{1}{4}$, $\frac{1}{6}$ and $\frac{1}{12}$; the corresponding belief distributions can be seen in Fig. 4, Fig. 5 and Fig. 6, respectively.
Figure 4: Belief distribution for three parameter sets of Paranoia, $a$ and $b$. In A we plot a Beta$_\theta(3,9)$ with Paranoia = 0, in B a Beta$_\theta(2,10)$ with Paranoia = 0.23, and in C a Beta$_\theta(1,11)$ with Paranoia = 0.34. The green line shows the mean value of the probability of defection, $\hat{\theta}$, while the red line shows the manipulated mean value $\hat{\theta}^m$. The area under the distribution between $\hat{\theta}$ and $\hat{\theta}^m$ is the value of the Paranoia parameter. The mean of the distribution (or equivalently the values of $a$ and $b$) and the Paranoia parameters have been chosen in such a way that they compensate each other and lead to the same manipulated mean $\hat{\theta}^m$. The blue vertical line shows the limit above which the agent believes that the maximum reward is achieved by defecting and below which the agent believes that the maximum reward is obtained by cooperating. Under these three conditions, the agents initially cooperate with each other.
As can be seen in Fig. 7, when the initial value of $\hat{\theta}^m$ is $\frac{1}{4}$, the system can behave in the same three ways as shown in the main text. The main difference from the results of the main text is that the values of Projection that set the system in a High Defection state are now greater, in agreement with the fact that the maximum level of Paranoia is now smaller. In Fig. 8 and Fig. 9, we show the same results for the initial values $\hat{\theta}^m = \frac{1}{6}$ and $\hat{\theta}^m = \frac{1}{12}$, respectively.
We also performed the simulations for the model with the symmetric version of the projection bias. The results for the symmetric version of the bias with initial value $\hat{\theta}^m = \frac{1}{4}$ can be seen in Fig. 10. In this case, we find the same three possible states as well. The results for the symmetric version of the bias with initial value $\hat{\theta}^m = \frac{1}{6}$ can be seen in Fig. 11. Since we already know that the symmetric version of Projection does not affect the system if Paranoia = 0, we do not show results with initial value $\hat{\theta}^m = \frac{1}{12}$ and Paranoia = 0. For the symmetric version of the Projection bias, when the initial belief is $\hat{\theta}^m = \frac{1}{6}$, and therefore we cannot set the Paranoia parameter to a value greater than 0.25, we do not find the High Defection state. This lack of a High Defection state is due to the effect of Projection on the cooperative behavior. As stated in the main text, the symmetric version
Figure 5: Belief distribution for two parameter sets of *Paranoia*, $a$ and $b$. In A we plot a $Beta_\theta(2,10)$ with $Paranoia = 0$, and in B a $Beta_\theta(1,11)$ with $Paranoia = 0.25$. The green line shows the mean value of the probability of defection, $\hat{\theta}$, while the red line shows the manipulated mean value $\hat{\theta}^m$. The area under the distribution between $\hat{\theta}$ and $\hat{\theta}^m$ is the value of the *Paranoia* parameter. The mean of the distribution (or equivalently the values of $a$ and $b$) and the *Paranoia* parameters have been chosen in such a way that they compensate each other and lead to the same manipulated mean $\hat{\theta}^m$. The blue vertical line shows the limit above which the agent believes that the maximum reward is achieved by defecting and below which the agent believes that the maximum reward is obtained by cooperating. Under these two conditions, the agents initially cooperate with each other.
Figure 6: Belief distribution for $Beta_\theta(1,11)$ with $Paranoia = 0$. The green line shows the mean value of the probability of defection, $\hat{\theta}$. The blue vertical line shows the limit above which the agent believes that the maximum reward is achieved by defecting and below which the agent believes that the maximum reward is obtained by cooperating. Under this condition, the agents, initially, cooperate with each other.
Figure 7: Classification of the network according to its stability for 12 different parameter sets. The simulations were performed with the asymmetrical version of the Projection bias. The two insets show $S_a$ and $S_{gc}$ as a function of the fraction of ALLD agents. The dashed line in the left inset shows the expected $S_a$ due only to the presence of the ALLD agents, assuming that they do not interact with each other. The initial belief of the agents was set such that $\hat{\theta}^m = \frac{1}{4}$ in every simulation.
Figure 8: Classification of the network according to its stability for 12 different parameter sets. The simulations were performed with the asymmetrical version of the Projection bias. The two insets show $S_a$ and $S_{gc}$ as a function of the fraction of ALLD agents. The dashed line in the left inset shows the expected $S_a$ due only to the presence of the ALLD agents, assuming that they do not interact with each other. The initial belief of the agents was set such that $\hat{\theta}^m = \frac{1}{6}$ in every simulation.
Figure 9: Classification of the network according to its stability for 5 different parameter sets. The simulations were performed with the asymmetrical version of the Projection bias. The two insets show $S_a$ and $S_{gc}$ as a function of the fraction of ALLD agents. The dashed line in the left inset shows the expected $S_a$ due only to the presence of the ALLD agents, assuming that they do not interact with each other. The initial belief of the agents was set such that $\hat{\theta}^m = \frac{1}{12}$ in every simulation.
of Projection does not affect the system by itself, but only when Paranoia is greater than zero. In this simulation, the value of Paranoia is not high enough, even in combination with Projection, to push the system into a High Defection state. In fact, if the Projection bias is strong enough, its effect is to isolate the ALLD agents. This isolation happens because the regular agents of the network start with a cooperative behavior, and the Projection mechanism then biases them even more towards cooperation, counteracting the effect of observing the defection actions of the ALLD agents. However, the Projection values needed to counter the effect of the ALLD agents are beyond our estimate of the bias's empirical value.
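The compensation between the belief mean and *Paranoia* described in the captions of Figs. 4–6 can be checked numerically. The following is a minimal sketch (not the simulation code itself), assuming, per the captions, that *Paranoia* equals the Beta probability mass between the true mean $\hat{\theta} = a/(a+b)$ and the manipulated mean $\hat{\theta}^m$; the printed values land close to the quoted ones, with small differences presumably due to rounding.

```python
# Check of the Paranoia values quoted in the captions of Figs. 4-6.
# Assumption (from the captions): Paranoia is the area under Beta(a, b)
# between the true mean a / (a + b) and the manipulated mean theta_m,
# i.e. the CDF difference F(theta_m) - F(mean).
from scipy.stats import beta

def paranoia(a, b, theta_m):
    """Probability mass of Beta(a, b) between its mean and theta_m."""
    mean = a / (a + b)
    return beta.cdf(theta_m, a, b) - beta.cdf(mean, a, b)

# Fig. 4: the three parameter sets share the manipulated mean 1/4.
for a, b in [(3, 9), (2, 10), (1, 11)]:
    print(f"Beta({a},{b}): Paranoia ~ {paranoia(a, b, 1/4):.2f}")
# Prints approximately 0.00, 0.24 and 0.34, close to the quoted 0, 0.23, 0.34.
```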
1.10 Sensitivity to the Memory Parameter
In the main text, we set the memory of the agents to 10. This parameter constrains the family of beta distributions that the agents use to estimate $\theta$: the $a$ and $b$ parameters of these beta distributions must satisfy $a + b = 12$. Here, we investigate the evolution of the system when we set the memory of the agents to 12, which imposes the constraint $a + b = 14$. These belief distributions are shown in Fig. 12. As the sum of the $a$ and $b$ parameters increases, the variance of the beta distribution decreases, so we expect the effect of Paranoia to be diminished.
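That expectation follows from a standard identity for the beta distribution: writing $\mu = a/(a+b)$ for the mean,
\[
\operatorname{Var}[\theta] = \frac{ab}{(a+b)^2 (a+b+1)} = \frac{\mu(1-\mu)}{a+b+1},
\]
so for a fixed mean the variance shrinks as $a+b$ grows. A more concentrated belief places a higher density around $\hat{\theta}$, so a given *Paranoia* mass corresponds to a smaller displacement of the manipulated mean $\hat{\theta}^m$.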
Figure 11: Classification of the network according to its stability for 14 different parameter sets. The simulations were performed with the symmetrical version of the Projection bias. The two insets show $S_a$ and $S_{gc}$ as a function of the fraction of ALLD agents. The initial belief of the agents was set such that $\hat{\theta}^m = \frac{1}{6}$ in every simulation. The third set of results is not shown in the bottom inset because the non-zero values of $S_{gc}$ lie in the region $f > 0.25$, as predicted by standard percolation theory.
Figure 12: Belief distribution for four parameter sets of *Paranoia*, $a$ and $b$. In A we plot a $\text{Beta}_\theta(4, 10)$ with $\text{Paranoia} = 0$, in B a $\text{Beta}_\theta(3, 11)$ with $\text{Paranoia} = 0.21$, in C a $\text{Beta}_\theta(2, 12)$ with $\text{Paranoia} = 0.35$, and in D a $\text{Beta}_\theta(1, 13)$ with $\text{Paranoia} = 0.37$. The green line shows the mean value of the probability of defection, $\hat{\theta}$, while the red line shows the manipulated mean value $\hat{\theta}^m$. The area under the distribution between $\hat{\theta}$ and $\hat{\theta}^m$ is the value of the *Paranoia* parameter. The mean of the distribution (or equivalently the values of $a$ and $b$) and the *Paranoia* parameters have been chosen in such a way that they compensate each other and lead to the same manipulated mean $\hat{\theta}^m = \frac{2}{7}$. The blue vertical line shows the limit above which the agent believes that the maximum reward is achieved by defecting and below which the agent believes that the maximum reward is obtained by cooperating. Under these four conditions, the agents, initially, cooperate with each other.
In Fig. 13, we show the results using the asymmetric version of the bias, and in Fig. 14, the results using the symmetric version. As in the main text, the three classifications of the system appear for both versions of the Projection bias.
Figure 14: Classification of the network according to its stability for 16 different parameter sets. The simulations were performed with the symmetrical version of the Projection bias. The two insets show $S_a$ and $S_{gc}$ as a function of the fraction of ALLD agents. The dashed line in the left inset shows the expected $S_a$ due only to the presence of the ALLD agents, assuming that they do not interact with each other. The initial belief of the agents was set such that $\hat{\theta}^m = \frac{2}{7}$ in every simulation.
Ohio Legislative Update
April 15, 2020
Overview
Just over a month ago, Ohio confirmed its first cases of COVID-19; now there are over 7,000. Governor Mike DeWine and Ohio Department of Health Director Dr. Amy Acton continue to hold daily press briefings with updates for the public, taking their first weekend off since early March this past Easter weekend. With Ohio closing schools, universities, bars, restaurants, gyms, and more, and limiting public gatherings to 10 people for several weeks, the curve is flattening. These swift decisions have landed Ohio in the national spotlight. Below are updates since the end of March; continuing updates are included in G2G's weekly webinars, which summarize coronavirus legislation, regulatory guidance and new funding opportunities.
The Numbers
As testing becomes more widespread and Ohio approaches its peak, a steep rise in confirmed cases and, unfortunately, deaths is expected. The peak is estimated to arrive between April 15 and May 15 and could last several weeks. Because of Ohio's aggressive social distancing, the peak is now expected to reach 1,600 new cases per day (instead of 10,000). According to the Ohio Data Dashboard as of April 14:
- Total Cases: 7,280
- Confirmed ICU Admissions: 654
- Hospitalizations: 2,156
- Deaths: 324
Office of Small Business Relief
Last week, Lt. Governor Jon Husted announced the Office of Small Business Relief, a new office within the Ohio Development Services Agency (DSA) created to better coordinate Ohio's efforts to identify and provide support for Ohio's nearly 950,000 small businesses. The Office will serve as the state's designated agency for administering federal recovery funds awarded to Ohio for small business support and recovery, and will work with federal, state, and local partners to evaluate and determine possible regulatory reforms that encourage employment and job creation.
Business
- **Unemployment** – ODJFS reported that claims for the past two weeks total more than 468,000, compared with 364,000 filed in all of 2019. ODJFS has added 300 staff to help answer phones. For those willing and able to work, the state's website now has a section listing essential jobs in urgent need of workers – currently over 33,000 postings. Cincinnati decided to furlough as many as 1,700 workers after revised budget estimates projected a $27.5 million deficit. Akron furloughed 600 of its 1,800 workers until further notice.
- **Taxes** – Ohio’s tax revenues dropped more than 10% in March, but the lagging nature of collections and the timing of recent public health orders mean that drop doesn’t represent the full scope of the economic damage. For now, tax collections are still ahead of estimates for the fiscal year to date by half a percent, bringing in $89.5 million more than expected and reaching $16.99 billion.
- **Dividends** – BWC will distribute $1.6 billion in dividends to thousands of public and private employers, relative to each employer’s premiums minus any outstanding debt. This will mean one less bill that businesses will need to worry about during the outbreak.
Stay at Home Order
This Order is now in effect until May 1. In addition to the previous order, it adds:
- Essential businesses must determine and enforce a maximum number of customers allowed in a store at one time. These businesses must ensure that people waiting to enter the stores maintain safe social distancing.
- Those coming to Ohio from out of state are asked to self-quarantine for 14 days, with an exception for residents and workers who regularly cross the state border.
- Lawn care services are allowed to operate as long as they work as single mowers.
- No organized sports for kids or adults. Campgrounds must be closed unless they serve as a place of residence. Fishing is permitted if proper social distance is maintained.
**Dispute Panel**
Chaired by Commerce Director Sherry Maxfield, this panel allows the state to offer guidance to local health departments to ensure uniformity across counties. For example, if one county allows an establishment to stay open but another does not, the panel will decide the guidance and all counties will then adhere to it. If you have a dispute, please fill out the [form](#) and email it to: email@example.com
**General Assembly Updates**
- **Ohio House 2020 Economic Recovery Task Force** – Speaker Householder created a panel that meets remotely to map out a strategy for returning to business in Ohio. Each meeting features guests from the business, manufacturing, retail, wholesale, services and recreation sectors to discuss their experience and brainstorm ways to revive Ohio's job market once COVID-19 has run its course. The panel has held two meetings so far and appears to be pushing to reopen sooner than the Governor and ODH. Rep. Paul Zeltwanger (R-Mason) is the chair, with Rep. Terrence Upchurch (D-Cleveland) as vice chairman. Hearings are posted [here](#).
- **Rainy Day Fund** – Speaker Householder stated it is likely Ohio will have to dip into its $2.7 billion rainy day fund to stabilize the state budget amid the coronavirus crisis.
- **Unemployment** – Ohio expanded unemployment eligibility by waiving the one-week waiting period and allowing those needing to self-quarantine by doctor's order to receive unemployment compensation. Ohio limits payments to a maximum of 26 weeks, but Congress added 13 weeks of payments after state payments end, for a maximum of 39 weeks. Benefits cover up to half of your average weekly wage over the prior 20 weeks, and Congress added an extra $600 a week through the end of July.
- **Tax Filing** – Ohio moved the state tax filing deadline to July 15 to align with the federal deadline.
- **Schools** – The legislature passed legislation enabling schools to pass students from one grade to the next, use online learning, and skip the regular statewide testing. They also lifted the cap on distance learning credits, removed testing and report card requirements, and froze the school voucher (EdChoice) decisions. Additionally, any senior who was on track to graduate will graduate at the close of the school year.
- **Primary Election** – With Ohio's March 17 in-person primary canceled due to the pandemic, the legislature voted to extend mail-in voting until April 28. Ohioans who have not yet voted can request a ballot online or by phone. Ballots must be postmarked by April 27 or returned to your county board of elections before polls close on April 28.
**Alternative Healthcare Sites**
Governor DeWine announced a plan to expand healthcare services at alternative sites in addition to the traditional medical care facilities should they be needed for potential COVID-19 surges. These sites were selected based on distance to an existing hospital, conditions safe for patients and health care professionals, and space to meet the region’s expected needs.
1. Seagate Convention Center, Lucas County
2. Case Western Reserve University’s Health Education Campus, Cuyahoga County
3. Dayton Convention Center, Montgomery County
4. Covelli Convention Center, Mahoning County
5. Duke Energy Convention Center, Hamilton County
6. Greater Columbus Convention Center, Franklin County
**Hospital Impact**
Ohio hospitals are facing a $1.2 billion monthly revenue drop from suspending elective procedures during the pandemic. On March 17, the Ohio Department of Health ordered the suspension of all but essential surgeries and procedures; procedures remain allowed if necessary to preserve life, an organ, or a limb, or to prevent the spread of cancer or the progression of a disease to severe symptoms. This freeze on hospitals' main source of income, caused by across-the-board cancellations of scheduled procedures, accounts for about 85% of the $1 billion-plus hospitals are losing monthly. Those surgeries pay the bills so hospitals can offer obstetrics, mental health, and other services. Annual revenue is about $48 billion across 236 hospitals and 14 systems. There are 68 hospitals in the state with an operating margin below 2%, and 52 at or below 0%; by the end of this crisis, some will be forced to close. The income loss is already prompting furloughs of mostly administrative positions and some clinical professionals as well. In addition to this financial loss, health care workers account for about 1 in 5 of Ohio's confirmed COVID-19 cases – more than 1,300 health care workers across the state.
Fortunately, State Treasurer Sprague launched the Variable Rate Demand Obligation (VRDO) Stabilization Program, under which the state is working to buy up to $900 million of the short-term debt that Ohio hospitals commonly issue to fund their operations. Because many hospitals' borrowing costs have spiked dramatically, the program will let them borrow at a lower cost, saving hundreds of thousands or even millions of dollars in interest each year. In addition, the $2 trillion federal CARES Act that was enacted in March allows hospitals to receive an advance on expected future Medicare reimbursement and allocates $100 billion to reimburse eligible healthcare providers for expenses or lost revenues directly attributable to COVID-19. While these measures provide liquidity support, Moody's still expects the coronavirus to have a “significant negative impact” on hospitals' cash flow in 2020.
**Food Assistance**
The U.S. Department of Agriculture (USDA) Tuesday approved a request from ODJFS to use USDA food from The Emergency Food Assistance Program (TEFAP) for disaster household distribution. The approval runs through April 30 and will enable ODJFS to serve 1.25 million Ohioans through its network of 13 foodbanks and more than 2,800 distribution sites. DeWine also signed an executive order that provides nearly $5 million in emergency funding from the Temporary Assistance to Needy Families (TANF) block grant to support Ohio’s 12 Feeding America foodbanks and the statewide hunger relief network. The funding will be used to purchase available items such as canned fruits and vegetables; canned meats; cereals, pastas and rice; boxed dinners; locally grown produce; locally produced milk, butter, cheese and dairy products through partnerships with the Ohio Dairy Producers Association, Dairy Farmers of America – Mideast Area and the National Farmers Organization; fresh meat and eggs; and essential household cleaning and personal hygiene items.
**COVID-19 Response**
- **Personal Protective Equipment (PPE)** – The Ohio Manufacturing Alliance, 19 manufacturers partnering with three hospital groups, began large-scale production of face shields. Over the next five weeks, 750,000 to 1 million face shields will be added to Ohio's stockpile. Resources can be found [here](#).
- **Mask & Respirator Sterilization** – FDA authorized Battelle to deploy the Critical Care Decontamination System, which sterilizes surgical masks, without a daily limit. The system can sterilize 80,000 masks per day, per system (there are two at this time). Battelle was originally authorized to sterilize only 10,000 per day; a frustrated Governor DeWine contacted the FDA and President Trump, and within 24 hours the system was approved without limits. Then, STERIS Healthcare in Mentor was granted a temporary Emergency Use Authorization for decontaminating compatible N95 and N95-equivalent respirators.
- **Testing and Ventilators** – On April 1, the Ohio Department of Health banned hospitals in the state from sending coronavirus tests to private, third-party labs, citing long wait times at private labs; all hospitals must now send tests to major healthcare systems. Additionally, Dr. Acton issued an order requiring a statewide inventory of ventilators.
- **Plasma Treatment** – An experimental treatment using plasma from COVID-19 survivors to treat patients fighting the disease was approved for use in Ohio. Lindner Research Center at The Christ Hospital in Cincinnati developed the protocol and will oversee a study in its use at hospitals across the state, but it needs to find donors.
**Regional resources** — [Columbus](#), [Cleveland](#), [Cincinnati](#), and [Dayton](#)
**Ohio Questions** — 1-833-4-ASK-ODH or [firstname.lastname@example.org](mailto:email@example.com)
**Bioscience industry resource** — [BioOhio](#)
Please let us know any questions and be sure to register for our [Webinars](#).
Liz Powell, Esq., MPH – G2G Founder – 202.445.4242 cell | firstname.lastname@example.org
Andrea Harless, MPA – Government Affairs Manager, Columbus – 330.565.2374 | email@example.com
www.G2Gconsulting.com | G2G on [Twitter](https://twitter.com/G2Gconsulting)
Is Public Expenditure Productive? Evidence from the Manufacturing Sector in U.S. Cities, 1880–1920
Melissa Yeoh and Dean Stansel
This article provides the first examination of the relationship between public expenditures and labor productivity that focuses on municipalities, rather than states or nations. We use data for 1880–1920, a period of rapid industrialization in which there were both high levels of public infrastructure spending and rapid growth of productivity. We use a simple Cobb-Douglas production function to model labor productivity in the manufacturing sector, letting total factor productivity depend on “productive” public expenditure by city governments—that is, on public spending that may raise the productivity of labor and encourage human capital accumulation.
Using a data set of 45 of the largest cities in the United States, we find no statistically significant relationship between productive public expenditure and labor productivity in the manufacturing sector during this period. These findings are robust to three different econometric approaches. We do, however, find a strongly positive and statistically significant relationship between private capital and labor productivity. Our results are consistent with those of much of the literature examining this same relationship in states and nations, and they have important implications for contemporary public policy issues.
An Overview
The decline in labor productivity in the United States during the 1970s created a challenging puzzle for economists to solve. Aschauer (1989) found that public capital had a strongly positive relationship with productivity in the United States, and argued that the productivity decline had been caused by a decline in public expenditure on infrastructure. Munnell (1990) and others found similar results. These initial findings were used by politicians and policymakers as evidence of the need for large increases in government spending on infrastructure. Some critics identified flaws in the econometric approach of this early work and, after correcting those flaws, found either a negative relationship or no statistically significant relationship between public capital and productivity. Peterson (1994) found that the marginal rate of return on public capital had declined substantially since 1950 and was substantially lower than that on private capital. He suggested that policies to increase private capital would contribute more to the growth of output than would the increases in public infrastructure recommended by Aschauer and others. The early literature on both sides of the debate is summarized in Munnell (1992) and Gramlich (1994). Few have been able to replicate the large effects found by Aschauer. Work in this area has slowed, but there remains no consensus (see, e.g., Kalyvitis and Vella 2011, and Ligthart and Suárez 2011). Moreover, virtually all of the existing literature has focused on national, regional, or state data and has analyzed contemporary time periods. While there is a substantial empirical literature investigating the relationship between local government spending and economic growth,\(^1\) there appear to be no studies that examine the relationship with local labor productivity.\(^2\)
\(^1\)See, for example, Glaeser, Scheinkman, and Shleifer (1995), Holtz-Eakin and Schwartz (1995), Crihfield and Panggabean (1995), Dalenberg and Partridge (1995), De Mello (2002), Glaeser and Shapiro (2003), Denaux (2007), and Stansel (2009). The general consensus of this literature is that government spending, as a whole, has no significant relationship with growth; however, a positive relationship was found for several specific categories of spending.
\(^2\)Rauch (1994), Eberts (1986), Deno (1988), Eberts (1990), Duffy-Deno and Eberts (1991), and Boarnet (1998) all take a local approach and are closely related to the issue we examine, but none contain results using local labor productivity as their dependent variable.
One of Aschauer’s (1989: 177) key findings was that “a ‘core’ infrastructure of streets, highways, airports, mass transit, sewers, water systems, etc. has [the] most explanatory power for productivity.” Since the bulk of that infrastructure spending is done by local governments, we take a different approach than previous researchers and focus on local-level data and do so for a period of rapid industrialization, 1880–1920. During that time, there was a great deal of public expenditure on the construction of infrastructure and a rapid increase in the productivity of labor.\(^3\) If public expenditure is positively associated with productivity, as some have claimed, then that relationship should be readily apparent in the data we have chosen.
The period between 1880 and 1920 saw tremendous growth in cities and wide variations in the labor productivity and public expenditure in areas across the United States.\(^4\) The United States became a world leader in manufacturing during this period of rapid industrialization and much of the industrialization was correlated with city growth. Some cities recorded rapid population growth rates (for example, Detroit grew at an average of 73 percent per decade from 1880 to 1920), while other cities had slower population growth rates (such as Albany’s average of 6 percent per decade).
Since manufacturing generated more than half of the total value of output in the United States by the late 19th century and was centered in the largest cities in the nation (Gallman 1960: 26), we focus our attention on whether public expenditure played any significant role in raising the labor productivity of manufacturing workers specifically. Labor productivity directly affected the profitability of manufacturing establishments and thus the overall economic growth of a city. Cities, in turn, were allocating large quantities of resources toward “public capital” such as roads, water supply systems, sanitation, education, and health. Furthermore, local governments were responsible for the bulk of government activity during this period, so focusing on city governments is most appropriate.\(^5\)
---
\(^3\)As Kendrick (1984: 389) documents, the productivity of labor over a similar period (1889–1919) was nearly double that of the previous four decades (1855–90). Total productivity was more than five times higher.
\(^4\)In an empirical study on states, Mitchener and McLean (2003: 34–35) document “massive and persistent differences in productivity levels, and hence living standards” across the 48 U.S. states (excluding Alaska and Hawaii) from 1880 to 1960.
\(^5\)In 1902, local governments accounted for 55 percent of all government revenue and 59 percent of all government outlays, compared to about 22 percent and 25 percent today (Menes 1999).
Some argue that public expenditure, particularly in education and health, increases human capital and thus raises labor productivity in cities that invested heavily in such areas (Glaeser, Scheinkman, and Shleifer 1995). However, not every type of public expenditure will raise labor productivity. Some types of public expenditure, such as spending on the maintenance of public buildings and the salaries of city employees in the legislative and judicial branches of government, will not raise labor productivity in manufacturing. For that reason, we focus on productive public expenditure.
Economic theory posits that productive public capital—such as roads, water supply systems, sewers, education, and health—lowers the cost of doing business and raises the marginal product of other forms of capital. As a result we should see businesses flourishing in cities that invested heavily in infrastructure. For example, public expenditure on roads, bridges, highways, and waterways lowers the cost of transportation and facilitates the movement of goods and labor throughout the United States.\footnote{Moomaw and Williams (1991) found some evidence of a positive relationship between highway infrastructure and productivity in the 48 contiguous states. However, Jiwattanakulpaisarn et al. (2009) found no statistically significant relationship between investments in highways and employment in 100 counties in North Carolina.} Public expenditure on education, health, sanitation, and water supply systems may increase human capital accumulation by making the labor force (or the future labor force, in the case of school children) more literate and healthier.\footnote{Menes (1999: 2) states, “The roads, sewers, schools, transportation, electricity, gas and water provided by local governments or by government franchisees were vital to the health, wealth, and happiness of residents.”}
We can model the growth of a city using the augmented Solow growth model, assuming that the city is a small economy. This model suggests that physical and human capital accumulation should go a long way in explaining the differential income levels of cities. According to Barro (1997: 2) in his cross-country study of economic growth and convergence, “The concept of capital in the neoclassical model can be usefully broadened from physical goods to include human capital in the forms of education, experience, and health.” The effects of physical capital accumulation (Romer 1986) and
human capital accumulation (Lucas 1988) on economic growth are modeled and documented in many studies, such as Barro’s (1997) cross-country empirical study and Barro and Sala-i-Martin’s (1991) study of income convergence in U.S. states.\(^8\) Stansel (2005 and 2009) found similar results for the relationship between human capital and the growth of population and employment in U.S. metropolitan areas.
Holtz-Eakin and Schwartz (1995) and Rauch (1994, 1995) provide formal theoretical models of the relationship between public expenditure and productivity at the sub-national level that are closely related to the subject of this article. Those models come to opposite conclusions about that relationship. Holtz-Eakin and Schwartz’s (1995) article develops a neoclassical growth model explicitly incorporating infrastructure investments and providing a tractable framework to empirically analyze the significance of public capital accumulation to productivity growth. Examining a panel of state data, they find no statistically significant relationship between public sector capital and the growth of productivity. Their results suggest that higher infrastructure outlays were not associated with a significant increase in productivity growth in U.S. states between 1971 and 1986.
Rauch (1994) develops a formal model to study the effect of municipal reform in the Progressive Era (from 1902 to 1931) on city governments’ allocation of public expenditure and on city growth, using the rates of growth in manufacturing employment and value-added output as measures for city growth. He finds that city governments’ expenditure on roads, sanitation, and the water supply system are statistically significant in explaining manufacturing employment growth in both panel and cross-sectional analyses. However, expenditures on roads and sanitation are not statistically significant in explaining growth in manufacturing’s value-added output in the panel regression.\(^9\)
---
\(^8\)Black and Henderson (1999) provide a formal model of human capital accumulation and urban growth.
\(^9\)Rauch (1995: 969) states, “Investment in new infrastructure is assumed to generate city growth by providing a complementary input that attracts investment of private capital in traded goods industries (manufacturing), creating jobs which in turn attract migrants from a surrounding agricultural hinterland.”
Our dependent variable is based on the same data as those used by Rauch, but we use the *level* (rather than the growth) of the log of real dollar value added by manufactures *per worker* (rather than the total), that is, we use productivity not overall output growth. Our analysis differs from Rauch’s in four important ways. First, we explicitly model labor *productivity* (not output growth) as a function of productive public expenditure. Second, we include education and health spending in our public expenditure measure so that it will capture those additional potential benefits to human capital and thereby productivity. Third, we examine an earlier time period, 1880–1920 (one that avoids the potentially contaminating effects of the Great Depression and that includes the last two decades of the 19th century, which saw large public investments in basic infrastructure). And, fourth, we do not examine the impact of municipal reform on cities’ growth, which is the main emphasis of Rauch’s analysis. We differ from Holtz-Eakin and Schwartz (1995) in that we examine public expenditure rather than the public capital stock, we examine cities rather than states, and we examine the period 1880–1920 instead of their more recent time period.
We build on the previous literature in this area by providing the first investigation of the effect of productive public expenditures on labor productivity in municipalities, which has important implications for contemporary public policy issues. In recent years, there have been efforts in the United States and elsewhere to improve economic conditions by substantially increasing government spending on infrastructure projects at the state and local level. Proponents of such efforts have argued that those projects will increase productivity. Our focus on a period of high public expenditure on physical infrastructure and rapid industrialization and growth provides an ideal setting for finding evidence in support of the hypothesis that public expenditure is productive.
**The Theoretical Framework**
We follow the lead of previous studies (e.g., Aschauer 1989, Holtz-Eakin 1994, and Morrison and Schwartz 1992) and specify an aggregate Cobb-Douglas production function for the manufacturing sector in city $j$, for year $t$, which takes the form:
\[(1) \quad Y_{j,t} = A_{j,t} K_{j,t}^\alpha L_{j,t}^\beta,\]
where \( j \) indexes the city and \( t \) indexes the year. \( Y_{j,t} \) is value added to manufacturing output, \( K_{j,t} \) is the value of private capital, \( L_{j,t} \) is the number of workers, and \( A_{j,t} \) is the measure of total factor productivity. We assume that \( \alpha + \beta = 1 \), implying constant returns to scale; since \( \beta - 1 = -\alpha \), dividing equation (1) by \( L_{j,t} \) yields the following equation denominated in per worker units:
\[
(2) \quad Y_{j,t}/L_{j,t} = A_{j,t} \left[ K_{j,t}/L_{j,t} \right]^{\alpha},
\]
where \( Y_{j,t}/L_{j,t} \) is value added per worker and \( K_{j,t}/L_{j,t} \) is the ratio of private capital to labor. Taking the natural logarithm of the above equation yields the following:
\[
(3) \quad \ln[Y_{j,t}/L_{j,t}] = \ln[A_{j,t}] + \alpha \ln[K_{j,t}/L_{j,t}].
\]
Since local estimates of the value of the public sector capital stock were not available for this time period, we follow Rauch (1994, 1995) in using productive public expenditure. To investigate the effect of cities’ public expenditure on certain public goods like roads, water supply systems, sanitation, education, and health, we exclude all other public expenditure, and we let total factor productivity depend solely on productive public expenditure as follows:
\[
(4) \quad \ln[A_{j,t}] = A_t + \gamma \text{PUBLIC}_{j,t} + c_j,
\]
where \( A_t \) is the time effect common to all cities for a given year, \( c_j \) is the city-specific effect, and \( \text{PUBLIC}_{j,t} \) is the productive public expenditure per capita in city \( j \) in year \( t \). Note that we are not modeling the effect of public expenditure on private capital accumulation. The omission of this interaction between public expenditure and private capital accumulation simplifies the analysis and allows us to focus on analyzing the levels of public expenditure and labor productivity for a given level of private capital.
In equation (4) the variable \( A_t \) can be interpreted as the technology that is available to all cities in year \( t \). These time effects can be consistently estimated with the use of dummy variables for each year in our sample (YEAR). These year dummy variables are important because we know that the technology available in the manufacturing sector changed significantly during the period 1880 to 1920. In addition, cities chose different levels of public expenditure,
PUBLIC\textsubscript{j,t}, which will affect their level of technical efficiency.\textsuperscript{10} We expect to see lower labor productivity in cities that were slower in installing clean water supply systems and sewers or cities that invested lower expenditure per capita in the prevention and treatment of communicable diseases, medical work for school children, and food regulation and inspection because the residents of such cities may be less healthy, more prone to diseases, and less productive than their counterparts in cities that invested early in these public works.
Similar to the time effects, the city-specific effects c\textsubscript{j} can be consistently estimated using dummy variables for each city (CITY) in the sample and omitting one city’s dummy variable. These city dummy variables control for unobservable city-specific factors that do not change over time but could affect the level of technology in a specific city. These unobserved city-specific factors could also be correlated with the explanatory variables and in any given city may affect labor productivity in an unobservable way. Some examples of unobservable city effects are agglomeration externalities and knowledge spillovers for firms located in close proximity to one another within a city and the level of entrepreneurship in a city. Another unobservable city-specific effect could be corruption in the city governments and the presence of patronage politics that affects the level of public expenditure. Menes (1999) finds that patronage politics in a city results in higher than optimal provision of public goods and higher wages paid to city employees.
Finally, substituting equation (4) into equation (3) yields the following:
\begin{equation}
(5) \quad \ln[Y_{j,t}/L_{j,t}] = A_t + \gamma \text{PUBLIC}_{j,t} + \alpha \ln[K_{j,t}/L_{j,t}] + c_j + e_{j,t},
\end{equation}
where e\textsubscript{j,t} is a random error term. Equation (5) is a standard two-way fixed effects model with both city and year dummies. We rename ln[Y\textsubscript{j,t}/L\textsubscript{j,t}] as LNVALUE, A\textsubscript{t} as the dummy variable YEAR,
\textsuperscript{10}For simplicity, following Glaeser et al. (1995) and others, we ignore the impact of the revenue source required to finance this higher spending (taxes or bonds). Since higher taxes would tend to reduce productivity, this implicitly biases upward our coefficients on public expenditure.
$\ln[K_{j,t}/L_{j,t}]$ as LNCAPITAL, and $c_j$ as the dummy variable CITY to yield:
\begin{align*}
(6) \quad & \text{LNVALUE}_{j,t} = \sum_t \text{YEAR}_t + \gamma \text{PUBLIC}_{j,t} \\
& + \alpha \text{LNCAPITAL}_{j,t} + \sum_j \text{CITY}_j + e_{j,t},
\end{align*}
for $t = 1880, 1890, 1900, 1910, 1920$
for $j = 45$ cities listed as Albany . . . Worcester.
Consequently, equation (6) gives us a theoretical framework in which to estimate how much, if at all, cities' public expenditure affected labor productivity in the manufacturing sector. The log-linear specification arises because we have assumed an aggregate Cobb-Douglas production function for the manufacturing sector and we take the natural logarithm of the production function in order to obtain a linear equation, which we can then estimate using two-stage least squares (2SLS) regression. The slope coefficient $\gamma$ on PUBLIC, multiplied by 100, gives the percentage change in labor productivity associated with a one-dollar change in the level of public expenditure.
One potential problem with this estimation is the endogeneity of the explanatory variable PUBLIC; that is to say, public expenditure may be influenced by the same unobservable factors that influence labor productivity (i.e., the endogenous variable PUBLIC is correlated with the model's error term), thus rendering ordinary least squares estimates biased and inconsistent (Bound, Jaeger, and Baker 1995: 443). For example, the demographics of the population or the preferences of the city governments can endogenously affect public expenditure. Taxation can also affect both public expenditure and labor productivity through its impact on private capital accumulation (Rauch 1995: 968–69). There may be other omitted variables that also determine labor productivity and may influence public expenditure.
Another problem is that current expenditures (our independent variable of interest) depend on per capita income because cities with
\footnote{Our sample consists of 45 of the largest U.S. cities: Albany, Atlanta, Baltimore, Boston, Buffalo, Cambridge, Chicago, Cincinnati, Cleveland, Columbus, Dayton, Detroit, Fall River, Grand Rapids, Hartford, Indianapolis, Jersey City, Kansas City, Louisville, Lowell, Memphis, Milwaukee, Minneapolis, Nashville, New Haven, New Orleans, New York, Newark, Omaha, Paterson, Philadelphia, Pittsburgh, Providence, Reading, Richmond, Rochester, San Francisco, Scranton, St. Louis, St. Paul, Syracuse, Toledo, Trenton, Wilmington, and Worcester.}
higher levels of average income may raise more tax revenues and thus provide more public goods and this may result in reverse causality linking the dependent variable (LNVALUE) to the explanatory variable (PUBLIC).
We deal with this problem of endogeneity in three econometric specifications: first, we use an instrumental variable (IV) in a 2SLS estimation; second, we use the initial-year values for the public expenditures and all other covariates to explain the subsequent 10-year growth rate of labor productivity (the dependent variable); and third, we use the lagged public expenditures as an explanatory variable. In our first method, we use ethnic fragmentation (ETHNIC) as an instrument for public expenditure because there is existing literature that suggests that ethnic fragmentation within a city makes it difficult for a city to agree on public spending due to the heterogeneous preferences of different ethnic groups over the types of public goods to produce with tax revenues. Thus, certain public goods such as education, roads, and sewers supplied by U.S. cities are inversely related to ethnic fragmentation in those cities (Alesina, Baqir, and Easterly 1999: 1243). The key finding in Alesina et al. (1999: 1274) is that ethnically fragmented cities devote lower shares of spending to core public goods like education and roads.
For our instrumental variable, we use Alesina et al.’s (1999: 1254–55) index of ethnic fragmentation (ETHNIC), which “measures the probability that two randomly drawn people from a city . . . belong to different ethnic groups.” Thus, our measure of ethnic fragmentation is as follows:
\[
(7) \quad ETHNIC = 1 - \sum_i (Race_i)^2,
\]
where \(Race_i\) indicates the proportion of the population listed by the Census as race \(i\) and \(i = \{\text{Native White, Foreign White, African-American, and Other (includes Chinese, Japanese, and American Indians)}\}\). ETHNIC is a probability that ranges from 0 (if perfect homogeneity or only a single race lives in a city) to a maximum of 0.75 (if perfectly fragmented into four equally sized racial groups). Our measure of ethnic fragmentation (ETHNIC) differs slightly from Alesina et al.’s because we use only four racial groups compared to their five racial groups, because we grouped Chinese, Japanese, and American Indians as “Other.” The modern Census classification for “Asian and Pacific Islander” and “Other” (which largely identifies the Hispanic population in the United States) is unavailable during
the period from 1880 to 1920. The main difference in our ETHNIC index from Alesina et al.’s is that we separated the White classification into Native White and Foreign White.
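To make the calculation in equation (7) concrete, the sketch below computes the index from the four Census shares; the city shares in the last example are hypothetical, not actual Census figures.

```python
# Fragmentation index of equation (7): ETHNIC = 1 - sum_i (Race_i)^2,
# where Race_i is the population share of racial group i.
def ethnic_index(shares):
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to one"
    return 1.0 - sum(s ** 2 for s in shares)

# Perfect homogeneity (a single group) gives the minimum of 0.
print(ethnic_index([1.0, 0.0, 0.0, 0.0]))      # 0.0
# Four equally sized groups give the maximum of 0.75.
print(ethnic_index([0.25, 0.25, 0.25, 0.25]))  # 0.75
# A hypothetical city: 60% native white, 25% foreign white,
# 14% African-American, 1% other.
print(ethnic_index([0.60, 0.25, 0.14, 0.01]))  # ~0.56
```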
In the 2SLS IV estimation, we use the fitted values (PUBLIC_HAT) from the first-stage regression of PUBLIC on ETHNIC in the second-stage regression of LNVALUE on PUBLIC_HAT. The idea here is that the instrumented estimate (PUBLIC_HAT) delivers exogenous variation in the explanatory variable and allows for a clean identification of the effect of PUBLIC on LNVALUE.\(^{12}\) Good instruments should be correlated with the endogenous variable but should be exogenous or excludable from the second stage of the 2SLS regression so as not to influence the outcome directly. The evidence in Alesina et al. (1999) supports our choice of ethnic fragmentation as an instrumental variable for productive public expenditure.
The 2SLS regression is specified as follows. In the first stage, we estimate PUBLIC_HAT\(_{j,t}\):
\[
(8) \quad \text{PUBLIC}_{j,t} = \alpha + \beta_1 \text{ETHNIC}_{j,t} \\
+ \beta_2 \text{LNCAPITAL}_{j,t} + \beta_3 \text{LNPOP}_{j,t} + \beta_4 \text{LNLAND}_{j,t} \\
+ \beta_5 \text{LNWAGE}_{j,t} + \beta_6 \text{YEAR}_t + \beta_7 \text{CITY}_j.
\]
In the second stage, we use the instrumented estimate PUBLIC_HAT\(_{j,t}\) as a regressor:
\[
(9) \quad \text{LNVALUE}_{j,t} = \alpha + \beta_1 \text{PUBLIC\_HAT}_{j,t} \\
+ \beta_2 \text{LNCAPITAL}_{j,t} + \beta_3 \text{LNPOP}_{j,t} + \beta_4 \text{LNLAND}_{j,t} \\
+ \beta_5 \text{LNWAGE}_{j,t} + \beta_6 \text{YEAR}_t + \beta_7 \text{CITY}_j.
\]
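To trace the mechanics of equations (8) and (9), the following sketch runs the two stages on synthetic data. All values and the data-generating process are invented for illustration, and the year and city dummies are omitted for brevity; note also that a manual second stage recovers the 2SLS point estimates, but its naive standard errors would need the usual IV correction.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 219  # matches the number of observations in the panel

# Synthetic stand-ins for the paper's variables (illustration only).
df = pd.DataFrame({
    "ETHNIC":    rng.uniform(0.15, 0.57, n),
    "LNCAPITAL": rng.normal(7.5, 0.5, n),
    "LNPOP":     rng.normal(12.1, 1.0, n),
    "LNLAND":    rng.normal(9.7, 0.9, n),
    "LNWAGE":    rng.normal(6.1, 0.2, n),
})
u = rng.normal(0, 1, n)  # unobservable that makes PUBLIC endogenous
df["PUBLIC"] = 8 - 6 * df["ETHNIC"] + u + rng.normal(0, 1, n)
df["LNVALUE"] = 3.9 + 0.4 * df["LNCAPITAL"] + 0.5 * u + rng.normal(0, 0.2, n)

controls = ["LNCAPITAL", "LNPOP", "LNLAND", "LNWAGE"]

# First stage, equation (8): PUBLIC on the instrument plus controls.
X1 = sm.add_constant(df[["ETHNIC"] + controls])
first = sm.OLS(df["PUBLIC"], X1).fit()
df["PUBLIC_HAT"] = first.fittedvalues

# Second stage, equation (9): LNVALUE on the instrumented PUBLIC_HAT.
X2 = sm.add_constant(df[["PUBLIC_HAT"] + controls])
second = sm.OLS(df["LNVALUE"], X2).fit()
print(second.params[["PUBLIC_HAT", "LNCAPITAL"]])
```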
In our estimates of equations (8), (9), (10), and (11), we included other covariates, namely the natural log of the city's population, the natural log of the city's size (in acres), and the natural log of the real wage, to control for the effects of these covariates on public expenditures per capita and on the natural log of value added per worker. We control for cities' population (LNPOP) because presumably cities with higher populations would provide more public goods. We also control for land area in acres (LNLAND) to account for boundary changes like the annexation of Brooklyn and Allegheny, respectively, by New York City in 1898 and Pittsburgh in 1907. Ideally we should be able to control for the level of income per capita by city (as this
\(^{12}\)See Bound, Jaeger, and Baker (1995) and Angrist and Krueger (2001) for an introduction to IV estimation.
will affect the level of public expenditure because cities with a wealthier tax base may be able to provide more public goods), but these data do not exist in the time period under study. A good proxy for personal income per capita by city is the average real manufacturing wage obtained from the *Census of Manufactures* by city and deflated by the national CPI into real 1890 constant dollars. Since wages are usually assumed to follow a log-normal distribution, we include the log of cities’ real average manufacturing wages (LNWAGE) as a control in our regression analysis. We also include city-specific (CITY) and time-specific fixed effects (YEAR).
Our second method to address the endogeneity issue is to use initial-year values for the public expenditure data (and all other explanatory variables) and subsequent growth (following the initial year) for the dependent variable.\(^{13}\) Higher productivity growth from 1880 to 1890, for example, cannot have an impact on the level of public expenditure in 1880. So we examine in a panel regression the relationship between public expenditure in the first year of each decade (1880, 1890, 1900, and 1910) and the growth of labor productivity over the subsequent decade (1880–90, 1890–1900, 1900–10, and 1910–20), summarized in equation (10).
\[
(10) \quad \text{LNVALUE}_{j,t+10} - \text{LNVALUE}_{j,t} = \sum_t \text{YEAR}_t \\
+ \gamma \text{PUBLIC}_{j,t} + \alpha_1 \text{LNCAPITAL}_{j,t} + \alpha_2 \text{LNPOP}_{j,t} \\
+ \alpha_3 \text{LNLAND}_{j,t} + \alpha_4 \text{LNWAGE}_{j,t} + \sum_j \text{CITY}_j + e_{j,t},
\]
for \( t = 1880, 1890, 1900, 1910 \)
for \( j = 45 \) cities listed as Albany . . . Worcester.
Finally, our third method to address the potentially endogenous relationship between public expenditure and productivity is to use lagged values for public expenditure. Productivity in 1890, for example, cannot have an impact on public expenditure in 1880. Due to limited availability of data, we must employ a 10-year lag. However, that may be unrealistically long. Public expenditures in the current year certainly have a bigger impact on productivity in future years than in the current year, but the ideal lag may be less than 10 years. Equation (11) provides the precise specification.
\(^{13}\)There is precedent for this in the literature. For example, Barro (1991) and Glaeser et al. (1995) both use 1960 government expenditure data as an explanatory variable for 1960–90 economic growth.
### Table 1
**Descriptive Statistics**
| Variable | N | Mean | Std. Dev. | Min | Max |
|---------------------------|-----|------|-----------|------|------|
| Ln Value Added | 219 | 7.05 | 0.35 | 6.07 | 8.01 |
| Public Expenditure | 219 | 5.61 | 2.55 | 0.28 | 15.39|
| Ethnic Fragmentation | 219 | 0.43 | 0.07 | 0.15 | 0.57 |
| Ln Private Capital | 219 | 7.50 | 0.48 | 6.12 | 8.61 |
| Ln City Population | 219 | 12.13| 0.96 | 10.53| 15.54|
| Ln City Size (acres) | 219 | 9.67 | 0.88 | 8.05 | 12.25|
| Ln Real Wage | 219 | 6.14 | 0.21 | 5.29 | 6.64 |
**Note:** Year and city dummies are excluded from this table.
\[(11) \quad \text{LNVALUE}_{j,t} = \sum_t \text{YEAR}_t + \gamma \text{PUBLIC}_{j,t-10} \\
+ \alpha_1 \text{LNCAPITAL}_{j,t} + \alpha_2 \text{LNPOP}_{j,t} + \alpha_3 \text{LNLAND}_{j,t} \\
+ \alpha_4 \text{LNWAGE}_{j,t} + \sum_j \text{CITY}_j + e_{j,t},\]
for \( t = 1890, 1900, 1910, 1920 \)
for \( j = 45 \) cities listed as Albany . . . Worcester.
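With the panel laid out as city-by-census-year, the transformed variables in equations (10) and (11) are simple within-city shifts. A sketch with toy values (the column names and numbers are assumed for illustration):

```python
import pandas as pd

# Toy city-by-decade panel; the numbers are illustrative only.
panel = pd.DataFrame({
    "city":    ["Albany"] * 5 + ["Boston"] * 5,
    "year":    [1880, 1890, 1900, 1910, 1920] * 2,
    "LNVALUE": [6.8, 6.9, 7.0, 7.1, 7.3, 7.0, 7.1, 7.3, 7.4, 7.6],
    "PUBLIC":  [3.1, 3.6, 4.2, 5.0, 5.5, 4.0, 4.8, 5.9, 6.4, 7.0],
}).sort_values(["city", "year"])

g = panel.groupby("city")
# Equation (10): subsequent-decade growth of log value added per worker,
# to be regressed on the initial-year levels of PUBLIC and the controls.
panel["DLNVALUE"] = g["LNVALUE"].shift(-1) - panel["LNVALUE"]
# Equation (11): one-census-period (10-year) lag of public expenditure.
panel["PUBLIC_LAG10"] = g["PUBLIC"].shift(1)
print(panel)
```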
### Data Sources and Construction of Variables
We assemble a panel dataset for 45 of the largest cities in the United States that spans 40 years by hand-collecting data from the decennial censuses of 1880 to 1920, with 219 observations for cities’ value added by manufacture, value of private capital in manufacturing, public expenditure, ethnic fragmentation, average number of wage earners, city size in acres, and city population.\(^{14}\) See Table 1 for the summary statistics of the variables used in the regression analysis and Table 2 for the correlation matrix. We restrict our attention to
\(^{14}\)We are missing observations for Grand Rapids, Memphis, Omaha, and Trenton in 1880. We also dropped two observations that were substantial outliers (Wilmington in 1880 and Reading in 1920, which were not representative of the usual trend of annual public capital spending) and thus would distort the coefficient estimates. In 1880, Wilmington spent 581 percent and 176 percent more on roads and education, respectively, in real terms compared with the average of 1890 to 1920. Similarly, in 1920 Reading spent 64 percent more on sewers, in real terms compared with the average between 1880 and 1910. Reading’s spending on education cannot be consistently compared due to accounting irregularities (zero dollars recorded for 1880 and 1890 and $4,500 current dollars in 1910).
### Table 2
**Correlation Matrix**

| | Ln Value Added | Public Expenditure | Ethnic Fragmentation | Ln Private Capital | Ln City Population | Ln City Size (acres) | Ln Real Wage |
|------------------|----------------|--------------------|----------------------|--------------------|--------------------|----------------------|--------------|
| Ln Value Added | 1 | | | | | | |
| Public Expenditure | 0.369 | 1 | | | | | |
| Ethnic Fragmentation | -0.158 | 0.018 | 1 | | | | |
| Ln Private Capital | 0.850 | 0.389 | -0.139 | 1 | | | |
| Ln City Population | 0.524 | 0.300 | 0.137 | 0.422 | 1 | | |
| Ln City Size (acres) | 0.297 | 0.194 | 0.200 | 0.226 | 0.762 | 1 | |
| Ln Real Wage | 0.814 | 0.382 | -0.178 | 0.729 | 0.453 | 0.283 | 1 |
the 1880–1920 period for two reasons. First, this was a period of rapid industrialization in which there were both high levels of public infrastructure spending and rapid growth of productivity. Second, comparable data were not available for the years before 1880,\textsuperscript{15} and incorporating additional data beyond 1920 would include data contaminated by the effects of the Great Depression.
Data on the value added by manufacture are published every five years in the \textit{Census of Manufactures} and are available by city and by industry. The value added per worker is the difference between the value of total output and the cost of raw materials, divided by the average number of wage earners employed during the year. These wage earners, up through the working foreman level, are typically production workers, although there is no way to distinguish them from nonproduction wage earners. The figures for value added per worker by city are deflated by the national Consumer Price Index (CPI) and converted into the natural log of value added per worker in 1890 constant dollars (LNVALUE).\textsuperscript{16} Similar to other Census data, these value-added data are subject to some reporting error, but the method of enumeration is consistent throughout the time period of study. The source for these value-added data is the \textit{Census of Manufactures} for the years 1879, 1889, 1899, 1909, and 1919, but these data are also published in the decennial censuses. Cities' value added by manufacture for the first three census years (1880, 1890, and 1900) also included “hand and neighborhood industries,” but the latter two years (1910 and 1920) measure only “factory” establishments.
In other studies, such as Mitchener and McLean (1999 and 2003), labor productivity is measured by the average manufacturing wage. However, a problem with using wages as a measure of labor productivity arises if public expenditure is regarded as an amenity by city residents: in that case, we can expect to see public expenditure partly capitalized into wages (but this may also reflect higher taxes within
\textsuperscript{15}See U.S. Department of Commerce (2008) for a discussion of the history of the Census Bureau’s collection of data on U.S. governments.
\textsuperscript{16}The national CPI—constructed by the Bureau of Labor Statistics—is available in the \textit{Historical Statistics of the United States, Colonial Times to 1970} (Series E135–166). We use a national CPI to make price adjustments because during 1880–1920, the commodities and labor markets in the United States were pretty well integrated due to the completion of the transcontinental railway (Walton and Rockoff 2002: 363).
a city). The correlation will be negative because people will desire to live in a city with lots of public goods, thereby bidding the wages down. This capitalization of public expenditure into wages is the reason we are using value added per worker in manufacturing as the measure of labor productivity.
Changes in the level of value added by manufacture, or labor productivity (LNVALUE), may be related to public expenditure by city governments. In this article, we use the public capital definition set out by Corsetti and Roubini (1996: 2) as “government spending [that] affects the productivity of the final goods sector or the human capital accumulation sector.” Data on city governments' spending were collected from the U.S. Census volumes on *Valuation, Taxation and Public Indebtedness* (1880) and *Wealth, Debt and Taxation* (1890) and from volumes of *Statistics of Cities* and *Financial Statistics of Cities* for all cities with populations of 30,000 or more (for 1904) and 100,000 or more (for 1909 and 1919). Similar to Rauch (1995), our measure of public expenditure per capita (PUBLIC) is the per capita sum of city governments' spending on roads, sanitation, and water supply systems, but we also include education and health spending. We think it is important to consider these education and health expenditures, which are not included in Rauch (1995), because they are components of public expenditure that may increase human capital accumulation. The expenditures on sewers and water supply systems have major health implications and thus would also have an effect on human capital accumulation.
Different types of public expenditure surely affect city residents’ health, education, and labor productivity in different ways, but we are unable to properly account for these different effects. Using separate variables for each individual category of spending is problematic because of inconsistencies in reporting by local governments. For example, some cities’ irregular accounting methods recorded zero expenditure on water supply systems and schools for some years (the water works may be contracted out to a private operator and the school expenditure could be listed under general expenses instead of under education). Because of those inconsistencies, we take a simple sum of cities’ public expenditure on the five components (roads, water supply systems, sanitation, education, and health) and divide by the population of each city to obtain public expenditure per capita (PUBLIC). We use city population as the denominator because these public goods are largely nonexcludable and used by the entire
population in each city. PUBLIC is also deflated by the national CPI and converted into real public expenditure in 1890 constant dollars. We believe the use of per capita expenditures is superior to the percentage of total expenditure measure that Rauch uses because it more accurately reflects the quantity of resources being devoted to public capital. An even better measure would be expenditures as a percentage of personal income, but the personal income data are not available by city for this time period.
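A minimal sketch of the construction of PUBLIC just described, assuming hypothetical category columns and placeholder CPI values (the actual figures come from the census volumes cited above):

```python
import pandas as pd

# Illustrative rows only; the expenditure and CPI values are placeholders.
city_df = pd.DataFrame({
    "city": ["Albany", "Boston"], "year": [1900, 1900],
    "roads": [120_000, 800_000], "water": [90_000, 600_000],
    "sanitation": [50_000, 400_000], "education": [200_000, 1_500_000],
    "health": [30_000, 250_000], "population": [94_000, 561_000],
})
cpi = {1890: 27.0, 1900: 25.0}  # placeholder national CPI levels

categories = ["roads", "water", "sanitation", "education", "health"]
per_capita = city_df[categories].sum(axis=1) / city_df["population"]
# Express per capita spending in 1890 constant dollars via the national CPI.
city_df["PUBLIC"] = per_capita * cpi[1890] / city_df["year"].map(cpi)
print(city_df[["city", "PUBLIC"]])
```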
The *Census of Manufactures* also reports the aggregate value of private capital stock, in current dollars, by city and by industry. These capital figures “show the total amount of capital, both owned and borrowed, on the last day of the business year reported.”\(^{17}\) There may be some measurement error in this variable because of the ambiguity of the census questions and the general difficulties of accounting for the value of capital goods. Private capital is a necessary variable to include in our regression analyses because it is an essential input in the manufacturing sector.
In order to construct the index of ethnic fragmentation (ETHNIC), we collected data from the U.S. Census volume on *Census of the Population* for the number of people within each city who are Native White, Foreign White, African-American, and Other (includes Chinese, Japanese, and American Indian) according to the racial classification used by the U.S. Census. Recall that the index is defined as follows:
\[ \text{ETHNIC} = 1 - \sum_i (\text{Race}_i)^2, \]
where \( \text{Race}_i \) indicates the proportion of the population listed by the Census as race \( i \), with \( i \in \{\text{Native White, Foreign White, African-American, Other}\} \) (Other includes Chinese, Japanese, and American Indians). In our data set, the ETHNIC index ranges from a minimum of 0.15 in Reading, Pennsylvania, in 1880, to a maximum of 0.57 in Memphis in 1890. Southern cities such as New Orleans, Memphis, Richmond, and Atlanta show high levels of ethnic fragmentation because of their larger proportions of African-American residents. These Southern cities record ethnic fragmentation index numbers between 0.42 and 0.57, whereas cities in the Northeast (except Reading) record index numbers between 0.28 and 0.50; the Northeastern cities are fragmented between Native Whites and Foreign Whites.

\(^{17}\)The quote is excerpted from the “Explanation of Terms” in the *Census of Manufactures* (1920: 15–18).
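To make the index concrete, the following minimal sketch (ours, not the authors’ code) computes ETHNIC from hypothetical census counts for a single city; the specific counts are illustrative only.

```python
# A minimal sketch of the ETHNIC fragmentation index:
# ETHNIC = 1 - sum_i (Race_i)^2, where Race_i is a population share.

def ethnic_index(counts):
    """Compute the fragmentation index from raw population counts."""
    total = sum(counts.values())
    return 1.0 - sum((n / total) ** 2 for n in counts.values())

# Hypothetical counts for a single city (illustrative only, not census data).
city_counts = {
    "Native White": 28000,
    "Foreign White": 6000,
    "African-American": 29000,
    "Other": 300,
}
print(round(ethnic_index(city_counts), 2))  # higher values = more fragmented
```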
**Empirical Results**
We use both panel and cross-sectional analyses to investigate the relationship between cities’ productive public expenditure and labor productivity in the manufacturing sector. Table 3 provides the 2SLS estimates of equations (8) and (9) with robust standard errors. As panel A indicates for the first stage, in columns (3)–(6) (that is, in all the regressions that include the control variables), ethnic fragmentation has the expected negative sign and is statistically significant in explaining differences in public expenditure. This provides evidence that ethnic fragmentation is indeed a good instrument for public expenditure. It is also economically significant: a 0.10 increase in the ethnic fragmentation index results in a $2.15 decrease (in column (6) specifically) in real public expenditure per capita, which is about 38 percent of the mean public expenditure per capita. Panel B illustrates that the instrumented estimate PUBLIC_HAT has no statistically significant relationship with labor productivity when private capital and the other control variables are included (columns (10)–(12)).\(^{18}\) However, private capital (LNCAPITAL) is highly significant, both economically and statistically: a 1 percent increase in LNCAPITAL is associated with a 0.408 percent increase in labor productivity. The YEAR dummies are all negative relative to the excluded 1920 dummy and are mostly statistically significant at the 1 percent level, reflecting the crucial role the technology available in later years plays in raising labor productivity.
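The two-stage procedure can be sketched as follows. This is a minimal illustration on synthetic data, not the article’s estimation code; the coefficients and variable stand-ins are arbitrary, and it manually computes PUBLIC_HAT in the first stage before using it in the second.

```python
# Minimal manual 2SLS sketch with synthetic data (not the article's data).
# Stage 1: regress PUBLIC on the instrument ETHNIC plus controls.
# Stage 2: regress LNVALUE on the fitted PUBLIC_HAT plus the same controls.
import numpy as np

rng = np.random.default_rng(0)
n = 174                              # observations, as in Tables 4 and 5
ethnic = rng.uniform(0.15, 0.57, n)  # instrument (fragmentation index)
controls = rng.normal(size=(n, 2))   # stand-ins for LNCAPITAL, LNPOP, etc.
public = 8 - 21 * ethnic + controls @ [1.0, 0.5] + rng.normal(size=n)
lnvalue = 7 + 0.0 * public + controls @ [0.4, 0.1] + rng.normal(size=n)
# (True effect of PUBLIC set to zero here, mirroring the article's finding.)

def ols(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

X1 = np.column_stack([np.ones(n), ethnic, controls])
public_hat = X1 @ ols(public, X1)    # first-stage fitted values

X2 = np.column_stack([np.ones(n), public_hat, controls])
print("2SLS coefficient on PUBLIC_HAT:", ols(lnvalue, X2)[1])
```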
Our second approach to address the potentially endogenous relationship between public expenditure and productivity is to regress initial-year values of public expenditure and the other explanatory variables on the subsequent growth of the dependent variable, as proposed in equation (10). Those results are provided in Table 4.

\(^{18}\)We also estimated equation (5) by cross-sectional regressions for each year. The cross-sectional regression allows the constant and slope coefficients to change each year. The differing signs, coefficient magnitudes, and significance levels make it harder to formulate an overall interpretation for the entire time period under study, but we found that public expenditure per capita is not statistically significant with respect to labor productivity in any of the years studied. For brevity, those results are not included herein.
As column (3) indicates, we find no statistically significant relationship between productive public expenditure and the subsequent 10-year growth of labor productivity.\(^{19}\) The fact that we are measuring the growth of productivity rather than the level helps explain why we find a negative and statistically significant coefficient on private capital. According to economic theory, cities with higher levels of private capital will have higher productivity. So, cities with higher levels of capital in the initial year would be expected to already have higher levels of productivity. If they have reached the point of diminishing marginal returns to capital, we would indeed expect to see lower growth of productivity in the subsequent years, compared to productivity growth in cities with lower levels of capital.
Finally, our third method for addressing endogeneity is to use lagged values for the potentially endogenous explanatory variable, so we regress productivity on public expenditure from 10 years earlier as proposed in equation (11). None of the other explanatory variables are lagged. As Table 5 illustrates, once the other covariates are included, productive public expenditures have no statistically significant relationship with the productivity of manufacturing labor 10 years later. This result mirrors our instrumental variables results in Table 3. While we find a small statistically significant positive coefficient without control variables in column (1), as we add other covariates the coefficient on LAGGED PUBLIC loses statistical significance and becomes smaller and changes to a negative sign. As with both of our other sets of results, the most prominent finding is the strong statistical significance of private capital.
Our results from these three separate estimation techniques find no evidence of a statistically significant relationship between public expenditure and labor productivity in the manufacturing sector during the period 1880–1920. These results confirm the theoretical model and empirical findings of Holtz-Eakin and Schwartz (1995), who document an insignificant relationship between public expenditure and productivity growth in states from 1971 to 1986. Rauch (1994) examined manufacturing employment growth in cities rather than productivity, so our results are not directly comparable; however, our findings are somewhat at odds with his finding of a positive relationship with some categories of public expenditure for the period 1902–1931. Finally, our results are similar to the empirical findings of Glaeser et al. (1995): with the exception of sanitation spending, public expenditures had no relationship with the income and population growth rates of cities between 1960 and 1990. Our results also generally confirm their findings on the relationship between urban growth and racial composition and segregation. More recently, using a similar approach but examining metro areas instead of cities, Stansel (2009) also found no significant relationship between local government spending and economic growth over 1960–90. As discussed previously, the general consensus in this local growth literature is that government spending as a whole has no significant relationship with growth; however, a positive relationship is sometimes found for specific categories of spending.

\(^{19}\)Running the regressions as four separate cross-sections, instead of as a panel, yields similar results. All coefficients on public expenditure are statistically insignificant, as in Table 4. The one difference is that the coefficient in the regression for 1880–90 productivity growth has a positive sign. For brevity, those results are not included herein.
### Table 4
**Regression Results for Ten-Year Growth Rates in LNVALUE**
| | (1) | (2) | (3) |
|------------------|--------------|--------------|--------------|
| | Dependent variable: DLNVALUE | | |
| PUBLIC | $-0.00964^*$ | $-0.00768$ | $-0.0125$ |
| | (0.00582) | (0.00632) | (0.00911) |
| LNCAPITAL | $-0.120^{***}$ | $-0.187^{***}$ | $-0.265^{**}$ |
| | (0.0445) | (0.0658) | (0.125) |
| LNPOP | $0.0404^{**}$ | $0.0388^{**}$ | $0.218^{**}$ |
| | (0.0198) | (0.0190) | (0.0992) |
| LNLAND | $-0.0109$ | $-0.0204$ | $-0.0956$ |
| | (0.0214) | (0.0232) | (0.0726) |
| LNWAGE | $-0.192^{**}$ | $-0.0248$ | $-0.333$ |
| | (0.0964) | (0.101) | (0.247) |
| 1890 DUMMY | $-0.150^{***}$ | $-0.0988$ | |
| | (0.0388) | (0.0915) | |
| 1900 DUMMY | $-0.0934$ | $-0.0327$ | |
| | (0.0586) | (0.120) | |
| 1910 DUMMY | $0.0194$ | $0.0754$ | |
| | (0.0772) | (0.164) | |
| CONSTANT | $1.948^{***}$ | $1.571^{***}$ | $2.565$ |
| | (0.487) | (0.596) | (2.039) |
| CITY DUMMIES? | No | No | Yes |
| Observations | 174 | 174 | 174 |
| R-squared | 0.245 | 0.356 | 0.515 |
**Note:** Robust standard errors in parentheses.
$^{***} p<0.01$, $^{**} p<0.05$, $^* p<0.1$.
One possible explanation for the consistent finding of an insignificant relationship is that the potential benefit of local government spending may be outweighed by the cost of the taxes necessary to finance that spending. (Unlike the federal government, local governments generally lack the ability to run a budget deficit.) Taxes remove money from the private sector, in which the profit motive tends to ensure its efficient usage, and transfer it to the political sector, in which electoral motives play a strong role.

### Table 5

**Regression Results Using Lagged Values of PUBLIC**
| | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
|----------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| LAGGED PUBLIC | $0.0317^{***}$ | $0.003$ | $-0.00158$ | $-0.0017$ | $-0.00533$ | $-0.000637$ | $-0.00429$ |
| | (0.00578) | (0.00561) | (0.0052) | (0.00519) | (0.00479) | (0.00447) | (0.00434) |
| LNCAPITAL | | $0.585^{***}$ | $0.526^{***}$ | $0.524^{***}$ | $0.420^{***}$ | $0.394^{***}$ | $0.448^{***}$ |
| | | (0.0519) | (0.0556) | (0.0564) | (0.0596) | (0.0882) | (0.0882) |
| LNPOP | | | $0.0787^{***}$ | $0.102^{***}$ | $0.0748^{***}$ | $0.0645^{***}$ | $0.217^{***}$ |
| | | | (0.0149) | (0.0286) | (0.0223) | (0.0218) | (0.0743) |
| LNLAND | | | | $-0.0295$ | $-0.0269$ | $-0.022$ | $-0.000298$ |
| | | | | (0.0261) | (0.0235) | (0.0226) | (0.034) |
| LNWAGE | | | | | $0.645^{***}$ | $0.554^{***}$ | $0.221$ |
| | | | | | (0.131) | (0.126) | (0.16) |
| 1890 DUMMY | | | | | $-0.110^{**}$ | $-0.00901$ | |
| | | | | | (0.0537) | (0.0527) | |
| 1900 DUMMY | | | | | $-0.122^{***}$ | $-0.0736^{**}$ | |
| | | | | | (0.041) | (0.0356) | |
| 1910 DUMMY | | | | | $-0.162^{***}$ | $-0.136^{***}$ | |
| | | | | | (0.0325) | (0.0313) | |
| CONSTANT | $6.984^{***}$ | $2.655^{***}$ | $2.172^{***}$ | $2.180^{***}$ | $-0.69$ | | |
| | (0.0395) | (0.38) | (0.376) | (0.38) | (0.662) | | |
| CITY DUMMIES? | No | No | No | No | No | Yes | Yes |
| Observations | 174 | 174 | 174 | 174 | 174 | 174 | 174 |
| R-squared | 0.1044 | 0.569 | 0.627 | 0.63 | 0.704 | 0.749 | 0.911 |
**Note:** Robust standard errors in parentheses.
$^{***} p<0.01$, $^{**} p<0.05$, $^* p<0.1$.
As a result, when local government spending and taxes rise, there is a reduction in the efficiency of the usage of resources. All else equal, areas that utilize resources more efficiently will tend to have more prosperous economies. As Stansel (2011) illustrates for metropolitan areas and Poulson and Kaplan (2008) illustrate for states, higher tax burdens do tend to be associated with slower economic growth.\(^{20}\)
**Conclusion**
This article provides the first examination of the relationship between public expenditures and labor productivity that focuses on municipalities, rather than states or nations. Despite examining the issue in a context very conducive to finding a positive relationship between public expenditure and productivity (during a period of rapid expansion of both), this article finds no evidence of such a relationship. Once other factors are controlled for, higher levels of productive public expenditure by city governments have no statistically significant impact on labor productivity in the manufacturing sector for 1880–1920. These findings are robust to three different econometric approaches. They are consistent with the findings of much of the other literature in this area, and they have distinct implications for contemporary public policy issues.
There have been efforts in many countries in recent years to dramatically increase public spending as a way to improve economic conditions. In the United States, for example, federal government spending increased by more than $1 trillion between fiscal year 2007 (the peak of the previous expansion) and fiscal year 2012, an increase of 40 percent in just four years. Much of that new spending has focused on infrastructure projects at the state and local level, with the argument often being made that those projects will boost productivity. Our results, and the similar results of others, cast doubt on the ability of that fiscal expansion to achieve its intended effect. These results may be particularly relevant for rapidly growing middle-income countries.\(^{21}\) For local governments in particular, keeping their tax burden low—especially relative to their neighbors—may be a more effective strategy for economic revival.
\(^{20}\)Reed (2008) provides an excellent summary of the voluminous literature in this area, focusing on the states.
\(^{21}\)We are grateful to an anonymous referee for making this point.
References
Alesina, A.; Baqir, R.; and Easterly, W. (1999) “Public Goods and Ethnic Divisions.” *Quarterly Journal of Economics* 114 (4): 1243–84.
Angrist, J. D., and Krueger, A. B. (2001) “Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments.” *Journal of Economic Perspectives* 15 (4): 69–85.
Aschauer, D. A. (1989) “Is Public Expenditure Productive?” *Journal of Monetary Economics* 23 (2): 177–200.
Barro, R. J. (1991) “Economic Growth in a Cross Section of Countries.” *Quarterly Journal of Economics* 106 (2): 407–43.
——— (1997) *Determinants of Economic Growth: A Cross-Country Empirical Study*. Cambridge: MIT Press.
Barro, R. J., and Sala-i-Martin, X. (1991) “Convergence across States and Regions.” Yale Economic Growth Center Discussion Paper No. 629. New Haven, Conn.: Yale University.
Black, D., and Henderson, V. (1999) “The Theory of Urban Growth.” *Journal of Political Economy* 107 (2): 252–84.
Boarnet, M. (1998) “Spillovers and the Locational Effects of Public Infrastructure.” *Journal of Regional Science* 38 (3): 381–400.
Bound, J.; Jaeger, D. A.; and Baker, R. M. (1995) “Problems with Instrumental Variables Estimation When the Correlation between the Instruments and the Endogenous Explanatory Variable Is Weak.” *Journal of the American Statistical Association* 90 (430): 443–50.
Corsetti, G., and Roubini, N. (1996) “Optimal Government Spending and Taxation in Endogenous Growth Models.” NBER Working Paper No. 5851. Cambridge: National Bureau of Economic Research.
Crihfield, J. B., and Panggabean, M. P. H. (1995) “Growth and Convergence in U.S. Cities.” *Journal of Urban Economics* 38 (2): 138–65.
Dalenberg, D. R., and Partridge, M. D. (1995) “The Effects of Taxes, Expenditures, and Public Infrastructure on Metropolitan Area Employment.” *Journal of Regional Science* 35 (4): 617–40.
De Mello, L. R., Jr. (2002) “Public Finance, Government Spending and Economic Growth: The Case of Local Governments in Brazil.” *Applied Economics* 34 (15): 1871–83.
Denaux, Z. S. (2007) “Endogenous Growth, Taxes and Government Spending: Theory and Evidence.” *Review of Development Economics* 11 (1): 124–38.
Deno, K. T. (1988) “The Effect of Public Capital on U.S. Manufacturing Activity: 1970 to 1978.” *Southern Economic Journal* 55 (2): 400–11.
Duffy-Deno, K. T., and Eberts, R. W. (1991) “Public Infrastructure and Regional Economic Development: A Simultaneous Equations Approach.” *Journal of Urban Economics* 30 (3): 329–43.
Eberts, R. W. (1986) “Estimating the Contribution of Urban Public Infrastructure to Regional Growth.” Federal Reserve Bank of Cleveland Working Paper No. 8610.
——— (1990) “Public Infrastructure and Regional Economic Development.” Federal Reserve Bank of Cleveland *Economic Review* 26 (1): 15–27.
Gallman, R. E. (1960) “Commodity Output, 1839–1899.” In National Bureau of Economic Research, *Trends in the American Economy in the Nineteenth Century*, 13–72. Princeton: Princeton University Press.
Glaeser, E. L.; Scheinkman, J. A.; and Shleifer, A. (1995) “Economic Growth in a Cross-Section of Cities.” *Journal of Monetary Economics* 36 (1): 117–43.
Glaeser, E. L., and Shapiro, J. (2003) “Urban Growth in the 1990s: Is City Living Back?” *Journal of Regional Science* 43 (1): 139–65.
Gramlich, E. M. (1994) “Infrastructure Investment: A Review Essay.” *Journal of Economic Literature* 32 (3): 1177–96.
Holtz-Eakin, D. (1994) “Public Sector Capital and the Productivity Puzzle.” *Review of Economics and Statistics* 76 (1): 12–21.
Holtz-Eakin, D., and Schwartz, A. E. (1995) “Infrastructure in a Structural Model of Economic Growth.” *Regional Science and Urban Economics* 25 (2): 131–51.
Jiwattanakulpaisarn, P.; Noland, R. B.; Graham, D. J.; and Polak, J. W. (2009) “Highway Infrastructure Investment and County Employment Growth: A Dynamic Panel Regression Analysis.” *Journal of Regional Science* 49 (2): 263–86.
Kalyvitis, S., and Vella, E. (2011) “Public Capital Maintenance, Decentralization, and U.S. Productivity Growth.” *Public Finance Review* 39 (6): 784–809.
Kendrick, J. (1984) “U.S. Economic Policy and Productivity Growth.” *Cato Journal* 4 (2): 387–400.
Ligthart, J. E., and Martin Suarez, R. M. (2011) “The Productivity of Public Capital: A Meta-analysis.” In W. Jonkhoff and W. Manshanden (eds.) *Infrastructure Productivity Evaluation*, 5–32. New York: Springer.
Lucas, R. E., Jr. (1988) “On the Mechanics of Economic Development.” *Journal of Monetary Economics* 22 (1): 3–42.
Menes, R. (1999) “The Effect of Patronage Politics on City Government in American Cities, 1900–1910.” NBER Working Paper No. 6975. Cambridge: National Bureau of Economic Research.
Mitchener, K. J., and McLean, I. W. (1999) “U.S. Regional Growth and Convergence, 1880–1980.” *Journal of Economic History* 59 (4): 1016–42.
——— (2003) “The Productivity of U.S. States Since 1880.” NBER Working Paper No. 9445. Cambridge: National Bureau of Economic Research.
Moomaw, R. L., and Williams, M. (1991) “Total Factor Productivity Growth in Manufacturing: Further Evidence from the States.” *Journal of Regional Science* 31 (1): 17–34.
Morrison, C. J., and Schwartz, A. E. (1992) “State Infrastructure and Productive Performance.” NBER Working Paper No. 3981. Cambridge: National Bureau of Economic Research.
Munnell, A. H. (1990) “Why Has Productivity Declined? Productivity and Public Investment.” Federal Reserve Bank of Boston, *New England Economic Review* (January-February): 3–22.
——— (1992) “Infrastructure Investment and Economic Growth.” *Journal of Economic Perspectives* 6 (4): 189–98.
Peterson, W. (1994) “Overinvestment in Public Sector Capital.” *Cato Journal* 14 (1): 65–73.
Poulson, B. W., and Kaplan, J. G. (2008) “State Income Taxes and Economic Growth.” *Cato Journal* 28 (1): 53–71.
Rauch, J. E. (1994) “Bureaucracy, Infrastructure, and Economic Growth: Theory and Evidence from U.S. Cities during the Progressive Era.” Department of Economics Discussion Paper No. 94–06. San Diego: University of California, San Diego.
——— (1995) “Bureaucracy, Infrastructure, and Economic Growth: Evidence from U.S. Cities during the Progressive Era.” *American Economic Review* 85 (4): 968–79.
Reed, W. R. (2008) “The Robust Relationship between Taxes and U.S. State Income Growth.” *National Tax Journal* 61 (1): 57–80.
Romer, P. M. (1986) “Increasing Returns and Long-run Growth.” *Journal of Political Economy* 94 (5): 1002–37.
Stansel, D. (2005) “Local Decentralization and Local Economic Growth: A Cross-Sectional Examination of U.S. Metropolitan Areas.” *Journal of Urban Economics* 57 (1): 55–72.
——— (2009) “Local Government Investment and Long-Run Economic Growth.” *Journal of Social, Political, and Economic Studies* 34 (2): 244–59.
——— (2011) “Why Are Some Cities Growing While Others Are Shrinking?” *Cato Journal* 31 (2): 285–303.
U.S. Department of Commerce, Bureau of the Census (1880) *Valuation, Taxation and Public Indebtedness*. Washington: U.S. Government Printing Office.
——— (1890) *Wealth, Debt and Taxation*. Washington: U.S. Government Printing Office.
——— (1904) *Statistics of Cities Having a Population of over 30,000*. Washington: U.S. Government Printing Office.
——— (1910, 1921) *Financial Statistics of Cities Having a Population of over 30,000*. Washington: U.S. Government Printing Office.
——— (1975) *Historical Statistics of the United States, Colonial Times to 1970*. Washington: U.S. Government Printing Office.
——— (2008) *Historical Overview of U.S. Census Bureau Data Collection Activities about Governments: 1850 to 2005*. Washington: U.S. Government Printing Office.
——— (various years) *Census of Manufactures*. Washington: U.S. Government Printing Office.
——— (various years) *Census of the Population*. Washington: U.S. Government Printing Office.
Walton, G. M., and Rockoff, H. (2002) *History of the American Economy*. 9th ed. Toronto: Thomson Learning.
Material Change Proposal
Title: Temporary Alteration to the Restricted Zone at St Pancras International – Incorporation of Platform 10a during Olympic and Paralympic Games
To: Shona Nettlingham (London South Eastern Railways)
Sophie Chapman (Eurostar International Limited)
Graham Maymon (East Midlands Trains)
This consultation is issued by HS1 Ltd in accordance with the HS1 Station Access Conditions (November 2010) Part 3.
Proposal for Change: Network Rail (CTRL), responding to the requirements for effectively managing the additional passengers using the London South Eastern Railways (“Southeastern”) service to Stratford International during the forthcoming Olympic and Paralympic Games, wishes to carry out the following work requiring a temporary Material Change Proposal in respect of St Pancras International station:
Overview
A revision to the Restricted Zone (RZ) between Platform 10 and the Eurostar Business Premier Lounge to include the platform known variously as Platform 10a or the Queens Platform within the Common Zone of the station. This involves the addition of a temporary timber screen along the entire length of the edge of Platform 10a, from the back of the Southeastern platform and along the concourse to the screen opposite the Betjeman Arms, maintaining the lift within the RZ. The screen will be of timber construction with steel supports and of the same height as the existing RZ screen. A presentation capturing the proposed alteration is contained in Appendix 1.
Sponsor: HS1 Limited
Date of Proposal: 26 April 2012
Representations/Objections by: 11 May 2012 (14 days from date of distribution)
Variation of Station Access Agreement(s).
Does the Proposal require the Station Access Conditions, their Annexes and/or Station Access Agreements to be varied?
No
HS1 Contact Details:
Name: Chinua Labor, Regulatory Contracts Manager
Address: 73 Collier Street, London, N1 9BE
Telephone: 0207 014 2758
E-Mail: email@example.com
Signed for HS1 Limited
Date
26 April 2012
Name of person signing
Chinua Labor
1. **REPRESENTATIONS/OBJECTIONS**
This Proposal for Change is a Material Change Proposal in accordance with Part 3 of the HS1 Station Access Conditions. However, in the meeting on 26 April 2012, a reduced timescale of 14 days was agreed by all Users at the station. On this basis, any representation on this proposal must be issued in writing by 11 May 2012.
**Contents:**
1.0. **Background**
2.0. **The Change Proposal**
3.0. **Scheme Benefits**
4.0. **Temporary Arrangements**
5.0. **Funding Arrangements**
6.0. **Proposed Implementation and Dates**
7.0. **Access for All**
8.0. **Other Information**
9.0. **Amendments to Station Access Conditions Annexes**
10.0. **Acceptance**
1. Background
During the forthcoming London 2012 Olympic and Paralympic Games, the forecast demand for the Southeastern high speed shuttle service between St Pancras International and Stratford International is significantly higher than current business-as-usual demand. On the busiest Olympic days, passenger numbers are forecast to double the patronage of the station as a whole compared with the demand usually experienced at the equivalent time of year. This requires a special Games-time methodology for the safe and efficient operation of the station, including the management of queuing passengers.
Further to extensive modelling work and testing, it has been established that three queuing areas are required to hold passengers and feed them on to the trains. The train service will also be significantly enhanced over this period, but queues will nevertheless build up during the peaks.
The use of Platform 10a for queuing passengers on to the Southeastern trains serves a number of important purposes as follows:
- queue holding capacity
- enhanced train loading capability
- enhanced vertical circulation capacity especially for Persons with Reduced Mobility
- separation of people flows especially during the busy afternoon peak
- contingent access to the platforms in the event of escalator or lift failure.
2. The Change Proposal
2.1 HS1 Station Access Conditions (HS1 SAC) Context
This Proposal for Change is a Material Change Proposal in accordance with Part 3 of the HS1 Station Access Conditions. Therefore approval is required from Eurostar International Limited and East Midlands Trains.
2.2 The Proposed Changes
The following equipment is to be provided/altered within the International Zone to provide access to the Domestic Southbound Zone at St. Pancras.
1. A temporary steel-supported, timber-faced RZ screen along the edge of Platform 10a, of the same height as the existing steel and glass RZ screen, extending across the station concourse and tying securely into the existing RZ screen at both ends.
2. Temporary removal of the glass panel and door at the Southeastern end of Platform 10a to maintain the necessary pedestrian walking width.
3. Temporary removal of the emergency egress doors in the RZ screen at the Betjeman end of the new walkway.
3. Scheme Benefits
The Proposal is necessary to manage the forecast passenger numbers who will be using the Southeastern service during the London 2012 Games.
4. Temporary Arrangements:
Suppliers have been advised that inconvenience to passengers is to be avoided by securely segregating the working site and by carrying out works requiring possession or power isolation mainly during non-disruptive hours.
5. Funding Arrangements:
The overall cost of the work is fully funded and will not require any contribution from the Train Operators.
5.1 Repairs and Maintenance
The works are temporary, within the Restricted Zone, and the area will be reinstated following the Games. No repairs or maintenance works are anticipated. Should any maintenance or repairs be necessary, they will not be funded by the train operators.
5.2 Long Term Charge & Qualifying Expenditure
There will be no changes to the Long Term Charge or Qualifying Expenditure. However, HS1 recognises that Eurostar will need to be compensated for the loss of this facility during the Games, and HS1 will therefore perform the calculations outside of this document.
6. Proposed Implementation and Dates
Proposed Implementation – May 2012
Proposed Removal – mid September 2012
7. Access for All
All works undertaken will comply with the ‘Accessible Train and Station Design for Disabled People’ Code of Practice, ‘Train and Station Services for Disabled Passengers’, and the Disability Discrimination Act 1995.
8. Other Information
The hoardings will be Network Rail grey and decorated only with sports pictograms relating to the Olympic theme. Appendix 2 provides a visual image of the hoardings following the alteration on Platform 10a.
9. Amendments to the Station Access Conditions Annexes
There are no Conditions Change Proposals arising from this Proposal.
10. Acceptance
This Proposal is deemed as a “Material Change Proposal” in accordance with HS1 SAC (Definitions), thus requiring the approval of all Voting Operators.
If you accept this Proposal, please return a copy of this form, completing, signing and dating the box below.
I confirm that my company [ ] approves this Proposal.
Signed................................................. Date ........................
Name of person signing:
(duly authorised signatory) on behalf of:
..........................................................
I confirm that my company [ ] approves this Proposal.
EAST MIDLANDS TRAINS LTD
Signed........................................ Date 26/4/12
Name of person signing: G R MAYMON
(duly authorised signatory) on behalf of:
EAST MIDLANDS TRAINS LTD
END
I confirm that my company London & South Eastern Railway (Southeastern) approves this Material Change Proposal - Temporary Alteration to the Restricted Zone at St Pancras International – Incorporation of Platform 10a during Olympic and Paralympic Games.
Signed........................................... S Nettlingham
Date ........................................... 4-5-12
Name of person signing:
Shona Nettlingham
(duly authorised signatory) on behalf of:
London & South Eastern Railway (Southeastern)
END
I confirm that my company [EIL] approves this Proposal.
Signed........................................ Date 15/5/12
Name of person signing: SOPHIE CHAPMAN
(duly authorised signatory) on behalf of:
EIL
END
Review: Datapath for MIPS
- Use datapath figure to represent pipeline
Review: Problems for Computers
- Limits to pipelining: **Hazards** prevent next instruction from executing during its designated clock cycle
- **Structural hazards**: HW cannot support this combination of instructions (single person to fold and put clothes away)
- **Control hazards**: pipelining of branches and other instructions that change the PC; the common solution is to **stall** the pipeline until the hazard is resolved, inserting “bubbles” into the pipeline
- **Data hazards**: Instruction depends on result of prior instruction still in the pipeline (missing sock)
Review: C.f. Branch Delay vs. Load Delay
- Load Delay occurs only if necessary (dependent instructions).
- Branch Delay always happens (part of the ISA).
- Why not have Branch Delay interlocked?
- Answer: Interlocks only work if you can detect hazard ahead of time. By the time we detect a branch, we already need its value ... hence no interlock is possible!
Outline
- Pipeline Control
- Forwarding Control
- Hazard Control
Piped Proc So Far ...
- ALU
- Register File
- Memory
- Control Unit
New Representation: Regs more explicit
IF/DE.Ir = Instruction
DE/EX.A = BusA out of Reg
EX/MEM.S = AluOut
EX/MEM.D = Bus B pass-through for sw
ME/WB.S = AluOut pass-through
ME/WB.M = Mem Result from lw
What’s Missing???
Pipelined Processor (almost) for slides
Idea: Parallel Piped Control …
Pipelined Control
Data Stationary Control
- The Main Control generates the control signals during Reg/Dec
- Control signals for Exec (ExtOp, ALUSrc, …) are used 1 cycle later
- Control signals for Mem (MemWr, Branch) are used 2 cycles later
- Control signals for Wr (MemtoReg, RegWr) are used 3 cycles later
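The timing can be sketched in a few lines of Python; this is a toy illustration (the control-word fields and opcode subset are hypothetical), not a description of real control hardware:

```python
# Sketch of data-stationary control: all control signals are generated in
# the Reg/Dec stage and then ride down the pipeline registers with their
# instruction, one stage per cycle.

def decode_controls(op):
    # Control words for a toy subset; field names are illustrative.
    table = {
        "lw":  dict(ExtOp=1, ALUSrc=1, MemWr=0, Branch=0, MemtoReg=1, RegWr=1),
        "sw":  dict(ExtOp=1, ALUSrc=1, MemWr=1, Branch=0, MemtoReg=0, RegWr=0),
        "add": dict(ExtOp=0, ALUSrc=0, MemWr=0, Branch=0, MemtoReg=0, RegWr=1),
    }
    return table[op]

ex_ctrl = mem_ctrl = wb_ctrl = None      # EX, MEM, WB control pipeline regs
for cycle, op in enumerate(["lw", "add", "sw"]):
    # On each clock edge the control words shift one register downstream.
    wb_ctrl, mem_ctrl, ex_ctrl = mem_ctrl, ex_ctrl, decode_controls(op)
    print(f"cycle {cycle}: EX.ALUSrc={ex_ctrl and ex_ctrl['ALUSrc']}, "
          f"MEM.MemWr={mem_ctrl and mem_ctrl['MemWr']}, "
          f"WB.RegWr={wb_ctrl and wb_ctrl['RegWr']}")
```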
Let’s Try it Out
10 lw r1, 36(r2)
14 addi r2, r2, 3
20 sub r3, r4, r5
24 beq r6, r7, 100
30 ori r8, r9, 17
34 add r10, r11, r12
100 and r13, r14, 15
### Start: Fetch 10

*(Datapath figure: Inst Mem, Reg File, Next PC = PC + 4, Mem Ctrl, and WB Ctrl, with the program listing above repeated on the slide.)*
### Fetch 14, Decode 10

*(Datapath figure: same datapath and program listing, one cycle later.)*
### Fetch 20, Decode 14, Exec 10

*(Datapath figure: same datapath and program listing, one cycle later.)*
### Fetch 24, Decode 20, Exec 14, Mem 10

*(Datapath figure: same datapath and program listing, one cycle later.)*
### Fetch 30, Dcd 24, Ex 20, Mem 14, WB 10

*(Datapath figure: same datapath and program listing, one cycle later.)*
### Fetch 100, Dcd 30, Ex 24, Mem 20, WB 14

*(Datapath figure: same datapath and program listing; the beq resolves in Ex and the branch target 100 is fetched.)*
**Note:** Delayed branch: always execute `ori` after `beq`
• Remember: $\wedge$ means triggered on edge.
• What is wrong here?
Double-Clocked Signals
• Some signals are double clocked!
• In general, inputs to edge components are their own pipeline regs
Watch out for stalls and such!
Administrivia
• HW 5 – Due Wednesday in class
• ProjWork 3.6 – Due 8/5 & 8/8
• Midterm 2:
• Friday, August 4: 11:00 – 2:00
• 390 Hearst Mining
• Same rules as last time
Outline
• Pipeline Control
• Forwarding Control
• Hazard Control
Review: Forwarding
Fix by Forwarding result as soon as we have it to where we need it:
- `add $10, $11, $12`
- `sub $14, $10, $13`
- `and $15, $10, $16`
- `or $17, $10, $18`
- `xor $19, $10, $11`

*(Pipeline diagram: each instruction occupies IF, ID, EX, MEM, and WB in successive cycles; the `add` result in $10 is forwarded to the instructions that follow.)*

* The `or` hazard is solved by register-file hardware: the write-back occurs in the first half of the cycle and the decode-stage read in the second half
Forwarding
In general:
• For each stage i that has reg inputs
- For each stage j after i that has a reg output
- If i.reg == j.reg, forward j’s value back to i.
- Some exceptions ($0, invalid)
In particular:
• ALUinput $\leftarrow$ (ALUResult, MemResult)
• MemInput $\leftarrow$ (MemResult)
Pending Writes In Pipeline Registers
- Current operand registers
- Pending writes
- The hazard condition compares the decode-stage operand registers against the pending write register (rw) in each downstream stage whose write-enable (regW) is set:

\[
\begin{aligned}
\text{hazard} \Leftarrow\; & \big((rs = rw_{ex}) \land regW_{ex}\big) \lor \big((rs = rw_{mem}) \land regW_{mem}\big) \lor \big((rs = rw_{wb}) \land regW_{wb}\big) \\
\lor\; & \big((rt = rw_{ex}) \land regW_{ex}\big) \lor \big((rt = rw_{mem}) \land regW_{mem}\big) \lor \big((rt = rw_{wb}) \land regW_{wb}\big)
\end{aligned}
\]
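In software form, this comparison amounts to the following minimal sketch; the pipeline-register names are hypothetical, not from any particular simulator, and $0 is excluded as on the slide above:

```python
# Minimal sketch of the hazard comparators: compare the decode-stage
# source registers rs/rt against the pending destination register rw in
# each downstream stage whose regW (write-enable) bit is set.

def hazard(rs, rt, pipe):
    """pipe: list of (rw, regW) pairs for the EX, MEM, and WB stages."""
    return any(regW and rw != 0 and rw in (rs, rt)   # $0 is never forwarded
               for rw, regW in pipe)

# sub $13, $10, $12 in decode while lw $10, ... sits in EX:
print(hazard(rs=10, rt=12, pipe=[(10, True), (0, False), (0, False)]))  # True
```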
Forwarding Muxes
- Detect the nearest valid write to an operand register and forward it into the op latches, bypassing the remainder of the pipe
- Increase muxes to add paths from pipeline registers
- Data Forwarding = Data Bypassing
What about memory operations?
Tricky situation:
MIPS:
lw $1, 0($10)
sw $1, 0($11)
RTL:
R1 <- Mem[R10 + 0];
Mem[R11 + 0] <- R1
Solution:
Handle with bypass in memory stage!
Outline
- Pipeline Control
- Forwarding Control
- Hazard Control
Data Hazard: Loads (1/4)
- Forwarding works if value is available (but not written back) before it is needed. But consider ...
```
lw $10, 0($11)
sub $13, $10, $12
and $15, $10, $14
or $17, $10, $16
```
- Need result before it is calculated!
- Must stall use (sub) 1 cycle and then forward. ...
Data Hazard: Loads (2/4)
- Hardware must stall pipeline
- Called “interlock”
```
lw $10, 0($11)
sub $13, $10, $12
and $15, $10, $14
or $17, $10, $16
```
Data Hazard: Loads (3/4)
- Instruction slot after a load is called “load delay slot”
- If that instruction uses the result of the load, then the hardware interlock will stall it for one cycle.
- If the compiler puts an unrelated instruction in that slot, then no stall
- Letting the hardware stall the instruction in the delay slot is equivalent to putting a nop in the slot (except the latter uses more code space)
Data Hazard: Loads (4/4)
- Stall is equivalent to nop
Hazards / Stalling
In general:
- For each stage i that has reg inputs
- If i’s reg is being written later on in the pipe but is not ready yet
- Stages 0 to i: Stall (Turn CEs off so no change)
- Stage i+1: Make a bubble (do nothing)
- Stages i+2 onward: As usual
In particular:
- ALUinput $\leftarrow$ (MemResult)
Hazards / Stalling
Alternative Approach:
- Detect non-forwarding hazards in decode
- Possible since our hazards are formal.
- Not always the case.
- Stalling then becomes:
- Issue nop to EX stage
- Turn off nextPC update (refetch same inst)
- Turn off InstReg update (re-decode same inst)
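A minimal sketch of this decode-time policy follows; the load-use check and all names are hypothetical (not any particular simulator’s interface):

```python
# Sketch of the decode-time stall alternative: on a load-use hazard, send
# a nop (bubble) to EX and freeze PC and IF/DE so the same instruction is
# re-fetched and re-decoded next cycle.

NOP = ("nop", 0, 0, 0)  # (opcode, dest, src1, src2)

def issue(ifde, ex_is_load, ex_dest):
    """Return (instruction issued to EX, whether to stall PC and IF/DE)."""
    op, rd, rs, rt = ifde
    load_use = ex_is_load and ex_dest != 0 and ex_dest in (rs, rt)
    if load_use:
        return NOP, True       # bubble into EX; re-decode same inst next cycle
    return ifde, False         # issue normally

inst = ("sub", 13, 10, 12)     # sub $13, $10, $12
print(issue(inst, ex_is_load=True,  ex_dest=10))  # load-use: stall
print(issue(inst, ex_is_load=False, ex_dest=10))  # no load in EX: issue
```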
Stall Logic
• 1. Detect non-resolving hazards.
• 2a. Insert Bubble
• 2b. Stall nextPC, IF/DE
Stall Logic
• Stall-on-issue is used quite a bit
• More complex processors: many cases that stall on issue.
• More complex processors: cases that can’t be detected at decode
- E.g. value needed from mem is not in cache
– proc must stall multiple cycles
By the way …
• Notice that our forwarding and stall logic is stateless!
• Big Idea: Keep it simple!
• Option 1: Store old fetched inst in reg (“stall temp”), keep state reg that says whether to use stall temp or value coming off inst mem.
• Option 2: Re-fetch old value by turning off PC update.
A beam search heuristic method for mixed-model scheduling with setups
P.R. McMullen\textsuperscript{a,*}, Peter Tarasewich\textsuperscript{b}
\textsuperscript{a}Babcock Graduate School of Management, Wake Forest University, Winston-Salem, NC 27109, USA
\textsuperscript{b}Northeastern University, College of Computer and Information Science, 360 Huntington Avenue, Boston, MA 02115, USA
Received 1 May 2003; accepted 1 December 2003
Available online 17 July 2004
Abstract
Mixed-model scheduling involves determining a production sequence for multiple products along a single assembly line. While the goal of using this approach is often determining a schedule which keeps the usage of parts as constant as possible, there is an implicit but often unrealistic assumption that setup times are negligible. When setup times are assumed to be significant, the sequencing decision becomes a multi-objective problem which needs to minimize both the usage rate and the number of setups.
However, the objectives of a low usage rate and a minimum number of setups are frequently in opposition with one another. To address this situation, this research presents an efficient frontier approach to support sequencing decisions that are made as a result of the compromise necessitated by these two conflicting goals. A beam-search heuristic is used to effectively generate the efficient frontiers needed to solve the problem.
© 2004 Elsevier B.V. All rights reserved.
Keywords: Just-in-time systems; Beam search; Heuristics; Dual-objective decision-making
1. Introduction
Just-in-time (JIT) is a management philosophy that uses a set of integrated activities to achieve manufacturing flexibility with minimal inventories. Successful implementation of JIT production systems can help organizations achieve competitive priorities such as low cost and consistent quality (Gilbert, 1990; Huson and Nanda, 1995). Specific management practices associated with JIT manufacturing systems vary, but include total quality control, uniform workload, multifunction employees, and reduced setup times (White and Ruch, 1990). Overall, adopters of JIT have shown significant improvements in performance (White, 1993).
One manufacturing problem that is often associated with JIT practices is mixed-model
scheduling. Solving this problem involves determining a production sequence for multiple products along a single assembly line. Manufacturers will use this type of scheduling technique to meet diversified customer demands while maintaining minimal product inventories. Because JIT systems are concerned with having the right parts at the right place at the right time, the goal of a company using this approach is often determining a schedule which keeps the usage of every part in the assembly line as constant as possible.
But the implicit assumption in the mixed-model sequencing problem is that setup times between the different products on the assembly line are negligible. A setup is required each time two consecutive items in the production sequence are different. While it is acknowledged that small setup times are an essential ingredient for JIT success, it is more realistic in certain cases to assume small but non-negligible setup times. This research addresses the mixed-model assembly line problem with significant setup times.
When setup times are taken into account, the goal of the problem changes. It becomes sequencing products as evenly as possible while minimizing the number of setups that occur when switching between different products. A good sequence of products should not only have an acceptable level of product inter-mixing, but also an acceptable number of required set-ups. In this situation, the sequencing decision becomes a multi-objective problem, minimizing both usage rate and number of setups.
The problem is challenging because the goals of a low usage rate and a minimum number of setups are frequently in opposition with one another (McMullen and Frazier, 2000). A manager of a production line must determine desirable sequences with respect to the tradeoff between setups and usage rates. To lessen the difficulties in assessing these tradeoffs, this research uses an efficient frontier approach to support the decision that is made as a result of the compromise between the two conflicting goals.
This paper proceeds as follows. First, a description of the mixed-model scheduling problem with setups is given. Next, an efficient frontier approach to aid in the determination of effective solutions to the problem is presented. A beam-search heuristic is then used to effectively generate the efficient frontiers needed to solve the problem. The paper concludes with a summary of the work and future research directions.
2. Problem definition
An assembly line producing more than one product at a given time in an intermixed fashion is known as a mixed-model line. Higher performance with this type of production line can be achieved through optimized product flow. Often a goal in scheduling production on the mixed-model line is keeping the usage of parts for the different products being produced as level as possible. Constant use of parts allows easier implementation of a JIT manufacturing environment due to lower variations in production quantities and work-in-progress inventories. This problem has been widely studied, first by Monden (1983), and more recently by Miltenburg and Sinnamon (1989, 1992, 1995), Sumichrast et al. (1992), Bolat (1994), Tamura et al. (1999), Bard et al. (1992), Inman and Bulfin (1991), Xiaobo and Ohno (1994), and Xiaobo et al. (1999). As another source for the curious reader, Yano and Bolat (1989), and Ghosh and Gagnon (1989) offer research pertaining to the planning and scheduling of assembly systems.
The mixed-model sequencing problem, however, implicitly assumes negligible setup times between the different products on the line. While minimal setup times are a key to JIT success, it is more realistic in certain cases (e.g., automobile assembly) to assume significant setup times. Here, the goal of the problem changes to sequencing products as evenly as possible while minimizing the number of setups that occur—a multi-objective problem, where the objectives are frequently in opposition with one another (McMullen and Frazier, 2000).
Consider the situation where four units of item A need processing, along with two units of item B and one unit of item C. If minimization of setups were desired, one possible sequence would be: AAAABBC. This sequence would result in the minimum of three setups (a required setup is
assumed here for A at the start of the processing). Unfortunately, this sequence does not provide evenness of the material usage rate (or desirable intermixing). The following sequence does provide more stability of the material usage rate: ABACABA. Unfortunately, this sequence requires the maximum of seven setups. These two simple examples illustrate the tradeoff between required setups and stability of the material usage rate, a factor which complicates decision-making.
A mathematical formulation for the mixed-model scheduling problem with setups is now presented. First, the following parameters are defined, and it is detailed as to whether these parameters are determined by the system (endogenous) or pre-determined (exogenous):
- \( U \): usage rate of a production sequence (endogenous)
- \( S \): number of setups in a production sequence (endogenous)
- \( U_S \): usage rate of production sequence associated with \( S \) setups (endogenous)
- \( a \): number of unique products to be produced (exogenous)
- \( d_i \): demand for product \( i \), \( i = 1, 2, \ldots, a \) (exogenous)
- \( D_T \): total number of units for all products or total demand—also represents number of positions in sequence (exogenous)
- \( s_k \): 1 if setup required; 0 otherwise (endogenous)
- \( x_{i,k} \): total number of units of product \( i \) produced over stages 1 to \( k \), where \( k = 1, 2, \ldots, D_T \) (endogenous)
The problem has two objective functions. The first is the minimization of the number of required setups. The number of setups (\( S \)) in a production sequence is
\[
\text{Minimize: } \quad S = 1 + \sum_{k=2}^{D_T} s_k,
\]
(1)
where \( k \) is the index of the position in the sequence. If the item in position \( k \) differs from the item in the previous position (position \( k - 1 \)), then a setup is required. Mathematically: \( s_k = 1 \) if a setup is required in position \( k \); otherwise \( s_k = 0 \). The following assumptions are made regarding setups:
- An initial setup is required regardless of sequence, which is why the index \( k \) starts at 2.
- The time required to perform a setup is assumed to be sequence-independent, so the setup time for an item on a machine is not determined by the item which precedes it.
- The number of setups required represents the required setup time. Therefore, the total setup time is directly proportional to the total number of setups.
The second objective is to optimize the stability of raw material usage rates. Monden (1983) was the first to formally recognize this important objective as it relates to JIT sequencing. Miltenburg (1989) developed a metric to measure this stability of parts usage. This metric is referred to as the Usage Rate (\( U \)), and is defined as follows:
\[
\text{Minimize: } \quad U = \sum_{k=1}^{D_T} \sum_{i=1}^{a} \left( x_{i,k} - k \frac{d_i}{D_T} \right)^2.
\]
(2)
Stability of parts usage will be achieved through minimization of the above usage rate. Thus, there are two objectives of interest here for JIT production sequences: minimization of both setups and usage rate.
Both objectives are subject to the following constraints, guaranteeing the pre-specified product-mix:
\[
x_{i,D_T} = d_i, \text{ for } i = 1, 2, \ldots, a.
\]
(3)
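As an illustration (a minimal sketch, not the authors’ code), the two objectives can be computed for a candidate sequence as follows; the demand dictionary reproduces the A/B/C example above, and the two sample sequences recover the values of three and seven setups discussed earlier.

```python
# A small sketch computing the two objectives for a candidate sequence:
# setups S from Eq. (1) and usage rate U from Eq. (2).
from collections import Counter

def setups(seq):
    """S = 1 (initial setup) + one setup per change between positions."""
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

def usage_rate(seq, demand):
    """U = sum over stages k and products i of (x_ik - k*d_i/D_T)^2."""
    DT, produced, U = len(seq), Counter(), 0.0
    for k, item in enumerate(seq, start=1):
        produced[item] += 1
        U += sum((produced[i] - k * d / DT) ** 2 for i, d in demand.items())
    return U

demand = {"A": 4, "B": 2, "C": 1}
for seq in ("AAAABBC", "ABACABA"):
    print(seq, setups(seq), round(usage_rate(seq, demand), 3))
```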
### 3. An efficient frontier approach
Past research has demonstrated that there is an inherent tradeoff between usage rate and the number of setups (McMullen and Frazier, 2000). As a result, the two objectives, which oppose each other, must be simultaneously optimized. One way to do this is to construct a composite objective function, which could search for desirable combinations of number of setups and usage rates.
Construction of a composite objective function involves weighting both components of the objective function, which requires the decision-maker to determine weighting schemes. Determination of weighting schemes can be problematic, due to the fact that the user is dealing with two metrics measured in different units—which can have confounding effects on the composite objective function value.
To avoid the potentially dangerous issue of weighting schemes, an efficient frontier approach is used here to obtain sequences having desirable combinations of numbers of required setups and usage rates. This approach also presents the decision-maker with a graphical tool to aid in the process of evaluating trade-offs and selecting the final production sequence to be implemented. This type of problem lends itself well to an efficient frontier approach because there are exactly two objectives (number of setups and usage rate), with one being a discrete measure (number of setups) and the other a continuous measure (usage rate). This means that the decision-maker could strive to obtain minimal values of the continuous measure (usage) for each value of the discrete measure (number of setups). If the number of setups were continuous and not discrete, the approach described above would not be possible. An efficient frontier is a collection of points where one axis typically presents a discrete variable, while the other presents an optimal value of another variable at each unique level of the discrete variable. An efficient frontier approach has become a popular means of addressing multiple objective optimization problems.
The example in Fig. 1 shows how the number of setups and usage rate could be translated into an efficient frontier. Point 1 is on the frontier, while Point 2 is not. Both points have the same number of setups, but the sequence represented by Point 2 has a higher usage rate than the sequence represented by Point 1. In short, the sequence represented by Point 2 is dominated by the sequence represented by Point 1. Similarly, the sequence represented by Point 4 is dominated by the sequence represented by Point 3 for the same reasons, although the number of setups here is different than for the sequences represented by Points 1 and 2.
Any sequence having a combination of number of setups and usage rate that is on the efficient frontier is considered dominant. Any sequence “northeast” of the frontier is considered dominated by the sequences on the frontier.
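Given any pool of evaluated sequences, the frontier itself is cheap to extract: for each distinct number of setups, keep the sequence with the lowest usage rate. A minimal sketch (illustrative names; the input is assumed to be (setups, usage, sequence) triples):

```python
def efficient_frontier(points):
    """Filter (num_setups, usage_rate, sequence) tuples down to the frontier:
    for each distinct number of setups, keep the lowest usage rate seen."""
    best = {}
    for n_setups, usage, seq in points:
        if n_setups not in best or usage < best[n_setups][0]:
            best[n_setups] = (usage, seq)
    return sorted((n, u, s) for n, (u, s) in best.items())

# The dominated point (5 setups, usage 9.0) is dropped in favor of (5, 7.5)
pool = [(5, 7.5, "AAABBC"), (5, 9.0, "AABBAC"), (6, 6.0, "AABABC")]
print(efficient_frontier(pool))
```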
### 3.1. Determining the best mixed-model schedule using the efficient frontier
Use of an efficient frontier affords managers and other decision-makers the ability to find sequences providing acceptable levels of both the number of setups and the usage rate. The procedure is straightforward: the decision-maker first examines the efficient frontier generated for the sequencing problem of interest and chooses the maximum number of setups that they can be comfortable with. From there, they select the sequence on the frontier yielding the minimum usage rate, as presented above.
The structure of the efficient frontier also provides the ability to perform sensitivity analyses. For example, a decision-maker could examine a "neighboring" point along the frontier to see how much usage rate must be traded for a change in the number of setups, and thereby find the most appropriate sequence for their needs. Point 5 in Fig. 1 illustrates a possible starting point for such a sensitivity analysis.
This graphical method of displaying a set of efficient solutions provides a simple yet effective way for managers to quickly evaluate the tradeoffs involved in choosing a manufacturing sequence. It helps to prevent the potentially poor decisions that can result from a single solution calculated from user-determined weighting values.
Given that this research effort is concerned with finding the minimal usage rate for each associated number of required setups, the objective function can be presented in traditional mathematical programming formatting as follows:
\[
\text{Minimize: } U_S = \sum_{k=1}^{D_T} \sum_{i=1}^{a} \left( x_{i,k} - k d_i / D_T \right)^2, \quad \text{for all unique values of } S
\]
(4)
subject to:
\[
x_{i,D_T} = d_i, \quad \text{for } i = 1, 2, \ldots, a.
\]
(5)
### 4. Constructing the efficient frontier
Constructing the efficient frontier for any list of items to be sequenced requires the decision-maker to confront the fact that problems of this type are combinatorial—small increases in problem size result in large increases in the computational resources needed to find optimal solutions. The number of unique sequences when there are \( d_i \) units demanded for each of the \( a \) different items is expressed as follows:
\[
\text{Unique sequences} = \frac{\left(\sum_{i=1}^{a} d_i\right)!}{\prod_{i=1}^{a} (d_i!)}.
\]
(6)
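This count is a standard multinomial coefficient and can be checked directly; a small sketch (assuming the demands are given as a list of the \( d_i \)):

```python
from math import factorial

def unique_sequences(demands):
    """Eq. (6): the multinomial count of distinct production sequences."""
    total = factorial(sum(demands))
    for d in demands:
        total //= factorial(d)
    return total

print(unique_sequences([2, 1, 1]))        # 12, the Fig. 2 example
print(unique_sequences([5, 3, 3, 3, 1]))  # 50,450,400 (Section 4.2)
```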
Because these types of sequencing problems have so many possible permutations, traditional optimization techniques such as linear or integer programming are not practical approaches. As a result, search heuristics provide an opportunity to find desirable solutions with a reasonable expenditure of computational effort.
### 4.1. A beam search heuristic
Beam search is a heuristic search procedure that can be used to effectively address the problem of interest here. It is a modification of a breadth-first search (Norvig, 1992). Since its inception, beam search has been used for a variety of job-shop scheduling approaches (Fox, 1983; Ow and Morton, 1988; Ow and Morton, 1989; Ow and Smith, 1988). De et al. (1992) used a beam search approach to construct an efficient frontier exploiting a weighted linear combination of mean and variance of job completion time on a single machine. Nair et al. (1995) used beam search to design product lines with the objective of optimizing the mutual satisfaction of buyers and sellers.
All permutations of the problem addressed here can be represented via a search tree—the leaves at the bottom level of the tree represent all possible sequences. When branching from a node to its lower-level nodes, beam search only permits branching on the \( b \) most promising lower-level nodes (the other nodes are permanently pruned). The parameter \( b \) is referred to as the beam width, and the decision-maker chooses its value. Beam search results in less branching than the breadth-first search, which subsequently results in less computational effort. Another parameter chosen by the decision-maker is the pruning depth, depth, which is the level in the search tree where actual pruning commences. Full branching takes place at all levels of the search tree above depth. Giving the decision-maker control over the value of depth provides them with control as to how much branching will occur prior to pruning. For both beam width \( b \) and depth, larger values result in more sequences being evaluated—this also results in the requirement of more computational resources.
The search tree in Fig. 2 shows all permutations of the simple sequencing problem where there are three different products (\( a = 3 \)). There are two units of item A demanded (\( d_1 = 2 \)), and one unit each of items B and C (\( d_2 = 1, \ d_3 = 1 \)), which results in 12 possible sequences:
\[
\frac{(2 + 1 + 1)!}{(2!)(1!)(1!)} = \frac{24}{2} = 12.
\]
At each level of the search tree, the items to the left of the vertical bracket represent items that have been placed into the sequence of interest, while the items to the right of the vertical bracket represent items that have not yet been put into the sequence. For example, at level 2 of the search
tree, AC|AB suggests that AC are already in the partial sequence, while (the second) A and B are not—they will be placed in the sequence at levels 3 and 4. Note that level 4 represents all possible sequences, or permutations of this simple problem. It is appropriate to note that beam search is presented here as a means of traversal through the enumeration tree—a way to get to the lowest levels of the tree without having to explore all branches of the tree. One could think of beam search as an efficient means of tree traversal.
Fig. 3 shows a beam search solution to the same example problem where the beam width is $b = 2$ and the depth at which pruning commences is depth = 2. At level 2, only the most promising partial sequences (in terms of usage rate) are kept—other partial sequences are pruned. For example, the partial sequences BA|AC and BC|AA are the resultant partial sequences of B|AAC. BA|AC is kept and BC|AA is pruned because BA|AC's usage rate of 1.375 (from Eq. (2)) is less than BC|AA's usage rate of 2.375 (from Eq. (2)). In situations where there is a "tie" for lowest usage rate, the "leftmost" node is selected for future branching. This is illustrated by the resultant nodes of A|ABC—AB|AC and AC|AB. Here AB|AC is kept and AC|AB is pruned because AB|AC is left of AC|AB, despite the fact that they have the same usage rate.
Another important point to make here is that usage rate, not number of setups, is used to determine which nodes are pruned. The number of setups could be used to determine pruning status, but experimentation showed that using this as a criterion resulted in relatively inferior solutions in terms of efficient frontier performance due to “frontier voids”—this issue is further addressed in the results section of the paper.
Fig. 2 shows the “full” enumeration tree for the example problem, while Fig. 3 shows the “partial” enumeration for the example problem made possible by the beam search approach.
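A compact sketch of the traversal just described appears below. It prunes to the \( b \) lowest-usage-rate partial sequences per level once level depth is reached; this follows the spirit of Figs. 2 and 3, though the exact tie-breaking and pruning bookkeeping of the original implementation may differ. All names are illustrative:

```python
def usage_rate(seq, demand):
    """Partial usage rate (Eq. (2)) of a sequence prefix; repeated from
    the earlier sketch so this block is self-contained."""
    D_T = sum(demand.values())
    counts = dict.fromkeys(demand, 0)
    U = 0.0
    for k, item in enumerate(seq, start=1):
        counts[item] += 1
        U += sum((counts[i] - k * demand[i] / D_T) ** 2 for i in demand)
    return U

def num_setups(seq):
    """One setup per run of identical consecutive items."""
    return 1 + sum(x != y for x, y in zip(seq, seq[1:])) if seq else 0

def beam_search_frontier(demand, b=2, depth=2):
    """Traverse the sequencing tree, pruning to the b most promising
    (lowest partial usage rate) nodes from level `depth` downward, and
    collect the surviving leaves into a frontier."""
    level = [()]
    for k in range(1, sum(demand.values()) + 1):
        children = [seq + (item,)
                    for seq in level
                    for item, d in demand.items() if seq.count(item) < d]
        if k >= depth:                      # pruning commences at this level
            children.sort(key=lambda s: usage_rate(s, demand))
            children = children[:b]
        level = children
    frontier = {}
    for seq in level:
        s, u = num_setups(seq), usage_rate(seq, demand)
        if s not in frontier or u < frontier[s][0]:
            frontier[s] = (u, "".join(seq))
    return dict(sorted(frontier.items()))

# The Fig. 2/3 example: a = 3, d = (2, 1, 1)
print(beam_search_frontier({"A": 2, "B": 1, "C": 1}, b=2, depth=2))
```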
### 4.2. Example problem
To illustrate the described methodology, an example problem is presented. Consider the situation where five units of item A are required,
three units each of items B, C, and D are required, and one unit of item E. One possible sequence is AAAAABBBCCCDDDE. All possible sequences for this problem were enumerated—there are 50,450,400 of them. There are as few as five setups possible (the sequence above requires five setups), and as many as 15 possible setups (ABCDABCDABCDAEA is one such sequence, since every adjacent pair of items differs). During the enumeration process, the minimum usage rate for each possible number of setups is found. This comprises the optimal efficient frontier. This problem was also addressed via beam search, with a beam width of $b = 2$, and pruning commencing at depth = 4. The minimum usage rate for each possible number of setups was also found for this beam search approach. Both efficient frontiers are presented in Fig. 4.
As one can see from Fig. 4, the beam search approach does yield consistently higher usage rates than the optimal solution. But of the 50,450,400 enumerated solutions, only 12,660 were found to be superior to those obtained via beam search for each possible number of setups. This places the beam search result in the 99.9749th percentile for this example problem—a near-optimal condition. The beam search solution also took only 1.43 minutes to compute, versus the 25.54 minutes required for complete enumeration.
### 5. Experimentation and results
### 5.1. Test problems and parameters
Several test problems were used to evaluate the beam search approach, both in terms of performance relative to optimality and in terms of CPU time. Problems were obtained from two sources. The first source was a specific wood-processing operation—turning/lathing wooden sticks into vertical members for stairway banisters—where there is demand for each type of vertical member and the setup time between differing items is non-negligible. The second source was the literature (Sumichrast and Russell, 1990); the demand dynamics of these problems are consistent with that work and illustrate varying degrees of single-item product-mix dominance. These test problems, which appear in the Appendix, were solved under varying sets of beam search parameters; a listing of these parameters is provided in Table 1.
### 5.2. Heuristic performance
In terms of beam search performance, two measures are of interest: objective function performance and CPU time. Objective function performance means a comparison of the efficient frontier obtained via beam search and the optimal efficient frontier obtained via enumeration. CPU time here means a comparison of the CPU time required for the beam search approach and the CPU time required for complete enumeration. These algorithms were run on a system with dual Pentium III processors with a clock speed of 500 MHz.
A comparison of beam search results and complete enumeration is summarized in Table 2. The values in Table 2 require some explanation. The CPU ratio is as follows:
\[
\text{CPU Ratio} = \frac{\text{Beam Search CPU Time}}{\text{Complete Enumeration CPU Time}}
\]
Lower CPU ratios reflect relatively little beam search CPU time as compared to the CPU time for complete enumeration—lower CPU ratios are desired.
Average inferiority is straightforward—the average amount the usage rate for the beam search solution is in excess of the usage rate for the optimal solution at each level of required setups.
Frontier voids represent the number of times the beam search heuristic was unable to find a sequence for a specific number of setups. Fig. 5 presents a graphic example of a situation where frontier voids result from a search.
Here, there are two situations where the search did not find a sequence for a specific number of setups: seven setups and eleven setups. Frontier voids indicate a weakness of the search: the inability to find solutions for every possible number of setups.
Table 1
Beam search parameter values

| Parameter label | Beam width ($b$) | Pruning depth (depth) |
|-----------------|------------------|-----------------------|
| 1 | 1 | 2 |
| 2 | 2 | 2 |
| 3 | 2 | 3 |
| 4 | 2 | 4 |
| 5 | 2 | 5 |
| 6 | 3 | 2 |
| 7 | 3 | 3 |
| 8 | 2 | 6 |
Table 2
Performance measures organized by parameter values
| Parameter label | CPU ratio (Std. dev.) | Avg. inferiority % (Std. dev.) | Frontier voids (Std. dev.) |
|-----------------|-----------------------|--------------------------------|----------------------------|
| 1 | 0.0623 (0.0908) | 19.69 (11.93) | 5.667 (2.33) |
| 2 | 0.2237 (0.3091) | 13.77 (5.89) | 1.89 (1.02) |
| 3 | 0.3376 (0.3451) | 10.06 (5.32) | 0.06 (0.24) |
| 4 | 0.5030 (0.4630) | 7.51 (4.58) | 0.00 (0.00) |
| 5 | 0.7800 (0.6620) | 5.49 (3.25) | 0.00 (0.00) |
| 6 | 1.0840 (0.6040) | 3.45 (2.87) | 0.44 (0.51) |
| 7 | 1.9760 (1.0930) | 2.13 (2.24) | 0.06 (0.25) |
| 8 | 0.9670 (0.6620) | 3.21 (2.00) | 0.00 (0.00) |
Minimum frontier voids are desired. It should be noted, however, that it is quite possible for frontiers obtained via complete enumeration to have voids as well, when no feasible sequence attains a particular number of setups.
From inspection of Table 2, it is clear that parameter labels 1 and 2 result in undesirable solutions—there are many frontier voids and the level of inferiority is high. These levels of beam width and pruning depth will be considered no further. One may also notice that parameter labels 6, 7 and 8 result in desirable solutions in terms of average inferiority (near-optimal solutions). The problem with these solutions is that they are CPU intensive—they require essentially as much CPU time as, or more than, the optimal solutions obtained via complete enumeration. As a result, solutions obtained via the beam width and pruning depth of parameter labels 6, 7 and 8 will also be considered no further.
As a result of this "elimination" process, only solutions obtained via the beam width and pruning depth represented by parameter labels 3, 4 and 5 remain. These solutions have reasonably low levels of inferiority, CPU ratios consistently less than unity (which indicates less CPU time than complete enumeration), and minimal frontier voids. Solutions obtained by these combinations of $b$ and depth are investigated further—Table 3 shows these results.
Table 3 demonstrates that, in terms of percentile performance, any of these approaches results in near-optimal performance. It does seem, however, that increasing the pruning depth to 4 and/or 5 provides improvements that perhaps do not justify the additional CPU resources. Therefore, for the problems considered here, use of a beam width of $b = 2$ and a pruning depth of depth = 3 seems most reasonable.
### 5.3. Unique and beneficial features of problem structure
This type of problem has some features that can simplify finding desirable solutions for larger problems. The sequences obtained via the presented methodology can be “mapped,” or extrapolated to obtain solutions for larger problems. Specifically, replication can be employed to accommodate larger problems. Consider the example presented earlier in the problem definition section of the paper: demand for four units of item A, demand for two units of item B and demand for one unit of item C. Assuming that the following sequence is obtained via the heuristic: AABCAAB, the following sequence could be subsequently obtained via replication if demand for each unique item were tripled: AABCAAB|AABCAAB|AABCAAB.
Furthermore, consider the possibility where a practitioner is interested in integrating an additional unique item into the mix. For example, assume that one unit of unique item D is demanded, given the "tripling" of the original product-mix. Item D could be placed somewhere in the middle of the sequence as follows: AABCAAB|AABCDAAB|AABCAAB. Or, if two units of item D are demanded, then these two units could be placed in the sequence as follows: AABCAABD|AABCAAB|DAABCAAB (if minimal usage rate is desired, spreading the units of D apart), or AABCAAB|AABCDDAAB|AABCAAB (if minimal setups are desired, keeping the units of D adjacent).
This replication and extrapolation can be used to retro-fit solutions obtained via the presented methodology to larger problems, which is beneficial considering the combinatorial and memory limitations associated with the presented approach. Wantuck (1989) provides some additional guidelines regarding the “drop-ins” associated with adding new items to the sequence.
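A sketch of this replication-with-drop-ins idea (names and insertion positions are illustrative only):

```python
def replicate(base, times, drop_ins=()):
    """Scale a base sequence by repetition, then splice in new items.

    base     -- heuristic sequence for the unit problem, e.g. "AABCAAB"
    times    -- factor by which every item's demand is multiplied
    drop_ins -- (position, item) pairs to insert into the scaled sequence
    """
    seq = list(base * times)
    for pos, item in sorted(drop_ins, reverse=True):
        seq.insert(pos, item)
    return "".join(seq)

print(replicate("AABCAAB", 3))               # tripled demand
print(replicate("AABCAAB", 3, [(11, "D")]))  # one unit of item D dropped in
```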
### 6. Conclusion
This paper presented the mixed-model sequencing problem with setups. Two conflicting objectives arise when addressing this problem, both of which are important to successful JIT implementation. Given these conflicting objectives, an efficient frontier approach has been developed as an aid to decision-makers in evaluating the tradeoffs involved and choosing a final manufacturing sequence. A beam search heuristic is used to generate the efficient frontiers effectively. Near-optimal solutions are obtained with reasonable computational effort. This type of search heuristic, as opposed to others, permits decision-makers to exploit representative components of the entire search space, rather than being "at the mercy" of stochastic search mechanisms (e.g., simulated annealing, genetic algorithms, or tabu search). As a next step in this research, a formal decision support system might be implemented to allow managers to quickly generate the efficient frontiers for problems that they encounter.
This research has its limitations. For beam search, it is necessary to hold "in memory" all nodes of the current level so that relevant information can be passed from parent nodes to child nodes. While the heuristic performs well for the problem sets investigated in this paper, as problem size increases it may be constrained by the amount of computer memory available for processing. The authors are currently addressing this concern as an extension of this study. Another opportunity for future research would be to implement artificial neural network approaches to this type of problem. Specifically, Kohonen's self-organizing maps (Kohonen, 1990) and the approach of Hopfield and Tank (1985) could be adapted to address this dual-objective decision-making problem.
### Appendix
See Tables 4 and 5.
Table 4
Problem set 1: (number of each product type in product mix—total demand is 12)
| Problem | Item 1 | Item 2 | Item 3 | Item 4 | Item 5 | Total sequences |
|---------|--------|--------|--------|--------|--------|-----------------|
| B | 8 | 1 | 1 | 1 | 1 | 11,880 |
| C | 7 | 2 | 1 | 1 | 1 | 47,520 |
| D | 6 | 3 | 1 | 1 | 1 | 110,880 |
| E | 6 | 2 | 2 | 1 | 1 | 166,320 |
| F | 5 | 3 | 2 | 1 | 1 | 332,640 |
| G | 5 | 2 | 2 | 2 | 1 | 498,960 |
| H | 4 | 3 | 2 | 2 | 1 | 831,600 |
| I | 4 | 4 | 2 | 1 | 1 | 415,800 |
| J | 3 | 3 | 2 | 2 | 2 | 1,663,200 |
Table 5
Problem set 2: (number of each product type in product mix—total demand is 15)
| Problem | Item 1 | Item 2 | Item 3 | Item 4 | Item 5 | Total sequences |
|---------|--------|--------|--------|--------|--------|-----------------|
| B | 11 | 1 | 1 | 1 | 1 | 32,760 |
| C | 10 | 2 | 1 | 1 | 1 | 180,180 |
| D | 9 | 3 | 1 | 1 | 1 | 600,600 |
| E | 7 | 5 | 1 | 1 | 1 | 2,162,160 |
| F | 7 | 3 | 2 | 2 | 1 | 10,810,800 |
| G | 6 | 3 | 3 | 2 | 1 | 25,225,200 |
| H | 5 | 3 | 3 | 3 | 1 | 50,450,400 |
| I | 4 | 3 | 3 | 3 | 2 | 126,126,000 |
| J | 3 | 3 | 3 | 3 | 3 | 168,168,000 |
### References
Bard, J., Dar-El, E., Shtub, A., 1992. An analytic framework for sequencing mixed-model assembly lines. International Journal of Production Research 30, 35–48.
Bolat, A., 1994. Sequencing jobs on an automobile assembly line: Objectives and procedures. International Journal of Production Research 32, 1219–1236.
De, P., Ghosh, J.B., Wells, C.E., 1992. Heuristic estimation of the efficient frontier for a bi-criteria scheduling problem. Decision Sciences 23, 596–609.
Fox, M.S., 1983. Constraint-directed search: A case study of job-shop scheduling. Unpublished doctoral dissertation, Carnegie-Mellon University, Pittsburgh.
Ghosh, S., Gagnon, R., 1989. A comprehensive literature review and analysis of the design, balancing and scheduling of assembly systems. International Journal of Production Research 27, 637–670.
Gilbert, J.P., 1990. The state of JIT implementation and development in the USA. International Journal of Production Research 28, 1099–1109.
Hopfield, J.J., Tank, D.W., 1985. “Neural” computation of decisions on optimization problems. Biological Cybernetics 52, 141–152.
Huson, M., Nanda, D., 1995. The impact of just-in-time manufacturing on firm performance in the US. Journal of Operations Management 12, 297–310.
Inman, R., Bulfin, R., 1991. Sequencing JIT mixed-model assembly lines. Management Science 37, 901–904.
Kohonen, T., 1990. The self-organizing map. Proceedings of the IEEE 78, 1464–1480.
McMullen, P.R., Frazier, G.V., 2000. A simulated annealing approach to mixed-model sequencing with multiple objectives on a JIT line. IIE Transactions 32, 679–686.
Miltenburg, J., Sinnamon, G., 1989. Scheduling mixed-model multi-level just-in-time production systems. International Journal of Production Research 27, 1487–1509.
Miltenburg, J., Sinnamon, G., 1992. Algorithms for scheduling multi-level just-in-time production systems. IIE Transactions 24, 121–130.
Miltenburg, J., Sinnamon, G., 1995. Revisiting the mixed-model multi-level just-in-time scheduling problem. International Journal of Production Research 33, 2049–2052.
Miltenburg, J., 1989. Level schedules for mixed-model assembly lines in just-in-time production systems. Management Science 35, 192–207.
Monden, Y., 1983. Toyota Production System, The Institute of Industrial Engineers, Norcross, GA.
Nair, S.K., Thakur, L.S., Wen, K., 1995. Near optimal solutions for product line design and selection: Beam search heuristics. Management Science 41, 767–785.
Norvig, P., 1992. Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. Morgan Kaufmann Publishers, San Francisco, CA.
Ow, P.S., Morton, T.E., 1988. Filtered beam search in scheduling. International Journal of Production Research 26, 35–62.
Ow, P.S., Morton, T.E., 1989. The single machine early/tardy problem. Management Science 35, 177–191.
Ow, P.S., Smith, S.F., 1988. Viewing scheduling as an opportunistic problem-solving process. Annals of Operations Research 12, 85–108.
Sumichrast, R.T., Russell, R.S., 1990. Evaluating mixed-model assembly line sequencing heuristics for just-in-time production systems. Journal of Operations Management 9, 371–390.
Sumichrast, R.T., Russell, R.S., Taylor, B.W., 1992. A comparative analysis of sequencing procedures for mixed-model assembly lines in a just-in-time production system. International Journal of Production Research 30, 199–214.
Tamura, T., Long, H., Ohno, K., 1999. A sequencing problem to level part usage rates and work loads for a mixed-model assembly line with a bypass subline. International Journal of Production Economics 60–61, 557–564.
Wantuck, K.A., 1989. Just in Time for America. KWA Media, Smithfield, MI.
White, R.E., Ruch, W.A., 1990. The composition and scope of JIT. Operations Management Review 7, 9–18.
White, R.E., 1993. An empirical assessment of JIT in US manufacturers. Production and Inventory Management Journal 34, 38–42.
Xiaobo, Z., Ohno, K., 1994. A sequencing problem for a mixed-model assembly line in a JIT production system. Computers and Industrial Engineering 27, 71–74.
Xiaobo, Z., Zhou, Z., Asres, A., 1999. A note on Toyota’s goal of sequencing mixed models on an assembly line. Computers & Industrial Engineering 36, 57–65.
Yano, C., Bolat, A., 1989. Survey, development and applications of algorithms for sequencing paced assembly lines. Journal of Manufacturing and Operations Management 2, 172–198.
"Transfer Implementation in Congestion Games"
Itai Arieli\(^1\)
Discussion Paper No. 9-14
October 2014
I would like to thank The Pinhas Sapir Center for Development at Tel Aviv University for their financial support.
We thank two anonymous referees for helpful comments.
\(^1\) Itai Arieli – Faculty of Industrial Engineering and Management, Technion-Israel Institute of Technology. Email: email@example.com
We study an implementation problem faced by a planner who can influence selfish behavior in a roadway network. It is commonly known that Nash equilibrium does not necessarily minimize the total latency on a network and that levying a tax on road users that is equal to the marginal congestion effect each user causes implements the optimal latency state. This holds, however, only under the assumption that taxes have no effect on the utility of the users. In this paper we consider taxes that satisfy the budget balance condition and that are therefore obtained using a money transfer among the network users. Hence at every state the overall taxes imposed upon the users sum up to zero. We show that the optimal latency state can be guaranteed as a Nash equilibrium using a simple, easily computable transfer scheme that is obtained from a fixed matrix.
In addition, the resulting game remains a potential game and the levied tax on every edge is a function of its congestion.
1 Introduction
Roadway congestion is a source of enormous economic costs. The underlying assumption that users are selfish and aim to minimize their own latency time yields a Nash equilibrium that in general does not minimize the total latency, and may even be far from doing so. This inefficiency motivates the construction of economic incentives that improve efficiency in equilibrium and has therefore given rise to a large body of literature that studies the influence of road taxation on latency time.
As a first illustration of these ideas consider the classical Braess’s paradox (see Figure 1). One unit of traffic commutes from the initial node $s$ to the terminal node $t$. Each edge of the network in Figure 1 is labelled with its latency function, giving the delay incurred by traffic on the link as a function of the amount of traffic that uses the link. At Nash equilibrium, all traffic uses the route $s \rightarrow v \rightarrow w \rightarrow t$ and experiences two units of latency. On the other hand, if one unit of tax is levied on the edge $(v,w)$, then in the Nash equilibrium of the resulting game half of the traffic uses each of the routes $s \rightarrow v \rightarrow t$ and $s \rightarrow w \rightarrow t$. In particular, the route $s \rightarrow v \rightarrow w \rightarrow t$ has a latency of 1 and a cost of 2 with respect to this flow, and hence does not offer an attractive alternative to the users. In this new flow at Nash equilibrium, everyone experiences a latency of $3/2$ and no taxes are paid. This outcome is clearly superior to the original flow at Nash equilibrium in the absence of taxes since $3/2$ is the minimal total latency for this network. The example highlights the known paradox that taxing some of the routes may improve efficiency in equilibrium and lead to a superior outcome in terms of the total latency time.
An old, related idea that was introduced informally by Pigou [7], and implemented formally by Beckmann et al. [1] and more recently by Sandholm [11], is the principle of *marginal cost pricing*. To better understand this idea, consider an implementation problem faced by a social planner who would like to implement the optimal latency flow among a continuum population of network users. Under the marginal cost pricing principle each user pays an additional tax that is equal to the marginal delay he causes to the other users. If the users of the roadway network take into consideration both the imposed tax and the
delay on each route, then the proposed tax scheme yields a Nash equilibrium that minimizes the total latency time.
Sandholm [12] (see also [11]) provides a family of tax schemes that is based on marginal cost pricing. Each member of this family *evolutionarily implements* the optimal latency state under a wide range of evolutionarily based behavior adjustment processes, called *revision protocols*, employed by the users. These tax schemes alter the underlying game by pricing every edge at every given time as a function of the local congestion on that particular edge. Hence in practice the planner does not need to know the precise choice of routes by the users. Moreover, the pricing schemes introduced by Sandholm change the potential function of the game to be the total latency function. Therefore the optimal latency flow is globally stable under any reasonable adjustment process employed by the users.
A recent paper by Fleischer et al. [5] provides a general existence result of a tax scheme (or tolls) that implements efficient behavior in equilibrium for a large class of congestion games. Further developments on pricing networks can be found in [2], [4], [6], [13]. As pointed out by Cole et al. [3], these results ignore the disutility caused to the users due to the levied tax. Cole et al. [3] demonstrate that if one takes into consideration the negative effect of the tax in the marginal cost pricing, then the Nash equilibrium of the original game is always superior in terms of latency for every network with linear latency functions. In addition, Cole et al. show that if one takes into consideration the tax levied upon the users when calculating optimal latency, then finding
optimal taxation is computationally hard (see Theorem 6.2 in [3]).
To highlight further the limitation of taxation, consider the example depicted in Figure 2. In the unique Nash equilibrium all traffic uses the upper route with a total latency of 1, and the optimal flow is obtained when half of the users use the upper route and the other half use the lower route. This yields a total latency of \( \frac{1}{4} + \frac{1}{2} = \frac{3}{4} \). Under marginal cost pricing, an additional tax of \( 2x \) is levied on the upper route, which implements \((\frac{1}{2}, \frac{1}{2})\) as the unique equilibrium. If, however, one takes the tax levied into consideration when calculating the total latency time, then the resulting latency at equilibrium is \( \frac{1}{2} \cdot (\frac{3}{2}) + \frac{1}{2} \cdot 1 = \frac{5}{4} \). Furthermore, it can easily be verified that any tax scheme that involves only negative payments does not improve latency in equilibrium.
As an alternative consider a (progressive) tax scheme that prices the upper route by a fixed amount of \( \frac{1}{4} \) and benefits the lower route by a fixed amount of \( \frac{1}{4} \). According to this tax scheme the optimal latency of \((\frac{1}{2}, \frac{1}{2})\) is obtained as a unique equilibrium. Moreover, this tax scheme satisfies the *budget balance condition* in equilibrium. That is, the overall tax paid by the users is 0; the tax that is levied on the users of the upper route subsidizes the benefit given to the users in the lower route, where the users of the upper route pay \( \frac{1}{4} \) to the users of the lower route. Hence under the proposed tax scheme the budget balance condition implies that at equilibrium all taxes are obtained by a money transfer among the network users, and there is no additional payment from the planner. The budget balance condition is only guaranteed in equilibrium, whereas off equilibrium it might not hold.
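The arithmetic behind this example is easy to verify directly; a minimal sketch of the two-route network of Figure 2 under the constant \( \pm\frac{1}{4} \) transfer described above (variable names are illustrative):

```python
# Figure 2: upper route latency l_u(x) = x, lower route latency l_l(x) = 1.
# Constant transfer: upper users pay 1/4, lower users receive 1/4.
def route_costs(x_upper):
    upper = x_upper + 0.25   # latency plus tax
    lower = 1.0 - 0.25       # latency minus subsidy
    return upper, lower

up, lo = route_costs(0.5)
assert up == lo == 0.75                    # (1/2, 1/2) is an equilibrium
assert 0.5 * 0.25 + 0.5 * (-0.25) == 0.0   # budget balance at x*
print("total latency:", 0.5 * 0.5 + 0.5 * 1.0)  # 0.75, the optimum
```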
Here we consider taxes that satisfy the budget balance condition and thus are obtained using a money transfer among the network users. Our goal is to study ways to implement the optimal latency flow as a Nash equilibrium using a
simple transfer scheme. In particular, we study transfer schemes obtained from a fixed transfer matrix that determines the amount of money transferred between every pair of edges. In addition, we are interested in implementing a transfer scheme for which the resulting game is a potential game. Thus, as in Sandholm [12], the optimal latency flow would be globally stable when the users apply any reasonable myopic adjustment learning rule.
Our main result demonstrates the construction of a transfer matrix for which the resulting game retains a potential function such that its unique equilibrium minimizes latency time. Moreover, the fixed transfer matrix that we construct has the property that the tax levied on every edge solely depends on its congestion and may be calculated in time that is polynomial in the number of edges.\footnote{In contrast, Cole et al. [3] demonstrate that when only fixed positive taxes are allowed, calculating an approximately optimal tax scheme is NP hard.}
2 Model
Consider a directed graph $G = (V,E)$ with a source $s \in V$ and a sink $t \in V$. Denote the set of simple $s-t$ routes in $G$ by $\mathcal{P}$, which is assumed to be nonempty. We allow parallel edges but we assume that $G$ has no cycles. Consider one unit of traffic wishing to travel from $s$ to $t$. A flow over the graph $G$ is a probability distribution $x = \{x_i\}_{i \in \mathcal{P}} \in Y = \Delta(\mathcal{P})$ indexed by $s-t$ routes, with $x_i$ representing the proportion of traffic using route $i$ as the chosen route from $s$ to $t$.
For a route $i \in \mathcal{P}$ let $\Phi_i$ be the set of edges that comprise $i$. A flow on routes induces a unique flow on edges, defined as a vector $\{x_e\}_{e \in E}$ where $x_e = \sum_{i : e \in \Phi_i} x_i$ represents the congestion of edge $e$. We note that a flow on edges may correspond to many different flows on routes. Let $X$ be the set of all possible flows on edges.\footnote{We note that $X$ corresponds to the set of all non-negative weights $\{x_e\}_{e \in E}$ such that the outflow from node $s$ and the inflow to node $t$ are 1, and for any other node $v$ the inflow to $v$ equals the outflow from $v$.}
A congestion game $C = (G,(s,t),(l_e)_{e \in E})$ over $G$ comprises a nonnegative,
continuous, nondecreasing latency function $l_e$ for each edge $e$. $l_e$ describes the delay incurred by traffic on edge $e$ as a function of the congestion $x_e$. The latency of a route $i \in P$ with respect to a flow $x \in X$ is then given by $F_i(x) = \sum_{e \in \Phi_i} l_e(x_e)$. We measure the quality of a flow by its total latency $L(x)$, defined by $L(x) = \sum_{i \in P} F_i(x)x_i$, or, equivalently, by $L(x) = \sum_{e \in E} l_e(x_e)x_e$. We will call an edge flow that minimizes $L(\cdot)$ optimal. Such a flow always exists since $X$ is a compact set and $L(\cdot)$ is a continuous function on $X$. We shall assume that $L(\cdot)$ is strictly convex; hence, it has a unique minimizer over $X$.\footnote{This standard assumption is guaranteed, for example, whenever $x_e l''_e(x_e) + 2l'_e(x_e) > 0$ for every $e \in E$ and $x_e \in [0,1]$.}
A tax scheme $\tau = \{\tau_e\}_{e \in E}$ is a set of functions to be placed on the edges of the network $G$ such that $\tau_e : X \to \mathbb{R}$, where $\tau_e(x)$ is the tax levied on the edge $e$ when the flow is $x$. We further assume that all agents trade time and money equally: avoiding one unit of latency time is worth one unit of money. Denote the game obtained from the tax scheme $\tau$ by $F^\tau$. Under the tax scheme $\tau$, when the flow is $x$, a route $i$ incurs a total cost of
$$F^\tau_i(x) = \sum_{e \in \Phi_i} \left[ l_e(x_e) + \tau_e(x) \right].$$
We say that the tax scheme $\tau$ satisfies the budget balance condition at $x \in X$ if
$$\sum_{e \in E} \tau_e(x)x_e = 0.$$
Under the budget balance condition the taxes levied at state $x$ are obtained by a money transfer among the network users, which means that overall tax money is neither wasted nor invested by the planner at state $x$. We shall focus on a tax scheme that satisfies the budget balance condition.
For the game in Figure 2 we show that one can find a tax scheme $\tau = \{\tau_e\}_{e \in E}$ such that the $\tau_e$ are constant, the optimal latency $x^*$ comprises a unique equilibrium of $F^\tau$, and $\tau$ satisfies the budget balance condition at $x^*$. The first question that naturally arises is whether one can always find a tax scheme such that the optimal flow is realized in an equilibrium at which the budget balance condition holds. Our first observation demonstrates that this is in fact possible. We state this simple preliminary result.\footnote{The proof of Lemma 1 is presented as part of the proof of our Main Theorem in Section 3.}
Lemma 1. Let \((G, (s, t), (l_e)_{e \in E})\) be a congestion game. There exist real numbers \(\tau = (\tau_e)_{e \in E}\) (the tax levied on every edge \(e\) is constant and equals \(\tau_e\)) such that the unique equilibrium \(x^*\) of \(F^\tau\) is the unique optimal latency flow and the tax scheme \(\tau\) satisfies the budget balance condition at \(x^*\):
\[
\sum_{e \in E} x_e^* \tau_e = 0.
\]
(1)
Lemma 1 shows that one can always have a tax scheme such that in equilibrium the tax comprises only a money transfer among network users. The problem with this approach can best be understood by considering the framework of Sandholm [12]. In his model a myopic adjustment process is implemented by the users in light of new information on the congestion of an alternative route. When these dynamical processes are considered, Lemma 1 does not provide a satisfactory answer since throughout the learning process, in non-equilibrium states, the taxes that users pay do not necessarily satisfy the budget balance condition. In some states money will be wasted or, alternatively, the planner will have to invest money in order to implement the tax scheme.
We consider next a tax scheme that is based solely on a money transfer among network users. Formally, a transfer scheme is defined as follows.
Definition 1. Let \(C = (G, (s, t), (l_e)_{e \in E})\) be a congestion game. Let \(K\) be the set of ordered pairs of distinct edges, that is, \(K = \{(e, f) : e, f \in E \text{ and } e \neq f\}\). A transfer scheme is a Lipschitz continuous function \(Q : X \to \mathbb{R}_+^K\) such that \(Q_{ef}(x)\) represents the transfer of money from users who are using edge \(e\) to users who are using edge \(f\) at state \(x\).
The new cost \(F_e^Q(x)\) of using edge \(e\) at state \(x\) is determined as follows:
\[
F_e^Q(x) = l_e(x_e) + \sum_{f \neq e} \left( Q_{ef}(x) - Q_{fe}(x) \cdot \frac{x_f}{x_e} \right).
\]
(2)
The first term in the sum, \(Q_{ef}(x)\), represents the transfer of money from the users of edge \(e\) to the users of edge \(f\) at state \(x\); the second term represents the transfer from the \(f\) users to the \(e\) users. Given a transfer value \(Q_{fe}(x)\), the actual transfer from the \(f\) to the \(e\) users depends on \(x_f\) and \(x_e\). The expression \(x_f \cdot Q_{fe}(x)\) represents the actual amount of money collected from the \(f\) users to be transferred to the \(e\) users. This amount is evenly distributed
among the $e$ users, and hence the payoff that an $e$ user obtains from the $f$ users is $Q_{fe}(x) \cdot \frac{x_f}{x_e}$. We use the convention that if $Q_{fe}(x)$ is zero or if both $x_e$ and $x_f$ are zero there are no transfers from $f$ to $e$. Note that the definition of transfer scheme above allows infinite payoffs for the cases $x_f > 0$, $x_e = 0$, and $Q_{fe}(x) > 0$. In the Appendix we shall consider the case of a bounded transfer (see also the Remark at the end of the proof of Theorem 1).
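For interior states, Eq. (2) translates directly into code. The sketch below is illustrative (the names are ours, \(Q\) is stored as a dense matrix, and every \(x_e > 0\) is assumed):

```python
import numpy as np

def edge_costs(latencies, Q, x):
    """Per-edge cost F_e^Q(x) of Eq. (2) at an interior state (all x_e > 0).

    latencies -- list of callables l_e(x_e)
    Q         -- n x n array, Q[e, f] = per-user transfer from e- to f-users
    x         -- edge congestion vector
    """
    n = len(x)
    F = np.empty(n)
    for e in range(n):
        paid = sum(Q[e, f] for f in range(n) if f != e)
        recv = sum(Q[f, e] * x[f] for f in range(n) if f != e) / x[e]
        F[e] = latencies[e](x[e]) + paid - recv
    return F
```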
Let $F^Q$ be the resulting game from using transfer scheme $Q$. That is, the payoff to a user of route $i$ at state\footnote{int$(X)$ corresponds to the set of flows $x \in X$ with $x_e > 0$ for every edge $e$.} $x \in \text{int}X$ is determined as follows:
$$F^Q_i(x) = \sum_{e \in \Phi_i} F^Q_e(x).$$
It follows directly from the definition that at every state $x \in \text{int}(X)$,
$$\sum_{e \in E} x_e F^Q_e(x) = \sum_{e \in E} x_e F_e(x). \quad (3)$$
This implies that the total latency time is not changed in the presence of the transfer scheme $Q$. Therefore, $F^Q$ satisfies the budget balance condition at every state $x \in \text{int}(X)$. When the transfer scheme of Definition 1 is independent of the current distribution of routes we shall call it \textit{simple}. That is, a transfer scheme is simple when $Q_{ef}(x) = Q_{ef}$ is independent of $x \in X$ for every pair $(e, f) \in K$. We shall henceforth focus solely on simple transfer schemes.
By incorporating a transfer scheme we might lose some of the natural properties of the underlying congestion game. Most importantly, it might no longer be true that the resulting game $F^Q$ admits a potential function. Having a potential function is particularly important in the dynamic setup considered by Sandholm [12], where the players update their choice of routes myopically in accordance with some evolutionary adaptive process. In this framework the existence of a potential function guarantees that for any reasonable adaptive process from any interior initial conditions the flow converges to an equilibrium. In addition, with the presence of a simple transfer scheme $Q$, it might hold that the actual cost of using an edge $e$ depends on the entire flow $x$ and not just on the congestion $x_e$. Therefore, another natural property to consider is that the cost of using edge $e$ solely depends on $x_e$ and not on the entire flow $x$.
For a congestion game $C$, let $n = |E|$ be the number of distinct edges on $G$. We say that the game $F^Q$ resulting from the transfer scheme $Q$ admits a potential function if there exists a function $h : \mathbb{R}^n \to \mathbb{R}$ that is finite and differentiable on $\text{int}(X)$ such that for every $x \in \text{int}X$, and every route\footnote{Note that $\frac{\partial h}{\partial x_i}(x) = \sum_{e \in \Phi_i} \frac{\partial h}{\partial x_e}(x)$.} $i$,
$$\frac{\partial h}{\partial x_i}(x) = F^Q_i(x).$$
Our main goal is therefore to define a transfer scheme $Q$ that satisfies the following properties:
1. $F^Q$ attains a unique equilibrium $x^*$ that corresponds to the optimal flow.
2. $F^Q$ admits a potential function.
3. For every edge $e$ and state $x \in \text{int}(X)$, the payoff $F^Q_e(x)$ depends solely on $x_e$.
4. The matrix $Q$ is computable in a time that is polynomial in the number of edges.
Our Main Theorem asserts that:
**Theorem 1.** For every congestion game there exists a simple transfer scheme $Q$ that satisfies the above properties.
Our result demonstrates the existence of a uniform transfer scheme such that the only equilibrium of the resulting game lies in the point that minimizes the latency time. The fact that we can have a transfer scheme such that the resulting game remains a potential game means that, as in Sandholm [12], for any reasonable adjustment processes of the users the unique optimal latency state is globally stable.
3 Proof of Theorem 1 and Lemma 1
Let $\{x^*_e\}_{e \in E}$ be the unique optimal flow in $X$. That is, it is the point that satisfies:
$$\sum_{e \in E} x^*_e l_e(x^*_e) \leq \sum_{e \in E} x_e l_e(x_e) \quad \forall x \in X.$$
There exists a constant tax scheme $\tau = \{\tau_e\}_{e \in E}$ such that $\tau_e \geq 0$ for every $e$, and the optimal latency $x^*$ is the unique equilibrium of the game $F^\tau$. As an example of such a tax scheme we can take $\tau_e = x_e^* l_e'(x_e^*)$, the marginal cost price. Let $E'$ be the set of edges $e$ for which $x_e^* > 0$, and assume that for every edge $e \in E \setminus E'$ it holds that $\tau_e = a$ for some common constant $a$. This can clearly be obtained by increasing the tax on some unused edges.
Let $(e_1, \ldots, e_k)$ be the edges pointing out of the source node $s$. We note that every user must use a unique edge $e_j$ for some $1 \leq j \leq k$. That is, for every route $i$ there exists a unique $j$ such that $e_j \in \Phi_i$. Since $\tau_e \geq 0$ we have that $\sum_{e \in E} \tau_e x_e^* \geq 0$. For every $h \geq 0$ define the tax scheme $\{\tau_e^h\}_{e \in E}$ as follows:
$$
\tau_e^h = \begin{cases}
\tau_e - h & \text{if } e = e_j \text{ for some } 1 \leq j \leq k, \text{ and } e \in E', \\
\tau_e & \text{if } e \neq e_j \text{ for } j = 1, \ldots, k \text{ or } e \in E \setminus E'.
\end{cases}
$$
Note that $x^*$ is still an equilibrium of $F^{\tau^h}$ for every $h \geq 0$. To see this note that the cost of routes that are used by a positive fraction of the population in $x^*$ is smaller by exactly $h$ in $F^{\tau^h}$ compared to $F^\tau$. And, the cost of an unused route does not decrease by more than $h$ in $F^{\tau^h}$.
Let $h_0 \geq 0$ be such that\footnote{Note that $h_0 = \sum_{e \in E} x_e^* \tau_e$.}
$$
\sum_{e \in E} \tau_e^{h_0} x_e^* = 0. \tag{4}
$$
Therefore the tax scheme $\{\tau_e^{h_0}\}_{e \in E}$ satisfies the budget balance condition at $x^*$, and hence it is obtained by a money transfer among the network users. Based on this we shall define a simple transfer scheme that satisfies the required properties. For every $e \in E$, let $\kappa_e = \tau_e^{h_0}$. For every $e \in E'$, define a vector $v^e \in \mathbb{R}^n$ as follows:
$$
v_f^e = \begin{cases}
-\frac{1-x_e^*}{x_e^*} & \text{if } f = e, \\
1 & \text{if } f \neq e.
\end{cases}
$$
Let $M \subset \mathbb{R}^n$ be the following subspace:
$$
M = \{y \in \mathbb{R}^n | \sum_{e \in E} y_e x_e^* = 0 \text{ and } y_e = y_f \text{ for every } e, f \in E \setminus E'\}.
$$
Note that by definition the vector $\kappa = (\kappa_e)_{e \in E}$ and all the vectors $v^e$ for $e \in E'$ lie in $M$. Note further that the set $\{v^e\}_{e \in E'}$ is a spanning set of $M$. It can be easily verified that one can write any vector $y \in M$ as a nonnegative linear combination of the vectors $\{v^e\}_{e \in E'}$. Hence in particular there exist $\{q_e\}_{e \in E'} \subset \mathbb{R}_+$ such that $\sum_{e \in E'} q_e v^e = \kappa$. Let $q_e = 0$ for every $e \in E \setminus E'$. Define the matrix $Q$ as follows: $Q_{ef} = q_f$ for every $(e, f) \in K$. We shall show that the matrix $Q$ has the desired properties. By equation (2) the game $F^Q$ is defined as follows:
$$F^Q_e(x) = l_e(x_e) + \sum_{f \neq e} [q_f - (q_e \cdot \frac{x_f}{x_e})] = l_e(x_e) + \sum_{f \neq e} q_f - q_e \cdot \frac{1 - x_e}{x_e}$$
$$= l_e(x_e) + \sum_f q_f - \frac{q_e}{x_e}.$$
Hence $F^Q_e(x)$ is a function of $x_e$. By definition, $F^Q$ has the following potential function:
$$h(x) = \sum_{e \in E} \left[ \int_0^{x_e} l_e(s) \, ds + x_e \Big( \sum_f q_f \Big) - q_e \ln(x_e) \right].$$
To see this note that for every route $i \in P$,
$$\frac{\partial h}{\partial x_i}(x) = \sum_{e \in \Phi_i} [l_e(x_e) + \sum_f q_f - \frac{q_e}{x_e}] = \sum_{e \in \Phi_i} F^Q_e(x) = F^Q_i(x).$$
To see that $x^*$ is an equilibrium for $F^Q$ note that for every $e \in E$ such that $x^*_e > 0$ the payoff $F^Q_e(x^*_e)$ can be written as follows:
$$F^Q_e(x^*_e) = l_e(x^*_e) + \sum_{f \neq e} q_f - \frac{q_e(1 - x^*_e)}{x^*_e}$$
$$= l_e(x^*_e) + (\sum_f q_f v^f)_e \quad (5)$$
$$= l_e(x^*_e) + \kappa_e = F^\kappa_e(x^*_e). \quad (6)$$
Equality (5) follows from the definition of the vectors $\{v^f\}_{f \in E'}$, and equality (6) follows from the definition of the nonnegative numbers $\{q_f\}_{f \in E}$. For every $e \in E \setminus E'$ such that $x^*_e = 0$ it follows from the definition of the game $F^Q$ and from the fact that $q_e = 0$ that
$$F^Q_e(x^*_e) = l_e(0) + \sum_f q_f = l_e(0) + a = F^\kappa_e(x^*_e).$$
Hence the payoff $F^Q_i(x^*)$ to the $i$ user is equal to the payoff $F^\kappa_i(x^*)$ for every $i \in P$. By construction $x^*$ is an equilibrium of $F^\kappa$ and so it is an equilibrium of
$F^Q$. Note that the potential function $h(x)$ is convex and as such has a unique minimizer over $X$. Since any equilibrium of the game $F^Q$ is a local minimizer of the potential function, one can deduce that $x^*$ is the unique equilibrium of $F^Q$.
It remains to show that the matrix $Q$ can be calculated in polynomial time. First note that an optimal tax scheme $\tau$ such that $F^\tau$ has $x^*$ as its unique equilibrium can be obtained as the solution of a linear programming problem. The number of constraints is polynomial in the size of the graph $G$ and hence such a $\tau$ can be computed in polynomial time (see Cole et al. [4] for details). The tax scheme $\kappa = \tau^{h_0}$ can obviously be obtained from $\tau$ in linear time. Finally, the transfer matrix $Q$ can be obtained from $\kappa$ by solving another linear programming problem whose number of constraints is polynomial in the size of $G$. This concludes the proof of our Main Theorem.
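As a numerical illustration of the construction, the weights \(q\) solving \(\sum_{e \in E'} q_e v^e = \kappa\) can also be recovered with a nonnegative least-squares solve standing in for the linear program; a minimal sketch (illustrative names, SciPy assumed available):

```python
import numpy as np
from scipy.optimize import nnls

def transfer_weights(kappa, x_star, used):
    """Solve sum_{e in E'} q_e v^e = kappa with q_e >= 0 (q_e = 0 off E').

    kappa  -- balanced tax vector of length n (sums to 0 against x_star)
    x_star -- optimal edge flow; used -- indices e with x_star[e] > 0 (E')
    """
    n = len(kappa)
    V = np.ones((n, len(used)))          # column for e: v^e_f = 1 for f != e
    for col, e in enumerate(used):
        V[e, col] = -(1.0 - x_star[e]) / x_star[e]
    q_used, residual = nnls(V, np.asarray(kappa, float))
    q = np.zeros(n)
    q[list(used)] = q_used               # simple scheme: Q[e, f] = q[f]
    # residual ~ 0 when kappa is a nonnegative combination of the v^e
    return q, residual
```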
**Remark 3.1.** As noted, the payoffs in the game $F^Q$ that results from the transfer scheme $Q$ introduced in Theorem 1 may be unbounded on int$(X)$ and infinite on the boundaries of $X$. In particular, when $q_e > 0$ the payoff from using edge $e$ goes to $-\infty$ as $x_e$ goes to 0. A natural question is whether one can redefine the game in such a way that makes both the payoffs and the potential function bounded over all $X$. We claim that to some extent this is indeed possible.
To do so, consider a bounded implementation mechanism for the same simple transfer matrix guaranteed by Theorem 1, under which the transfer value to every edge $e$ is bounded by $M > 0$. That is, assume that every edge $e$ cannot receive a money transfer that is greater than $M$ from the other users at every state $x$. When such a bound is imposed, the resulting game has a bounded Lipschitz continuous payoff function that is well defined over all $X$, is still a potential game with a potential function that is differentiable over all $X$, and the tax levied on every edge $e$ is still a function solely of the congestion on this edge. In addition, for every large enough $M$ the optimal latency flow $x^*$ is still a unique equilibrium of the resulting game. The only property that is violated at some states is the budget balance condition. I.e., for some states close to the boundary, money can be wasted and the tax may not be obtained
by a money transfer among the network users. However, we claim that for every reasonable revision protocol implemented by the users the tax that is wasted during the learning process gets arbitrarily close to zero as $M$ grows large. That is, controlling for the value of $M$ can guarantee that the money being wasted throughout the learning process becomes negligible. We outline the above argument in the Appendix.
4 Conclusion
In this work we propose a tax scheme that is based on a money transfer among network users. We demonstrate the existence of an easily computable transfer matrix for which the resulting game has the optimal latency state as its unique equilibrium and admits a potential function. Moreover, similarly to marginal cost pricing, the tax levied on every edge $e$ depends solely on the congestion $x_e$.
A Appendix: Bounded Implementation
Let $Q$ be the matrix guaranteed by Theorem 1 and let $F^{Q,M}$ be the game obtained when the transfer to any edge $e$ is bounded by $M > 0$. The new payoff from using edge $e$ is then,
$$F_e^{Q,M}(x) = l_e(x_e) + \sum_{f \neq e} q_f - \min\{ (q_e \frac{1-x_e}{x_e}), M \}. $$
We note that the function $F_e^{Q,M}$ is Lipschitz continuous for every edge $e$. This in turn determines a Lipschitz continuous function for every route $i \in P$.
For every edge $e$ such that $q_e > 0$, let $x_e^M = \frac{q_e}{M+q_e}$ and let $c_e = x_e^M(q_e + M) - q_e \ln(x_e^M)$. For every such $e$ define the function $\eta_e(x_e)$ as follows:
$$\eta_e(x_e) = \begin{cases}
x_e(\sum_f q_f) - q_e \ln(x_e) & \text{if } x_e \geq x_e^M \\
x_e(\sum_{f \neq e} q_f) - x_e M + c_e & \text{if } x_e < x_e^M.
\end{cases} \quad (7)$$
Note that by the choice $c_e$ the function $\eta_e(x_e)$ is continuous and differentiable. For any edge $e$ for which $q_e = 0$ let $\eta_e(x_e) = x_e(\sum_{f \neq e} q_f)$. The resulting
differentiable potential function $h^M$ of the game $F^{Q,M}$ may be defined by the functions $\{\eta_e(x_e)\}_{e \in E}$ as follows:
$$h^M(x) = \sum_{e \in E} \left[ \int_0^{x_e} l_e(s) \, ds + \eta_e(x_e) \right].$$
Let $X^M = \{ x \in X : \forall e \in E, \ x_e \geq x_e^M \}$. Note that $X^M$ approaches $X$ (in the Hausdorff distance) as $M$ grows. Since the functions $\int_0^{x_e} l_e(s)\,ds$ and $\eta_e(x_e)$ are weakly convex, it follows that the potential $h^M$ is a weakly convex function and is strictly convex over $X^M$. Hence, as in Theorem 1, there exists $M_0 > 0$ such that the optimal latency $x^*$ is the unique equilibrium of $F^{Q,M}$ for every $M \geq M_0$.
The game $F^{Q,M}$ no longer satisfies the budget balance condition. That is, if we let $\tau_e^M(x_e) = \sum_{f \neq e} q_f - \min\{(q_e \frac{1-x_e}{x_e}), M\}$ be the levied tax on edge $e$ in the game $F^{Q,M}$, then for $x \not\in X^M$ it holds that,\footnote{Note that $V^M(x)$ is uniformly bounded by $n \sum_{e \in E} q_e$ for every $M$ and $x$.}
$$V^M(x) = \sum_{e \in E} x_e \tau_e^M(x_e) > 0.$$
The budget balance condition does hold for any $x \in X^M$. Let $|\mathcal{P}| = m$. A \textit{revision protocol} $\rho$ is a Lipschitz function,
$$\rho : X \times \mathbb{R}^m \to \mathbb{R}_+^{m \times m}.$$
For each vector payoff $\pi \in \mathbb{R}^m$ over the routes, every state $x$, and all pairs of distinct routes $i, j \in \mathcal{P}$, the function $\rho_{ij}(x, \pi)$ determines the switching rate of revision from route $i$ to route $j$.\footnote{See Chapter 4.1.2 in [9] for details.} Any revision protocol and initial state $y \in \Delta(\mathcal{P})$ determine a differential equation $z : \mathbb{R}_+ \to Y$ that describes the learning process as follows:
$$\forall i \in \mathcal{P}, \quad \dot{z}_i(t) = \sum_{j \in \mathcal{P}} \left[ z_j(t)\, \rho_{ji}\big(z(t), F^{Q,M}(z(t))\big) - z_i(t)\, \rho_{ij}\big(z(t), F^{Q,M}(z(t))\big) \right]. \quad (8)$$
$\{z(t)\}_{t \geq 0}$ naturally defines a flow on edges $\{x(t)\}_{t \geq 0}$.
Consider,
$$\int_0^\infty V^M(x(t)) \, dt. \quad (9)$$
This expression represents the tax that is lost during the learning process.
Under mild conditions on the revision protocol the flow $\{x(t)\}_{t \geq 0}$ defined by
equation (8) converges to the equilibrium $x^*$ of $F^{Q,M}$ for every $M \geq M_0$.\footnote{This indeed holds for many studied learning dynamics such as pairwise comparisons, the projection dynamic, and better-reply dynamics, as well as whenever the potential function serves as a Lyapunov function for the learning process described in equation (8). Again, see Chapter 7.1 in [9].} Therefore, the flow $\{x(t)\}_{t \geq 0}$ must enter $X^M$ in a bounded time, independently of the initial conditions. Since $X^M$ approaches $X$ for large $M$, this bounded time must approach zero as $M$ grows. Since $V^M(x)$ is bounded and zero on $X^M$, it must be the case that (9) goes to zero as $M$ increases.
\begin{thebibliography}{9}
\bibitem{Beckmann} M. Beckmann, C.B. McGuire, C.B. Winsten, \textit{Studies in the Economics of Transportation}, Yale University Press, 1956.
\bibitem{Bergendorff} P. Bergendorff, D.W. Hearn, M.V. Ramana, Congestion toll pricing of traffic networks, in: P.M. Pardalos, D.W. Hearn, W.W. Hager (Eds.), \textit{Network Optimization}, Springer, 1997, pp. 51-71.
\bibitem{Cole} R. Cole, Y. Dodis, T. Roughgarden, How much can taxes help selfish routing? \textit{Journal of Computer and System Sciences} 72, 444-467, 2006.
\bibitem{Cole2} R. Cole, Y. Dodis, T. Roughgarden, Pricing network edges for heterogeneous selfish users. \textit{ACM Symposium on Theory of Computing}, 521-530, 2003.
\bibitem{Fleischer} L. Fleischer, K. Jain, M. Mahdian, Tolls for heterogeneous selfish users in multicommodity networks and generalized congestion games, in: \textit{Proceedings of the 45th Symposium on Foundations of Computer Science}, 2004, pp. 277-285.
\bibitem{Hearn} D.W. Hearn, M.V. Ramana, Solving congestion toll pricing models, in: P. Marcotte, S. Nguyen (Eds.), \textit{Equilibrium and Advanced Transportation Modeling}, Kluwer Academic Publishers, Dordrecht, 1998, pp. 109-124.
\bibitem{Pigou} A. C. Pigou, \textit{The Economics of Welfare}, Macmillan, 1920.
\bibitem{Tardos} T. Roughgarden and E. Tardos. How bad is selfish routing? \textit{Journal of the ACM} 49(2), 236-259, 2002.
\bibitem{Sandholm4} W.H. Sandholm, \textit{Population Games and Evolutionary Dynamics}, MIT Press, 2010.
\bibitem{Sandholm3} W.H. Sandholm, Pigouvian pricing and stochastic evolutionary implementation. \textit{Journal of Economic Theory} 132, 367-382, 2007.
\bibitem{Sandholm2} W.H. Sandholm, Negative externalities and evolutionary implementation. \textit{Review of Economic Studies} 72, 885-915, 2005.
\bibitem{Sandholm1} W.H. Sandholm, Evolutionary implementation and congestion pricing. \textit{Review of Economic Studies} 69, 667-689, 2002.
\bibitem{Smith} M.J. Smith, The marginal cost taxation of a transportation network. \textit{Transportation Research Part B} 13(3), 237-242, 1979.
\end{thebibliography}
1 Introduction
Text-to-image generation is a challenging task with many potential applications. Many approaches have been explored in recent years, the majority of which focus on using generative adversarial networks (GANs). One of the biggest challenges in text-to-image generation is ensuring that the generated image is not only visually realistic, but also semantically aligned with the input text; after all, a photo-realistic result that is unrelated to the text does not properly address the task.
In this project, we will address the task of caption-to-image generation both by using variations of ACGANs and by modifying the MirrorGAN model proposed by Qiao et al. [1]. More specifically, we hope to modify the initial embeddings used by both approaches to see if more complex ways of encoding the caption can allow us to produce better images. By improving the parts of our models closely related to semantically aligning the generated image to the input caption, we hope to be able to generate high quality images that clearly correspond to the conditioning text.
2 Related Work
Many GAN variations have been explored for the task of caption-conditioned image generation.
Auxiliary Classifier GANs (ACGANs) [2] are a form of GAN for conditional image synthesis. Instead of providing the discriminator with the class label, as traditionally done in conditional GANs, the discriminator is tasked with predicting the class label. The authors claim that this improves generation performance, since the generator must learn to produce images that are both realistic and recognizable by the discriminator as the correct class.
Xu et al.’s AttnGAN [3] uses an attention mechanism that allows the image generator to focus on different aspects of the text for drawing different regions of the image. In addition, the AttnGAN uses a deep attentional multimodal similarity model (DAMSM) that computes the similarity between the generated image and the input text; this allows the model to ensure the image is not only well generated to seem photorealistic, but also relates well to the text description.
The MirrorGAN of Qiao et al. [1] is similarly composed of three stages. The caption is first turned into a semantic text embedding; the embedding is then fed into cascaded image generators using both word-level and sentence-level attention. Finally, an image captioning model is used to align the caption from the generated image with the input text description. This idea is similar to the DAMSM from the AttnGAN; however, where the DAMSM attempts to compute a text and image embedding that are in the same space, MirrorGAN attempts to directly translate the generated image into a textual equivalent.
In this project, we make use of several different word/sentence embeddings. InferSent [4] is a sentence embedding model developed by Facebook that is claimed to be useful for various downstream tasks. BERT [5] is a language representation model achieving state-of-the-art performance on many tasks, such as measuring the similarity of two sentences. A limitation of BERT, however, is that while it can be used to produce word embeddings, computing sentence similarity requires both sentences to be fed into the model jointly. Sentence-BERT [6] is a modification of BERT that produces semantically meaningful sentence embeddings that can be compared with cosine similarity. The authors claim that Sentence-BERT greatly reduces the computation needed to compare many sentences, since a quadratic number of model invocations is no longer needed.
3 Dataset
We use the CUB-200-2011 birds dataset [7], processed in the same fashion as Zhang et al. for the StackGAN [8]. The dataset consists of a total of 11,788 images in 200 classes of bird species. For our purposes, we separate out 150 classes (altogether 8,855 images) for training and the remaining 50 classes (2,933 images) for testing. Images are cropped to ensure that the bird bounding boxes in each image have an object-image size ratio greater than 0.75, therefore roughly normalizing the size of the bird in each picture.
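As an illustration of this preprocessing step, the sketch below shows one plausible way to implement the crop (our own reconstruction, not the StackGAN authors' code; the function name and the square-crop heuristic are assumptions):

```python
from PIL import Image

def crop_to_bird(img, bbox, target_ratio=0.75):
    """Crop `img` so the bird bounding box (x, y, w, h) fills at least
    `target_ratio` of the crop's larger dimension."""
    x, y, w, h = bbox
    half = int(max(w, h) / target_ratio / 2)        # half-side of square crop
    cx, cy = x + w // 2, y + h // 2                 # center of bounding box
    left, top = max(0, cx - half), max(0, cy - half)
    right = min(img.width, cx + half)
    bottom = min(img.height, cy + half)
    return img.crop((left, top, right, bottom))
```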
Captions are included for these images from Reed et al. [9], collected through the use of Amazon Mechanical Turk. Each image in the dataset is associated with 10 captions.
4 Evaluation
We performed both qualitative and quantitative evaluation on our models.
Qualitatively, we sampled images from our models and manually compared them to the text input, looking at whether the sampled image looks like a bird and matches the caption.
Quantitatively, we used the Inception Score. This metric uses an Inception V3 model (pre-trained on the birds dataset) to classify many generated images; the predictions are then combined to capture both image quality (how clearly the generated image depicts a specific object) and image diversity (whether a wide range of objects was generated). The metric uses the following formula:
$$IS(x) = \exp \left( \mathbb{E}_x \left[ KL\left(p(y|x)||p(y)\right) \right] \right)$$
where $x$ represents the image and $y$ represents the class.
Since the ideal label distribution $p(y|x)$ and ideal marginal distribution $p(y)$ should be very different, with the first having one clear peak and the second being relatively uniform, we use KL divergence to measure how far apart the two distributions are. A high KL divergence means that the two distributions are very different; therefore, the higher the Inception Score, the better.
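For concreteness, here is a minimal sketch of how the score can be computed from classifier outputs (our own illustration; the split-averaging convention is a common practice, not something specified above):

```python
import numpy as np

def inception_score(probs, n_splits=10):
    """probs: (N, C) array of softmax class probabilities p(y|x) from a
    pre-trained classifier, one row per generated image."""
    scores = []
    for chunk in np.array_split(probs, n_splits):
        p_y = chunk.mean(axis=0, keepdims=True)           # marginal p(y)
        kl = chunk * (np.log(chunk + 1e-12) - np.log(p_y + 1e-12))
        scores.append(np.exp(kl.sum(axis=1).mean()))      # exp(E_x[KL])
    return float(np.mean(scores)), float(np.std(scores))
```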
We note that although Inception Score is good for capturing the quality and diversity of generated images, it can’t help us understand how well the generated images semantically match with the input captions. For that, we rely on our qualitative evaluation.
5 ACGAN
The ACGAN (Auxiliary Classifier GAN) is a variant of the traditional GAN architecture where the generator $G$ generates images $X_{fake} = G(c, z)$, where $c$ is the class label one-hot vector and $z$ is random noise (we used 128-dimensional $z$). In our case, $c$ may also be a sentence embedding, such as one from InferSent or BERT. This conditioning input is combined with the random noise and given to the generator, which generates an image. The discriminator then estimates the realism of the image, $P(real = 1|X)$, and the class label distribution, $\hat{P}(C|X)$. Note that even when the generator is given a sentence embedding and not the bird species, the discriminator still tries to predict the bird species only. See the future work section for possible alternatives.
Here, $L_S$ is the log-likelihood of the correct source (real vs. fake) and $L_C$ is the log-likelihood of the correct class:
$$L_S = \mathbb{E}[\log P(real = 1 | X_{real})] + \mathbb{E}[\log P(real = 0 | X_{fake})]$$
$$L_C = \mathbb{E}[\log P(C = c | X_{real})] + \mathbb{E}[\log P(C = c | X_{fake})]$$
The discriminator tries to maximize $L_S + L_C$ while the generator tries to maximize $L_C - L_S$.
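A minimal PyTorch sketch of these objectives follows (our own illustration, assuming a discriminator that outputs a realism logit and class logits; in practice the real-image term of $L_S$ is constant with respect to $G$ and can be dropped from the generator update):

```python
import torch
import torch.nn.functional as F

def acgan_losses(real_logit, fake_logit, real_cls, fake_cls, labels):
    """Returns (discriminator loss, generator loss) for one batch.
    *_logit: (B,) realism logits; *_cls: (B, n_classes) class logits."""
    ones, zeros = torch.ones_like(real_logit), torch.zeros_like(fake_logit)
    # L_S = E[log P(real|X_real)] + E[log P(fake|X_fake)]
    L_S = -(F.binary_cross_entropy_with_logits(real_logit, ones)
            + F.binary_cross_entropy_with_logits(fake_logit, zeros))
    # L_C = E[log P(C=c|X_real)] + E[log P(C=c|X_fake)]
    L_C = -(F.cross_entropy(real_cls, labels)
            + F.cross_entropy(fake_cls, labels))
    d_loss = -(L_S + L_C)   # discriminator maximizes L_S + L_C
    g_loss = -(L_C - L_S)   # generator maximizes L_C - L_S
    return d_loss, g_loss
```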
6 MirrorGAN
As described above, the MirrorGAN is composed of three separate modules: the STEM module, which represents the caption as a text embedding; the GLAM module, which cascades multiple image generation networks using both word-level and sentence-level attention; and the STREAM module, which uses image captioning to regenerate a text description from the generated image.
6.1 Modules
The STEM module, or Semantic Text Embedding Module, consists of a RNN network that takes in a text description and extracts both word embeddings and sentence embeddings. The base STEM module from the original MirrorGAN is a bidirectional GRU with 128 hidden units, producing 256-dimensional embeddings.
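A sketch of such an encoder follows (our own reconstruction from the description above; the variable names and the 300-dimensional word embedding size are assumptions):

```python
import torch
import torch.nn as nn

class STEM(nn.Module):
    """Bidirectional GRU text encoder: concatenated forward/backward
    states give 256-dimensional word and sentence embeddings."""
    def __init__(self, vocab_size, emb_dim=300, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, bidirectional=True,
                          batch_first=True)

    def forward(self, tokens):                        # tokens: (B, T)
        word_emb, h_n = self.gru(self.embed(tokens))  # word_emb: (B, T, 256)
        sent_emb = torch.cat([h_n[0], h_n[1]], dim=1)  # (B, 256)
        return word_emb, sent_emb
```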
The GLAM module, or Global-Local Collaborative Attentive Module, has a structure very similar to AttnGAN. At each stage of the cascading image generators, we use both a word-level and sentence-level attention model. Each model takes in the relevant embedding and visual feature. The embedding $e$ is converted into a common semantic space of visual features using a perceptron layer $U$ and is then multiplied with the input visual feature vector $f$ to get an attention score. An attentive context feature is then computed by taking the inner product of the attention score with the converted word embedding $Ue$. The resulting attentive context features from the two models are concatenated with each other as well as with the input visual feature vector to compose the new visual feature vector.
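The word-level branch might look like the following sketch (our own reading of the description above, with assumed tensor shapes; the sentence-level branch is analogous with $L = 1$):

```python
import torch
import torch.nn.functional as F

def word_attention(f, e, U):
    """f: (B, D, N) visual features at N locations; e: (B, L, E) word
    embeddings; U: nn.Linear(E, D) projecting words into visual space."""
    e_proj = U(e)                                       # (B, L, D)
    scores = torch.bmm(e_proj, f)                       # (B, L, N)
    attn = F.softmax(scores, dim=1)                     # weights over words
    context = torch.bmm(e_proj.transpose(1, 2), attn)   # (B, D, N)
    return context  # concatenated with f (and sentence context) downstream
```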
The STREAM module, or Semantic Text REgeneration and Alignment Module, computes a text description from the generated image; this generated description can then be compared to the original text description in order to semantically align them. The STREAM module in the original MirrorGAN uses a common encoder-decoder framework: the encoder is an Inception V3 network pretrained on ImageNet, while the decoder is an LSTM with 512 hidden units.
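A condensed sketch of such an encoder-decoder (our own illustration; the 256-dimensional feature/word embedding size is an assumption, and Inception V3 expects 299x299 inputs, so generated images must be upsampled first):

```python
import torch
import torch.nn as nn
import torchvision.models as models

class STREAM(nn.Module):
    """Image captioner: Inception V3 encoder feeding an LSTM decoder."""
    def __init__(self, vocab_size, embed=256, hidden=512):
        super().__init__()
        cnn = models.inception_v3(pretrained=True, aux_logits=False)
        cnn.fc = nn.Linear(cnn.fc.in_features, embed)  # new projection head
        self.encoder = cnn
        self.word_embed = nn.Embedding(vocab_size, embed)
        self.decoder = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, captions):
        feat = self.encoder(images).unsqueeze(1)       # (B, 1, embed)
        words = self.word_embed(captions[:, :-1])      # teacher forcing
        h, _ = self.decoder(torch.cat([feat, words], dim=1))
        return self.out(h)                             # (B, L, vocab)
```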
6.2 Loss Functions
Both the generator and the discriminator use a loss based on both visual realism (how realistic a generated image looks) and semantic consistency (how well the image matches the sentence semantics). From this, we
have the following equation for the loss functions of generator $G_i$ and discriminator $D_i$:
$$\mathcal{L}_{G_i} = -\frac{1}{2} E_{I_i \sim p_{I_i}} \log(D_i(I_i)) - \frac{1}{2} E_{I_i \sim p_{I_i}} \log(D_i(I_i, s))$$
$$\mathcal{L}_{D_i} = -\frac{1}{2} E_{I_i^{GT} \sim p_{I_i^{GT}}} \log(D_i(I_i^{GT})) - \frac{1}{2} E_{I_i \sim p_{I_i}} \log(1 - D_i(I_i))$$
$$- \frac{1}{2} E_{I_i^{GT} \sim p_{I_i^{GT}}} \log(D_i(I_i^{GT}, s)) - \frac{1}{2} E_{I_i \sim p_{I_i}} \log(1 - D_i(I_i, s))$$
where $I_i$ is a generated image sampled from distribution $p_{I_i}$ in the $i^{th}$ stage, $I_i^{GT}$ is a real image sampled from distribution $p_{I_i^{GT}}$ in the $i^{th}$ stage and $s$ is the input sentence embedding.
For the generator, we also use a text-semantic reconstruction loss aligning the original text description with the resulting description from the STREAM module. This loss is described as
$$\mathcal{L}_{stream} = -\sum_{t=0}^{L-1} \log p_t(T_t)$$
where $T$ is the text description and $L$ represents the sentence length.
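Assuming a decoder like the STREAM sketch above that emits per-step word logits, this loss is simply the summed negative log-likelihood of the ground-truth caption (again our own illustration):

```python
import torch.nn.functional as F

def stream_loss(logits, captions):
    """logits: (B, L, V) decoder outputs; captions: (B, L) token ids."""
    logp = F.log_softmax(logits, dim=-1)
    # pick out log p_t(T_t) for each ground-truth word, then sum over t
    nll = -logp.gather(2, captions.unsqueeze(-1)).squeeze(-1)  # (B, L)
    return nll.sum(dim=1).mean()
```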
The final objective functions of the generator and discriminator across all $m$ stages are defined below:
$$\mathcal{L}_G = \sum_{i=0}^{m-1} \mathcal{L}_{G_i} + \lambda \mathcal{L}_{stream} \quad , \quad \mathcal{L}_D = \sum_{i=0}^{m-1} \mathcal{L}_{D_i}$$
7 Methods
Due to constraints in both time and compute, we limited our work to generating 64x64 color images. The ground truth bird images were appropriately downsized to match.
7.1 Baselines
For our baselines, we trained ACGAN models conditioned on a one-hot class (bird species) vector and on an InferSent [4] vector, using a pre-trained 4096-dimensional InferSent model from Facebook. The one-hot model addresses a slightly different task (class-to-image generation), but allows us to understand what kind of image quality we can expect to reach. Other than the one-hot models, all other models are provided only with the sentence embedding and not the bird species.
We also use Qiao et al.'s MirrorGAN as a baseline in order to compare the performance of our modified MirrorGAN against the original. Due to our limitations, the GLAM module of our MirrorGAN has only one attention model. We use a publicly available implementation of MirrorGAN on GitHub [10].
Both ACGANs were trained for 2000 epochs using an Adam optimizer with learning rate 0.0002. The MirrorGAN was only trained for 450 epochs due to time limitations, using an Adam optimizer with learning rate 0.0001.
7.2 Experiments
We experimented with several variations of the ACGAN as well as a variation of the MirrorGAN by incorporating BERT embeddings using the pretrained BERT-Base (Uncased) model provided by Devlin et al. [5].
For our first experiment, we converted the input English caption into a series of 768-dimension word embeddings using the pre-trained BERT embeddings. These word embeddings are then averaged to form a sentence embedding, which is given to the ACGAN to train on.
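A sketch of this averaging step, assuming a recent version of the HuggingFace `transformers` library as the interface to the pretrained model (our tooling choice; the report does not fix a library):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def caption_embedding(caption):
    """768-d sentence vector: mean of BERT's final-layer token vectors."""
    inputs = tokenizer(caption, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state   # (1, T, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)   # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)     # (1, 768)
```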
Similarly, our second experiment calculated a sentence embedding based on BERT which is given as input to the ACGAN; however, unlike the first approach, the sentence embeddings are calculated directly from the caption using Reimers and Gurevych’s Sentence-BERT [6], instead of aggregating individual word embeddings.
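With the `sentence-transformers` package (one way to access Sentence-BERT; the specific checkpoint name below is an assumption), this reduces to:

```python
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("bert-base-nli-mean-tokens")
# (N, 768) array of sentence embeddings, comparable via cosine similarity
embeddings = sbert.encode(
    ["this bird is yellow with black and has a very short beak"])
```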
For our third experiment, we incorporated BERT embeddings into the MirrorGAN by replacing the embedding layer of the STEM module with pre-trained BERT weights. This modification means that the MirrorGAN trains with 768-dimension embeddings as opposed to the previous 256-dimension embeddings.
As with our baselines, both ACGANs were trained for 2000 epochs using an Adam optimizer with learning rate 0.0002, while the MirrorGAN was trained for 450 epochs using an Adam optimizer with learning rate 0.0001.
8 Results
8.1 Generated Images
In order to qualitatively compare the output of our models, we chose two captions and used them as input to each of our models. See the captions below, followed by the actual class of bird:
- *this is a bird with a white belly, a blue wing and head and a small black beak* - Cerulean Warbler
- *this bird is yellow with black and has a very short beak* - American Goldfinch
The real images corresponding to these captions are as follows:

The generated images can be seen below:

From these generated images, we can see that the ACGAN models are generally the best at producing bird-like images, with outlines and forms that clearly resemble birds in addition to bird-like details such as feathering and beaks. The images generated from the MirrorGAN models, in contrast, sometimes approach a vaguely bird-like shape but are very blurry or lack details such as beaks and feathering. On occasion, the output is even too blurry or general to tell that it is supposed to approximate a bird.
In terms of matching the caption, the ACGAN Sentence-BERT model seems to perform the best, incorporating both the blue and white coloring of the Cerulean Warbler and the yellow coloring and short beak of the American Goldfinch. The ACGAN one-hot model was able to capture the blue of the Cerulean Warbler (and indeed very closely matched the real image) but produced a red and black bird instead of a yellow and black bird for the American Goldfinch, while the ACGAN BERT model did a good job of matching the caption for the American Goldfinch but completely missed the "blue" requirement of the Cerulean Warbler.
The ACGAN InferSent model produced adequate bird images, but missed details from the captions such as coloring. The MirrorGAN produced poor bird images, but may have taken into account some of the color-related words in the captions, though these seem to have been reflected in the background of the image rather than in the bird: the left image has a blue background with a vaguely black and white shape that was likely intended to be the bird, while the right image has a yellow-green background. There appears to be some slight improvement when incorporating BERT embeddings into the MirrorGAN. On the left, we can see what is clearly a bird shape with what may be a "small black beak", although it lacks details of the caption such as the "blue wing". The MirrorGAN BERT model did much better with the American Goldfinch caption, producing a yellow and black bird with a very short beak. Despite the improvement, however, the generated images are still fuzzy and vague compared to those produced by the ACGAN models.
We were generally disappointed with the performance of our MirrorGAN models, which fell far below our expectations. However, this could be due to several factors. The MirrorGAN models trained at a much slower rate than our ACGAN models; accordingly, we were only able to train them for 450 epochs, compared to the 2000 epochs of the ACGAN models. We did notice that our model continued to improve all the way to 450 epochs, so it is possible that, had we continued training for a few weeks, we could eventually have produced much better bird images. Another important note is that we had to reduce the GLAM module of our models to a single attention model in order to work within our time constraints. If we had the time to use multiple stages and get the full effect of cascading image generators, it is likely we could generate better images that more closely match the caption; after all, the purpose of multiple stages is to let subsequent stages refine an initial image outline and fix any errors that might be present (such as incorrect coloring).
We also noticed that both of our MirrorGAN models suffered from mode collapse.

Figure 5 is an example of 16 images produced by the MirrorGAN model, given the same caption but with different random noise added each time. We can see that many of the images are very similar (for example, the one in the very top left and the second image in the second row). The same trend was seen in the MirrorGAN BERT model and also in the ACGAN models as shown in figure 6. There, you can see that all of the generated images have similar backgrounds, although there is some variation in the bird shapes.
8.2 Quantitative Evaluation
We evaluated the Inception Score of each model using a pre-trained bird classifier provided with the StackGAN [8] code on GitHub. As stated before, the ground truth bird images were resized to 64 x 64 so that the metric is comparable across models.
As expected, the Inception Scores of all the generated models are lower than that of real images. We can see that the ACGAN one-hot model has a high Inception Score, matched only by the ACGAN Sentence-BERT model; however, we must note that the ACGAN one-hot model was trained on an easier task, only having to generate from a class rather than from a specific caption.
We note that, at least with the ACGANs, using more complex embeddings seems to improve the model; using Sentence-BERT embeddings led to a higher Inception Score than averaged BERT embeddings, which in turn did better than InferSent embeddings. The ACGAN with Sentence-BERT embeddings was the best at the caption-to-image generation task, having the highest Inception Score. We also note that, quantitatively, the ACGANs all outperform the MirrorGANs. This is not surprising given the general blurriness of the MirrorGAN outputs we observed qualitatively.
Interestingly, the MirrorGAN BERT model had a lower Inception Score than the base MirrorGAN model despite the generated images looking slightly better in many cases. This may be due to lack of diversity in generated images rather than worse image quality; we noticed that the MirrorGAN BERT model seemed to suffer from mode collapse more than the MirrorGAN model. We also note that although we trained both MirrorGAN models for the same number of epochs, the MirrorGAN BERT model used higher-dimension embeddings; it is possible that it needs more training epochs to equal the quantitative performance of the base MirrorGAN.
9 Conclusion
In this project, we approached the task of caption-to-image generation using ACGANs and MirrorGANs, with variations largely through the use of different embeddings to better align captions with the generated images. We found some success using complex embeddings like BERT and Sentence-BERT with the ACGAN, which was able to produce bird-like images that generally matched aspects of the caption such as color. Simpler embeddings would often result in bird-like images that nonetheless missed important details of the caption. Attempts with the MirrorGAN were less successful; the generated birds were often blurry and lacking in detail, and at times didn't even resemble a bird. However, this is likely due to the time constraints that led us to reduce the number of stages in the MirrorGAN's GLAM module.
For the ACGAN model, future work may include changing the discriminator to predict the sentence embedding instead of the bird species. This will likely improve performance since it will allow the model to better capture differences between birds of the same species and give the generator more training signal about how the sentence maps to the image.
Given more time, we would like to increase the number of stages in the MirrorGAN so that every generated image has chances to be refined and corrected. Additionally, our project focused only on modifying the STEM module; however, there is a lot of work that can still be done with the STREAM module, which helps realign
the generated image to the input caption. The current STREAM module is a relatively standard encoder-decoder framework, with an Inception V3 encoder and a vanilla LSTM decoder. The decoder especially could be improved in many ways, such as adding attention to the vanilla LSTM or replacing it altogether with a hierarchical LSTM as described by Song et al. [11] to take advantage of the successes of deep neural networks. By using a better STREAM module, the MirrorGAN may be able to ensure that its generated images are more semantically aligned to the input caption, making sure that images produced are not only realistic but relevant.
Code
Our code can be viewed at https://github.com/looi/CS236.
References
[1] Tingting Qiao, Jing Zhang, Duanqing Xu, and Dacheng Tao. Mirrorgan: Learning text-to-image generation by redescription, 2019. arXiv:1903.05854.
[2] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier gans. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2642–2651. JMLR. org, 2017.
[3] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1316–1324. IEEE Computer Society, 2018. doi:10.1109/CVPR.2018.00143.
[4] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364, 2017.
[5] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[6] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019.
[7] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
[8] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks, 2016. arXiv:1612.03242.
[9] Scott Reed, Zeynep Akata, Bernt Schiele, and Honglak Lee. Learning deep representations of fine-grained visual descriptions, 2016. arXiv:1605.05395.
[10] Komiya-M. Mirrorgan. https://github.com/komiya-m/MirrorGAN, 2019.
[11] Jingkuan Song, Xiangpeng Li, Lianli Gao, and Heng Tao Shen. Hierarchical lstms with adaptive attention for visual captioning, 2018. arXiv:1812.11004.
Over a decade ago in the first edition of the *Handbook of Forgiveness* (Worthington, 2005), Toussaint and Webb (2005) conducted a broad review of the associations between forgiveness and global mental health and well-being and turned up 18 empirical articles. Even though an entire chapter in the original *Handbook* was dedicated to theorizing about why forgiveness might be related to physical health (Harris & Thoresen, 2005), in the early 2000s the empirical foundation, at almost any level, was quite modest. Today that has changed. Recently, the book *Forgiveness and Health: Scientific Evidence and Theories Relating Forgiveness to Better Health* (Toussaint, Worthington, & Williams, 2015) had as its sole focus reviewing the literature on forgiveness and health, and only one chapter was devoted to mental health. In just over a decade (2005–2015), the understanding of forgiveness and health went from theorizing bolstered by only a small but promising collection of articles to an extensive literature that required an entire volume to adequately summarize. This parallels explosive growth in research on forgiveness in general. Nonetheless, the focus on forgiveness and physical health per se has lagged behind the study of forgiveness and its association with other outcomes such as mental health and happiness. In the present chapter, we focus exclusively on the topic of forgiveness and physical health. We review methods and findings of existing empirical research and attempt to outline key directions for continuing work in this area.
**Forgiveness Defined**
Three features of forgiveness are important in our understanding of how forgiveness relates to physical health. First, forgiveness can be both a trait and a state. Trait forgivingness appears to be most important for health. Traits (or dispositions) are thought to be stable over time and across situations, and it is this consistent influence on one's experience throughout life that is thought to be most importantly connected to health. Of course, forgiving states are important, and other contributors to this handbook (e.g., see Chapter 16) have demonstrated how momentary states of forgiveness can influence important physiological change underlying physical health. However, current models of physiological homeostasis suggest that it is allostatic load, biological dysregulation across multiple physiological systems due to accumulated wear-and-tear, that adversely affects the human physical condition (McEwen, 2005). Forgiving and unforgiving personalities most likely influence chronic allostatic load (Toussaint, Shields, Dorn, & Slavich, 2014). Second,
two primary dimensions of forgiveness have received considerable attention as they relate to health. These are forgiveness of others and self-forgiveness. Forgiveness of others involves reducing unforgiving thoughts, feelings, motivations, and behaviors directed toward an offender and often replacing them with love-based, altruistic thoughts, feelings, motivations, and behaviors (Worthington, 2005). Self-forgiveness involves reducing self-condemning emotions such as shame and guilt, accepting responsibility for one's wrongdoing, and then replacing self-condemnation with compassion, generosity, and love (Woodyatt, Worthington, Wenzel, & Griffin, 2017; see also Chapter 18). Other dimensions of forgiveness certainly exist, such as feeling forgiven by God, seeking forgiveness from others, and even societal forgiveness (Enright et al., 2016), but they have received less attention in health research. Third, although the decision to forgive is crucial in facilitating forgiveness, it is likely the emotional side of forgiveness that is connected most directly to physical health (Worthington, Witvliet, Pietrini, & Miller, 2007). Theories of stress rely on the affective system as the conduit through which environmental stimuli elicit physiological changes that ultimately influence physical health conditions (McEwen, 2005). (We do observe that decisions to forgive affect relationships, and relationship quantity and quality can influence physical health.) There are many aspects of forgiveness, but these three issues stand out as important for physical health. In this sense, forgiveness of others or self-forgiveness can be thought of as traits or states that may have a health-protecting influence on how individuals respond to emotionally arousing interpersonal or intrapersonal events. Forgiveness of others and self-forgiveness can also be thought of as virtues, coping styles, or resilience characteristics that protect individuals from ongoing stress resulting from interpersonal transgressions of others or self-condemnation of oneself (Toussaint, Webb, & Hirsch, 2017; Worthington & Scherer, 2004).
**Method of Review**
We searched the psychological and medical literatures on December 22, 2018, using title, keyword, and abstract searches of the databases PsycINFO and PubMed. PsycINFO was searched with the term “forg* and physical health,” and PubMed was searched with “forgiv* and physical health” (searching “forgav* and physical health” returned zero articles). There were no publication-year restrictions. Inclusion criteria were: (a) at least one measure of forgiveness, and (b) at least one explicit measure of physical health. Exclusion criteria were: (a) experimental manipulations or interventions that combined forgiveness with other constructs (e.g., gratitude, mindfulness); and (b) health-related outcomes focused exclusively on mental health, happiness, suicide, global non-health-related quality of life (see Chapter 18), physiological parameters or biomarkers (see Chapter 16), or combined mental and physical health composites.
The PsycINFO search turned up 105 articles, and the PubMed search added another 36 articles. Google Scholar was searched using “forgiveness and physical health” and no non-redundant articles were identified for inclusion. The reference sections of recent reviews (Cheadle & Toussaint, 2015; Davis et al., 2015) were examined. Six non-redundant articles were added. Thus, the total number of articles was 147. The titles, keywords, and abstracts of these articles were reviewed by the first author for relevance and 61 articles were retained. After reviewing the full text of the 61 articles, 10 articles were eliminated because they did not meet inclusion/exclusion criteria. Of the 51 articles reviewed, 4 had a second study included in the article, leaving 55 studies for review. Although summary statistics for the review will include all 55 studies spanning 2000 to 2018, only recent studies published between 2014 and 2018 will be highlighted to feature current work not captured in previous reviews that likely concluded in 2014 (Cheadle & Toussaint, 2015; Davis et al., 2015). A full list of studies on forgiveness and health can be obtained from the first author (or through the e-resources associated with this chapter). A summary of the results is provided in Table 17.1.
Table 17.1 Literature Review Summary of Research on Forgiveness and Health
| Content area of review | N (%) of studies | N (%) evidence of benefit |
|-----------------------------------------|------------------|---------------------------|
| Total studies | 55 (100%) | 40 (73%) |
| **Forgiveness** | | |
| Forgiveness of others | 42 (76%) | 28 (67%) |
| Self-forgiveness | 20 (36%) | 13 (65%) |
| Other forgiveness dimensions | 9 (16%) | 9 (100%) |
| **Health** | | |
| Self-rated health | 13 (24%) | 12 (92%) |
| Health-related quality of life | 17 (31%) | 11 (65%) |
| Symptoms | 20 (36%) | 14 (70%) |
| Physical activity | 3 (5%) | 2 (67%) |
| **Sample characteristics** | | |
| Undergraduate | 13 (24%) | 10 (77%) |
| Community | 27 (49%) | 22 (81%) |
| Patient | 11 (20%) | 6 (55%) |
| Mixed sample | 4 (7%) | 2 (50%) |
| **Design** | | |
| Cross-sectional | 47 (85%) | 34 (72%) |
| Longitudinal | 5 (9%) | 3 (60%) |
| Experimental/intervention | 3 (6%) | 3 (100%) |
*Note.* Content areas are not mutually exclusive.
Results
Publishing Trends
Research on forgiveness and physical health has been ongoing since 2000. Across the spans 2000–2004, 2005–2009, 2010–2014, and 2015–2018 (four years), the number of published articles was 6, 9, 21, and 14, respectively. There is growing interest in how forgiveness is related to physical health.
Dimensions of Forgiveness
**Forgiveness of others.** Forgiveness of others and self-forgiveness are the predominant dimensions of forgiveness that have been examined for their relation to physical health. Of the 55 total studies in this review 42 (76%) included a measure of forgiveness of others. Of these studies, 28 (67%) showed at least one statistically significant association of forgiveness of others with physical health and all were in the healthy direction. As one example of recent research evaluating the connection between forgiveness of others and physical health, Toussaint et al. (2018) showed in two cross-sectional samples of middle-aged working adults \((N_1 = 108; N_2 = 154)\) that forgiveness of a specific event and trait forgivingness were both associated with fewer health complaints.
**Self-forgiveness.** Of the 55 total studies in this review just over a third \((n = 20; 36\%)\) included a measure of self-forgiveness. This is less than half of the studies that examined forgiveness of others and physical health. This probably reflects the belated onset of research on self-forgiveness relative to forgiveness of others (Hall & Fincham, 2005). Of these studies, 13 (65%) showed at least one statistically significant association of self-forgiveness with physical health, and all were in the healthy direction. As an example of research examining self-forgiveness and physical health, Bassett et al.
(2016) showed, in two cross-sectional studies of college student and community samples ($N_1 = 36$; $N_2 = 141$), that self-forgiveness was related to fewer physical symptoms and improved ratings of physical health.
**Other dimensions of forgiveness.** Of the 55 total studies in this review only nine (16%) included other measures of forgiveness such as feeling forgiven by God, forgiving uncontrollable situations, or seeking forgiveness. This is the smallest group of studies in this review, but this might reflect the relative lack of conceptual development in these areas. Theoretical and measurement issues concerned with these dimensions of forgiveness have not been as fully addressed; indeed, many of the measures were developed specifically for a given study and are not psychometrically well supported. All nine of the studies examining other dimensions of forgiveness and physical health showed at least one statistically significant association between physical health and other dimensions of forgiveness, and all were in the healthy direction. An intriguing example is a cross-sectional study by Krause and Ironson (2017) that examined the connection between feeling forgiven by God and the health outcomes of waist–hip ratio and exercise frequency. Interestingly, feeling forgiven by God was associated with unhealthy waist–hip ratios and less frequent exercise, but this association was most strongly evident for participants who were less committed to their faith. Although intriguing, explanations for these findings are unclear, in part, because these are cross-sectional analyses. Perhaps less strongly committed adherents find forgiveness by God a license for indulgent and unhealthy behavior, and perhaps these effects are similar to the sometimes unhealthy effects of self-forgiveness (Wohl & Thompson, 2011) or of feeling God's unconditional forgiveness (Toussaint, Owen, & Cheadle, 2012). Feeling forgiven by God without acceptance of fault and responsibility, sometimes referred to as pseudo-self-forgiveness, might pose more health challenges than benefits. Alternatively, those with less favorable waist–hip ratios who exercise less and yet feel forgiven by God may simply be less prone to the anxiety and perfectionism that drive fastidious weight control and exercise, and may for the same reason be more accepting of God's forgiveness.
**Dimensions of Physical Health**
**Self-rated health.** A commonly used and meaningful measure of physical health is one’s perception of health (Idler & Benyamini, 1997). Although this measure may include an emotional component, it is a powerful predictor of mortality and changes in physical health over time. Of the 55 studies reviewed herein 13 (24%) studies included self-rated health as a primary health outcome measure. Of these 13 studies, 12 (92%) showed a statistically significant beneficial association between self-rated health and some dimension of forgiveness.
**Health-related quality of life.** Varying definitions of health-related quality of life emphasize health perceptions similar to self-rated health, pure quality of life dimensions (i.e., psychological well-being), and the unique influences of health on one's quality of life (Karimi & Brazier, 2016). Of the 55 studies, 17 (31%) included a measure of health-related quality of life. Of these 17 studies, 11 (65%) showed a statistically significant positive association with some dimension of forgiveness. Of these 17 studies, 15 (88%) used some version of the Medical Outcomes Study Short-Form Health Survey (Ware & Sherbourne, 1992). There are several different versions of the base Short-Form 36 (SF-36), but all assess to some degree eight core dimensions of health-related functioning including: (a) physical limitations, (b) social limitations, (c) physical health-related role interference, (d) bodily pain, (e) general mental health, (f) emotional health-related role interference, (g) vitality (energy and fatigue), and (h) general health perceptions. This is a broad construct, and our review, while focused exclusively on physical health outcomes, would be incomplete if it excluded health-related quality of life. However, readers should examine associations between dimensions of forgiveness and health-related quality of life cautiously because this construct is not exclusively the domain of physical health and includes broader concepts of mental and social well-being.
Symptoms. Somatic symptoms are a common reason for healthcare consultations and are related to reduced quality of life, greater healthcare use, increased absenteeism, and job loss (Joustra, Janssens, Schenk, & Rosmalen, 2018). Sometimes symptoms can be fleeting (e.g., headache) and other times can indicate much more serious conditions (e.g., heart problems) (Robbins & Kirmayer, 1991). Of the 55 studies, 20 (36%) included a measure of somatic symptoms. Of these 20 that included a measure of somatic symptoms, 14 (70%) showed a healthy association between some dimension of forgiveness and somatic symptoms.
Physical activity. Two articles were included in this review not because they examined forgiveness and associations with physical health per se, but because they focused on physical activity, one of the most powerful contributors to good physical health (Hills, Street, & Byrne, 2015). The first article involved two studies (Struthers, van Monsjou, Ayoub, & Guilfoyle, 2017). In the first correlational study, the investigators sought to determine whether, compared to non-exercising individuals, individuals who frequently engaged in different types of exercise including aerobic, anaerobic, and stretching exercises showed different levels of readiness to forgive a romantic partner. When compared to people who frequently engaged in aerobic and stretching exercises, individuals who did not exercise or frequently engaged in anaerobic exercise showed lower levels of readiness to forgive their romantic partners. In the second experimental study, Struthers et al. (2017) randomly assigned participants to take part in 30 minutes of aerobic, anaerobic, stretching, or no exercise. Results supported those of the first study: When compared to participants who engaged in aerobic or stretching exercises, participants who did not exercise or engaged in anaerobic exercise showed lower levels of readiness to forgive a purported offender for her wrongdoing.
Both studies point to the conclusion that aerobic and stretching exercises might be beneficial for forgiveness (with the important caveat that the experimental study involved only a 30-minute intervention and should not be over-interpreted). To the extent that physical activity increases forgiveness of others, and forgiveness of others in turn has an independent and salubrious effect on physical health, physical activity may benefit health not only in the well-established direct fashion but also indirectly, through important mediators such as forgiveness. Furthermore, aerobic and stretching exercises are both well established in promoting cardiovascular health (Agarwal, 2012; Kruse & Scheuermann, 2017) and are central components of many Eastern approaches to meditation and wellness (e.g., Qigong; Xiong, Wang, Li, & Zhang, 2015), perhaps suggesting that Eastern traditional practices promote forgiveness along with direct and indirect (through forgiveness) health benefits.
The second article of note here is a study focused on how self-forgiveness is related to lapses in physical activity (Schumacher, Arigo, & Thomas, 2017). In this study, participants' levels of self-forgiveness and physical activity were assessed weekly for six weeks. Self-forgiveness was assessed with a single item, and physical activity lapses were self-reported as well as objectively measured using Fitbit devices (www.fitbit.com). Results showed that individuals with higher levels of self-forgiveness in a given week were more likely to have lapses in their physical activity the following week. Given the importance of physical activity (Hills et al., 2015), any impediment to engaging in physical activity has to be taken seriously. Self-forgiveness should not offer an easy excuse for unhealthy behavior (Woodyatt et al., 2017), and a previous study coincides with this finding, perhaps suggesting a potential association between self-forgiveness and health risk (Wohl & Thompson, 2011).
Sample and Design Characteristics
Given psychology’s reliance on college student participants (Hanel & Vione, 2016), it is surprising to find that out of the 55 studies reviewed herein only 13 (24%) relied exclusively on college students, 10 of which (77%) showed an association between forgiveness and health. The other 76 percent of the studies used participants from the community ($n = 27$; 49%), patient groups ($n = 11$; 20%), or mixed college and community or high school samples ($n = 4$; 7%). Of the community, patient, and mixed sample
studies, 22 (81%), 6 (55%), and 2 (50%) showed an association between forgiveness and health, respectively. Community samples were diverse, including sedentary young adults, middle-aged male prisoners, Iraqi refugees, and members of four different religions (Buddhists, Christians, Jews, and Muslims). Patient samples were similarly diverse, including patients with fibromyalgia, spinal cord injury, chronic heart failure, traumatic brain injury, stroke, chronic pain, HIV, arthritis, chronic obstructive pulmonary disease, diabetes, and posttraumatic stress disorder. The average sample size was 877 participants, but this average was considerably skewed by a few very large samples (Ns > 2000). The median sample size was just over 261 participants (minimum = 11, maximum = 10,283). Regarding the design of these studies, most were cross-sectional \((n = 47; 85\%)\), a few were longitudinal \((n = 5; 9\%)\), and only three were experimental/interventions (6%). Of the 47 cross-sectional, 5 longitudinal, and 3 experimental/intervention studies, 34 (72%), 3 (60%), and 3 (100%) showed an association or effect of forgiveness on health, respectively. Of the 52 correlational studies, 12 (23%) were conducted with college students, 23 (42%) were conducted with community samples (3 [13%] of which used probability-based samples), and 10 (19%) were conducted with patient samples. Three (5%) other cross-sectional, correlational studies were conducted with mixed college and community samples or with high school students. Three (5%) longitudinal studies were conducted with probability-based national samples of elders aged 66 years and older, and one was conducted with a mixed college and community sample. The experiment used a college student sample. One intervention studied a community sample of middle-aged adults, and the other intervention studied fibromyalgia patients.
**Perspective**
In this section, we identify the most important findings to emerge from this review. Although forgiveness is consistently associated with better physical health, there have been relatively few studies, and most have been cross-sectional and correlational in nature.
**There Have Been Relatively Few Forgiveness and Health Studies**
First, in the roughly 20 years since the study of forgiveness began in earnest, hundreds of studies have been conducted. Those that have examined any dimension of forgiveness and physical health comprise just 55 of these studies. This is a very small proportion of the total work on forgiveness. This raises the question as to *why*. There are several possible reasons. The most common participant under study by psychologists is the college student (Hanel & Vione, 2016) and variability of college student physical health is often much less than that of middle-aged and older adults. Hence, the most easily accessible participant of study is of less utility to those psychologists and other social scientists interested in studying forgiveness and physical health because variability in health outcomes is a necessary prerequisite of understanding health and illness. Also, health studies are infrequent because studying major health outcomes of forgiveness is not easily amenable to an experimental design. If we could create experimental manipulations to improve health, not only would forgiveness experiments be plentiful, but also other health-promoting experimental manipulations would be proliferating. In addition, health studies may be underrepresented because health conditions usually take a long time to develop and often the course of illness and its potential development into chronic illness or disease is unpredictable. Hence, the study of how forgiveness is related to health requires patience. Prospective research requires years or even decades to elapse before learning of the outcomes of the work. In summary, the study of forgiveness and health is hard to do with tools and samples most common and convenient for psychologists and other social scientists. Ideally, these studies could be done in college students using experimental methods and while measuring major illness and disease outcomes. But, this is not reality. Consequently, studies of forgiveness and physical health continue to be published more slowly than many other areas of interest.
Forgiveness Is Consistently Related to Health
The second notable conclusion from our review is that, although the number of studies is small, the consistency of the findings is remarkable. Across all 55 studies that we reviewed, 73 percent showed some indication that forgiveness had a favorable relationship or effect on health. As this is a qualitative review of the literature, we did not summarize effect sizes or examine moderating effects, but there is a good deal of variability in measures of forgiveness and health, designs, and samples and yet the results showed that in some categories of review (e.g., mixed samples) a minimum of 50 percent and in other categories of review (e.g., experiments/interventions) a maximum of 100 percent of the studies showed favorable connections between forgiveness and health.
One explanation for the consistent findings is that there is a robust connection between forgiveness and good health and good research has consistently identified it. Of course, other explanations exist. It could be widely prevalent selection bias in the samples, systematic measurement error, or other design flaws. Our review did not uncover any of these likely suspects to explain the consistency of the findings, and so we are inclined to support the hypothesis that forgiveness is healthy. That said, publication bias favors this outcome and 27 percent of the literature showing null results remains a substantial proportion of the studies. Studies showing null results did not differ noticeably from studies supporting the forgiveness and health hypothesis in design, sample size, sample composition, or forgiveness measures. It was interesting to observe that only one study of 13 using self-rated health as an outcome showed null results. No other difference in health measurement was evident. Another possibility is that the benefits of forgiveness apply to only some outcomes that have been over-studied.
Most Evidence Is from Correlational Studies
A third finding worth highlighting is that, as with many issues in human health, the evidence base is overwhelmingly correlational. In fact, 85 percent of the studies of forgiveness and health are based on cross-sectional, correlational designs, and 72 percent of those studies show a beneficial association between forgiveness and health. Yet the causal role of forgiveness in the broader health equation has not been established. Only two intervention studies show that forgiveness causes improvements in physical health. This is not to discount the value of correlational research. For instance, one of the landmark studies prompting the anti-smoking movement was based on correlational methods (Hammond & Horn, 1954). It is important to note, however, that to be more useful, correlational studies need to employ large, representative samples and implement longitudinal or prospective designs that follow participants over extended periods of time. Correlational studies utilizing cross-sectional designs typically precede more sophisticated modeling of phenomena, so we expect that more advanced designs and insightful studies will continue to be published in the years to come.
Research Agenda
Looking forward, the research needed to carefully evaluate the connection between forgiveness and physical health should possess several key features. Forthcoming research on forgiveness and health should employ longitudinal or prospective cohort designs that allow investigators to establish temporal precedence. In the current review, we have taken the perspective that forgiveness benefits health. But most of the literature cannot rule out the competing hypothesis that people in good physical health who are well rested and enjoying good vitality may simply be more willing or able to forgive. Some recent longitudinal research suggests that the time-lagged associations support a forgiveness → health model (Seawell, Toussaint, & Cheadle, 2014), but there are only a few studies and existing work is based on only two time points. Examining trajectories of change and daily reciprocal
influences of forgiveness and health may reveal other findings. If physical health does turn out to affect forgiveness, the importance of forgiveness for health is not diminished by that conclusion. In fact, it may be strengthened by it. If good physical health promotes forgiveness and forgiveness, in turn, promotes good physical health uniquely and independently, it may be that the eventual evidence will support a positive feedback loop model of forgiveness and health. Only future longitudinal work across adequate periods of time with many time points of data will reveal the answers.
Another key issue to address is the third-variable problem. The forgiveness and health link might be accounted for through personality traits (e.g., agreeableness, neuroticism) that correlate with both forgiveness and health. Common genetic contributors to both forgiving dispositions and good health might exist. Perhaps health behaviors, such as good sleep hygiene, lead to more forgiving and better health. Certainly, other dispositions, biological factors, and behaviors predispose people to be more forgiving, which might also lead to better health over time. In addition, people who are more emotionally and socially capable might attract cohorts who simply don’t offend as often, leading to less harm and more propensity to forgive when a relational breach occurs. Future work should account for these confounding influences.
Forgiveness experiments and interventions can help to illuminate the causal ordering of forgiveness and health and control confounding variables, but they come with their own challenges for external validity and generalizability. Experiments that manipulate forgiveness and show changes in physical symptoms or perceptions of health are possible but are in short supply. Perhaps this is because the key factors involved in making immediate and meaningful change in forgiveness within an experimental laboratory setting that will produce a measurable change in health still remain elusive. Prayers, meditations, and relaxation strategies have effectively promoted forgiveness (Oman, Shapiro, Thoresen, Plante, & Flinders, 2008), but few connections to physical health have been made. Taking what has been learned from psycho-educational intervention studies and applying it to an experimental social psychological or personality paradigm could yield invigorating insights into forgiveness and health. For instance, what is the most effective and efficient emotional forgiveness manipulation that might make a just noticeable difference in immediate physical health? Future laboratory studies are a must.
Little of the research has a strong theoretical foundation. Most articles clearly define forgiveness and attend to careful measurement of health, but no model ties together the linkages between forgiveness and health. Perhaps the stress-and-coping model of forgiveness (Worthington & Scherer, 2004) provides guidance. Research designed explicitly to test the propositions of the stress-coping-theory of forgiveness could contribute to a consistent paradigmatic body of work that would offer easier comparison across studies and more useful meta-analytic summaries. To develop a stress-and-coping paradigm of forgiveness and health research, standardized measures of the stress of unforgiveness with construct-valid items (e.g., “Holding this grudge has been very stressful and makes me feel bad”) and similar construct-valid measures of forgiveness as a coping mechanism per se (e.g., “I’ve chosen to deal with the pain and stress of being hurt in this way through forgiveness”) are needed. Studies must evaluate coping through forgiveness as both a potential moderator and mediator of stress and health relationships. Coping through forgiveness may also be relevant for lifetime interpersonal and/or intrapersonal stress, as well as stressful events occurring at community, social, national, and international levels.
In a reversal of the usual psychological literature, the understanding of forgiveness and health has been largely focused on middle-aged and older adults and individuals with chronic illness and disease—not undergraduates. Even with less variability in physical health among young people than among older people, we still need to ask, how does forgiveness relate to common health issues with relatively healthy adults—i.e., colds, flus, stomach viruses, allergies, headache, and indigestion? Almost 30 years ago a landmark study in stress and health showed that stress was related to susceptibility to the common cold in participants with an average age of 33 (Cohen, Tyrrell, & Smith, 1991).
Could forgiveness moderate or mediate these associations? Is it reasonable to hypothesize that the stress of carrying offenses and burdens could translate into increased risk for one of humankind’s most common afflictions and that this risk could be to some extent mitigated by forgiveness?
**Conclusion**
The present review consists of 55 studies published in almost 20 years. Of these studies, 73 percent showed some connection between forgiveness and health. Clearly, there is promise in pursuing research in this area. We no longer face down the scary, saber-tooth tiger, for which our evolved neuro-endocrine systems were designed, but instead there is no shortage of opportunities to disappoint and offend ourselves and others. Indeed, such opportunities for disappointment and offense have existed since time immemorial and while we have solved “the tiger problem,” we are seemingly nowhere near solving the “offensive experiences problem.” The weight of intrapersonal and interpersonal transgressions and the stress created by these events could quite possibly be the most underrated source of stress in our modern world. How we cope with this, by forgiving or not, appears to have consequences for our physical health. Future work should aim to improve the methods, samples, and measures that provide the basis of our conclusions. Only with dedicated efforts to scrutinize the connection between forgiveness and health will we know whether forgiving can help improve or prolong our way of healthy living. If the results of future sophisticated designs and analyses remain consistent with our current knowledge, hindsight may show us that forgiveness has been one of the most overlooked remedies to what ails our physical health as a species.
**References**
Agarwal, S. K. (2012). Cardiovascular benefits of exercise. *International Journal of General Medicine, 5*, 541–545.
Bassett, R. L., Carrier, E., Charleson, K., Pak, N. R., Schwingel, R., Majors, A., . . . & Bloser, C. (2016). Is it really more blessed to give than to receive? A consideration of forgiveness and perceived health. *Journal of Psychology and Theology, 44*(1), 28–41.
Cheadle, A. C. D., & Toussaint, L. L. (2015). Forgiveness and physical health in healthy populations. In L. L. Toussaint, E. L. Worthington, Jr. & D. R. Williams (Eds.) *Forgiveness and health: Scientific evidence and theories relating forgiveness to better health* (pp. 91–106). Cham, Switzerland: Springer Science + Business Media.
Cohen, S., Tyrrell, D. A. J., & Smith, A. P. (1991). Psychological stress and susceptibility to the common cold. *New England Journal of Medicine, 325*(9), 606–612.
Davis, D. E., Ho, M. Y., Griffin, B. J., Bell, C., Hook, J. N., Van Tongeren, D. R., . . . & Westbrook, C. J. (2015). Forgiving the self and physical and mental health correlates: A meta-analytic review. *Journal of Counseling Psychology, 62*(2), 329–335.
Enright, R. D., Lee, Y.-R., Hirshberg, M. J., Litts, B. K., Schirmer, E. B., Irwin, A. I., . . . & Song, J. Y. (2016). Examining group forgiveness: Conceptual and empirical issues. *Peace and Conflict: Journal of Peace Psychology, 22*(2), 153–162.
Hall, J. H., & Fincham, F. D. (2005). Self-forgiveness: The stepchild of forgiveness research. *Journal of Social and Clinical Psychology, 24*(5), 621–637.
Hammond, E. C., & Horn, D. (1954). The relationship between human smoking habits and death rates: A follow-up study of 187,766 men. *Journal of the American Medical Association, 155*(15), 1316–1328.
Hanel, P. H. P., & Vione, K. C. (2016). Do student samples provide an accurate estimate of the general public? *PLoS ONE, 11*(12), 1–10.
Harris, A. H. S., & Thoresen, C. E. (2005). Forgiveness, unforgiveness, health, and disease. In E. L. Worthington, Jr. (Ed.), *Handbook of forgiveness* (pp. 321–333). New York, NY: Brunner-Routledge.
Hills, A. P., Street, S. J., & Byrne, N. M. (2015). Physical activity and health: “What is old is new again”. *Advances in Food and Nutrition Research, 75*, 77–95.
Idler, E. L., & Benyamini, Y. (1997). Self-rated health and mortality: A review of twenty-seven community studies. *Journal of Health and Social Behavior, 38*(1), 21–37.
Joustra, M. L., Janssens, K. A. M., Schenk, H. M., & Rosmalen, J. G. M. (2018). The four week time frame for somatic symptom questionnaires reflects subjective symptom burden best. *Journal of Psychosomatic Research, 104*, 16–21.
Karimi, M., & Brazier, J. (2016). Health, health-related quality of life, and quality of life: What is the difference? *Pharmacoeconomics, 34*(7), 645–649.
Krause, N., & Ironson, G. (2017). Forgiveness by God, religious commitment, and waist/hip ratios. *Journal of Applied Biobehavioral Research, 22*(4), 1–12.
Kruse, N. T., & Scheuermann, B. W. (2017). Cardiovascular responses to skeletal muscle stretching: “Stretching” the truth or a new exercise paradigm for cardiovascular medicine? *Sports Medicine, 47*(12), 2507–2520.
McEwen, B. S. (2005). Stressed or stressed out: What is the difference? *Journal of Psychiatry and Neuroscience, 30*(5), 315–318.
Oman, D., Shapiro, S. L., Thoresen, C. E., Plante, T. G., & Flinders, T. (2008). Meditation lowers stress and supports forgiveness among college students: A randomized controlled trial. *Journal of American College Health, 56*(5), 569–578.
Robbins, J. M., & Kirmayer, L. J. (1991). Attributions of common somatic symptoms. *Psychological Medicine, 21*(4), 1029–1045.
Schumacher, L. M., Arigo, D., & Thomas, C. (2017). Understanding physical activity lapses among women: Responses to lapses and the potential buffering effect of social support. *Journal of Behavioral Medicine, 40*(5), 740–749.
Seawell, A. H., Toussaint, L. L., & Cheadle, A. C. D. (2014). Prospective associations between unforgiveness and physical health and positive mediating mechanisms in a nationally representative sample of older adults. *Psychology & Health, 29*(4), 375–389.
Struthers, C. W., van Monsjou, E., Ayoub, M., & Guilfoyle, J. R. (2017). Fit to forgive: Effect of mode of exercise on capacity to override grudges and forgiveness. *Frontiers in Psychology, 8*, 1–10.
Toussaint, L. L., Owen, A. D., & Cheadle, A. (2012). Forgive to live: Forgiveness, health, and longevity. *Journal of Behavioral Medicine, 35*(4), 375–386.
Toussaint, L. L., Shields, G. S., Dorn, G., & Slavich, G. M. (2014). Effects of lifetime stress exposure on mental and physical health in young adulthood: How stress degrades and forgiveness protects health. *Journal of Health Psychology, 21*(6), 1004–1014.
Toussaint, L. L., & Webb, J. R. (2005). Theoretical and empirical connections between forgiveness and mental health and well-being. In E. L. Worthington, Jr. (Ed.) *Handbook of forgiveness* (pp. 349–362). New York, NY: Brunner-Routledge.
Toussaint, L. L., Webb, J. R., & Hirsch, J. K. (2017). Self-forgiveness and health: A stress-and-coping model. In L. Woodyatt, E. L. Worthington, Jr., M. Wenzel & B. J. Griffin (Eds.) *Handbook of the psychology of self-forgiveness* (pp. 87–99). Dordrecht, Netherlands: Springer Nature.
Toussaint, L. L., Worthington, E. L., Jr., Van Tongeren, D. R., Hook, J., Berry, J. W., Shivy, V. A., . . . & Davis, D. E. (2018). Forgiveness working: Forgiveness, health, and productivity in the workplace. *American Journal of Health Promotion, 32*(1), 59–67.
Toussaint, L. L., Worthington, E. L., Jr., & Williams, D. R. (2015). *Forgiveness and health: Scientific evidence and theories relating forgiveness to better health*. Cham, Switzerland: Springer Science + Business Media.
Ware, J. E., & Sherbourne, C. D. (1992). The MOS 36-item short-form health survey (SF-36): I. Conceptual framework and item selection. *Medical Care, 30*(6), 473–483.
Wohl, M. J. A., & Thompson, A. (2011). A dark side to self-forgiveness: Forgiving the self and its association with chronic unhealthy behaviour. *British Journal of Social Psychology, 50*(2), 354–364.
Woodyatt, L., Worthington, E. L., Jr., Wenzel, M., & Griffin, B. J. (2017). Orientation to the psychology of self-forgiveness. In L. Woodyatt, E. L. Worthington, Jr., M. Wenzel & B. J. Griffin (Eds.) *Handbook of the psychology of self-forgiveness* (pp. 3–16). Dordrecht, Netherlands: Springer Nature.
Worthington, E. L., Jr. (2005). *Handbook of forgiveness*. New York: Routledge.
Worthington, E. L., Jr., & Scherer, M. (2004). Forgiveness is an emotion-focused coping strategy that can reduce health risks and promote health resilience: Theory, review, and hypotheses. *Psychology & Health, 19*(3), 385–405.
Worthington, E. L., Jr., Witvliet, C. V. O., Pietrini, P., & Miller, A. J. (2007). Forgiveness, health, and well-being: A review of evidence for emotional versus decisional forgiveness, dispositional forgivingness, and reduced unforgiveness. *Journal of Behavioral Medicine, 30*(4), 291–302.
Xiong, X., Wang, P., Li, X., & Zhang, Y. (2015). Qigong for hypertension: A systematic review. *Medicine, 94*(1), 1–14.
Exploiting Batch Processing on Streaming Architectures to Solve 2D Elliptic Finite Element Problems: A Hybridized Discontinuous Galerkin (HDG) Case Study
James King · Sergey Yakovlev · Zhisong Fu · Robert M. Kirby · Spencer J. Sherwin
Received: 14 March 2013 / Revised: 9 September 2013 / Accepted: 8 November 2013
© Springer Science+Business Media New York 2013
Abstract Numerical methods for elliptic partial differential equations (PDEs) within both continuous and hybridized discontinuous Galerkin (HDG) frameworks share the same general structure: local (elemental) matrix generation followed by a global linear system assembly and solve. The lack of inter-element communication and easily parallelizable nature of the local matrix generation stage coupled with the parallelization techniques developed for the linear system solvers make a numerical scheme for elliptic PDEs a good candidate for implementation on streaming architectures such as modern graphical processing units (GPUs). We propose an algorithmic pipeline for mapping an elliptic finite element method to the GPU and perform a case study for a particular method within the HDG framework. This study provides comparison between CPU and GPU implementations of the method as well as highlights certain performance-critical implementation details. The choice of the HDG method for the case study was dictated by the computationally-heavy local matrix generation stage as well as the reduced trace-based communication pattern, which together make the method amenable to the fine-grained parallelism of GPUs. We demonstrate that the HDG method is well-suited for GPU implementation, obtaining total speedups on the order of 30–35 times over a serial CPU implementation for moderately sized problems.
**Keywords** High-order finite elements · Spectral/$hp$ elements · Discontinuous Galerkin method · Hybridization · Streaming processors · Graphical processing units (GPUs)
1 Introduction
In the last decade, commodity streaming processors such as those found in graphical processing units (GPUs) have arisen as a driving platform for heterogeneous parallel processing with strong scalability, power and computational efficiency [1]. In the past few years, a number of algorithms have been developed to harness the processing power of GPUs for problems which require multi-element processing techniques [2, 3]. This work is motivated by our attempt to find effective ways of mapping continuous and hybridized discontinuous Galerkin (HDG) methods to the GPU. Significant gains in performance have been made when combining GPUs with discontinuous Galerkin (DG) methods for hyperbolic problems (e.g. [4]); in this work, we focus on whether similar gains can be achieved when solving elliptic problems.
Note that within a hyperbolic setting, each time step of a DG method algorithmically consists of a single parallel update step in which the inter-element communication is limited to the numerical flux computation, which is performed locally. In the case of many elliptic operator discretizations, however, one is required to solve a linear system in order to find the values of globally coupled unknowns. The linear system in question can be reduced in size if the static condensation (Schur complement) technique is applied, but it must be solved nevertheless. Depending on the choice of linear solver, the system matrix can either be explicitly assembled or stored as a collection of elemental matrices accompanied by the local-to-global mapping data. In this particular work we have chosen to explicitly assemble the system matrix on the GPU to match the CPU code used for comparison.
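To fix ideas, static condensation on a generic two-by-two block partition of the assembled system (boundary unknowns $u_b$, interior unknowns $u_i$; the block names here are generic illustration we add for clarity, not the notation used later in this paper) reads:
\[
\begin{pmatrix} A_{bb} & A_{bi} \\ A_{ib} & A_{ii} \end{pmatrix}
\begin{pmatrix} u_b \\ u_i \end{pmatrix}
=
\begin{pmatrix} f_b \\ f_i \end{pmatrix}
\;\Longrightarrow\;
\left( A_{bb} - A_{bi} A_{ii}^{-1} A_{ib} \right) u_b = f_b - A_{bi} A_{ii}^{-1} f_i,
\]
after which the interior unknowns are recovered element-by-element from $u_i = A_{ii}^{-1}(f_i - A_{ib} u_b)$.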
Due to the different structure of numerical methods for elliptic PDEs and the unavoidable global coupling of unknowns, one usually breaks the solution process into several stages: local (elemental) matrix generation, global linear system matrix assembly, and global linear system solve. If static condensation is applied and the global linear system is solved for the trace solution (the solution on the boundary of elements), there is an additional stage of recovering the elemental solution from the trace data. Each of the stages outlined above benefits from parallelization on the GPU to a different degree: the local matrix generation stage benefits much more than the assembly and global solve stages, because the operations performed are completely independent across elements.
The goals this paper pursues are the following: (a) to provide the reader with an intuition regarding the overall benefit that parallelization on streaming architectures provides to numerical methods for elliptic problems as well as per-stage benefits and the runtime trends for different stages; (b) to propose a pipeline for solving 2D elliptic finite element problems on GPUs and provide a case study to understand the benefits of GPU implementation for numerical problems formulated within the HDG framework; (c) to propose a per-edge assembly as a more efficient approach than the traditional per-element assembly, given the structure of the HDG method and the restrictions of the current generation of SIMD hardware. The key ingredients to our proposed approach are the mathematical nature of the HDG method and the batch processing capabilities (and algorithmic limitations) of the GPU. The choice of method for our case study is motivated by the fact that the local matrix generation stage, which benefits the most from parallelization, is much more computationally intensive for the
HDG method as opposed to the CG method. We now provide background concerning the HDG method and discuss the batch processing capabilities of the GPU.
1.1 Background
DG methods have seen considerable success in a variety of applications due to ease of implementation, the ability to use arbitrary unstructured geometries, and suitability for parallelization. The local support of the basis functions in DG methods allows for domain decomposition at the element level, which lends itself well to parallel implementations (e.g. [5, 6]). A number of recent works have demonstrated that DG methods are well-suited for implementation on a GPU [7, 8], for reasons of memory reference locality, regularity of access patterns, and dense arithmetic computations. Computational performance of DG methods is closely tied to polynomial order. As polynomial order increases in DG methods, memory bandwidth becomes less of a bottleneck as floating point arithmetic becomes the dominant factor. The increase in floating point operation throughput on GPUs has led to implementations of high-order DG methods on the GPU [9].
However, DG methods still suffer from, and are often criticized for, the need to employ significantly more degrees of freedom than other numerical methods [10], which results in a larger global linear system to solve. The introduction of the HDG method in Cockburn et al. [11] successfully resolved this issue by providing a method within the DG framework whose only globally coupled degrees of freedom are those of the scalar unknown on the borders of the elements. The HDG method uses a formulation which expresses all of the unknowns in terms of the numerical trace of the hybrid scalar variable $\lambda$. This greatly reduces the global linear system size while maintaining the properties that make DG methods amenable to parallelization. The elemental nature of DG methods has encouraged many to assert that they should be “easily parallelizable” (e.g. [4, 12, 13]). Due to the weak coupling between elements in the HDG method, less inter-element communication is needed, which is advantageous for scaling the method to a parallel implementation. The combination of a batch collection of local (elemental) problems to be computed and the reduced trace-based communication pattern of HDG conceptually makes this method well-suited to the fine-grained parallelism of streaming architectures such as modern GPUs. It is the local (elemental) batch nature of the decomposition which directs us to investigate the GPU implementation of the method. In the next subsection we provide an overview of batched operations, describe the current state of batch processing in existing software packages, and explain why it was necessary to create our own batch processing framework.
1.2 Batched Operations
Batch processing is the act of grouping a number of like tasks and computing them as a “batch” in parallel. This generally involves a large set of data whose elements can be processed independently of each other. Batch processing eliminates much of the overhead of iterative non-batched operations, and it is well-suited to GPUs because the SIMD architecture allows for high parallelization of large streams of data. Basic linear algebra subprograms (BLAS) are a common example of large scale operations that benefit significantly from batch processing. The HDG method specifically benefits from batched BLAS Level 2 (matrix–vector multiplication) and BLAS Level 3 (matrix–matrix multiplication) operations.
Finding efficient implementations for solving linear algebra problems is one of the most active areas of research in GPU computing. The NVIDIA CUBLAS [14] and AMD
APPML [15] are well-known solutions for BLAS functions on GPUs. While CUBLAS is specifically designed for the NVIDIA GPU architecture based on CUDA [14], the AMD solution using OpenCL [16] is a more general cross-platform solution for both GPU and multi-CPU architectures. CUBLAS has improved steadily, building on successive research efforts by Volkov [17], Dongarra [18, 19] and others, which has led to speed improvements of one to two orders of magnitude for many functions since the first release. In recent releases, CUBLAS and other similar packages have been providing batch processing support to improve efficiency on multi-element processing tasks. The support is, however, not complete: CUBLAS currently supports batch-mode processing only for BLAS Level 3, not for functions within BLAS Level 1 and BLAS Level 2.
These limitations of existing software prompted us to create our own batch processing framework for the GPU, which follows the same philosophy present in CUBLAS but is augmented with additional operations such as matrix–vector multiplication and matrix inversion. The framework is generalized such that it is not limited to linear algebra operations; however, given the finite element context of this paper, we restrict our focus to linear algebra operations.
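To illustrate the kind of primitive such a framework must supply (the batched BLAS Level 2 operation that CUBLAS lacked), the following is a minimal sketch of a batched matrix–vector multiplication kernel; the kernel name and data layout are our own illustration, not the interface of the framework described above.

```cpp
#include <cuda_runtime.h>

// Minimal sketch: y[b] = A[b] * x[b] for a batch of dense n x n matrices
// stored contiguously in row-major order. One thread block per matrix in
// the batch, one thread per output row.
__global__ void batchedGemvKernel(const double* A, const double* x,
                                  double* y, int n)
{
    int b   = blockIdx.x;    // which matrix/vector pair in the batch
    int row = threadIdx.x;   // which row of the output vector
    if (row < n) {
        const double* Ab = A + (size_t)b * n * n;  // this batch entry's matrix
        const double* xb = x + (size_t)b * n;      // this batch entry's input
        double sum = 0.0;
        for (int col = 0; col < n; ++col)
            sum += Ab[row * n + col] * xb[col];
        y[(size_t)b * n + row] = sum;
    }
}

// Host-side launch: one block per matrix, n threads per block (n <= 1024).
void batchedGemv(const double* dA, const double* dx, double* dy,
                 int n, int batchCount)
{
    batchedGemvKernel<<<batchCount, n>>>(dA, dx, dy, n);
}
```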
1.3 Outline
The paper is organized as follows. In Sect. 2 we present the mathematical formulation of the HDG method. In Sect. 3 we introduce all the necessary implementation building blocks: polynomial expansion bases, matrix form of the equations from Sect. 2, trace assembly and spread operators, etc. Sect. 4 and its subsections present details that are specific to GPU implementation of the HDG method. First we describe the implementation pipeline followed by the description of the local matrix generation in Sect. 4.1, the global system matrix assembly in Sect. 4.2, and the global solve and subsequent local solve in Sect. 4.3. In Sect. 5 we present numerical results which include a comparison of CPU and GPU implementations of HDG method. Finally, in Sect. 6 we conclude with potential directions for future research along with a summary of the results.
2 Mathematical Formulation of HDG
In this section we introduce the HDG method for the following elliptic diffusion problem with mixed Dirichlet and Neumann boundary conditions:
\begin{align}
-\nabla^2 u(x) &= f(x) \quad x \in \Omega, \\
u(x) &= g_D(x) \quad x \in \partial \Omega_D, \\
n \cdot \nabla u(x) &= g_N(x) \quad x \in \partial \Omega_N,
\end{align}
where $\partial \Omega_D \cup \partial \Omega_N = \partial \Omega$ and $\partial \Omega_D \cap \partial \Omega_N = \emptyset$. The formulation above can be generalized in many ways that can be treated in a similar manner, for example by considering a diffusion tensor given by a symmetric positive definite matrix, or by adding convection and reaction terms.
In Sects. 2.2–2.4 we define the HDG method. We start by presenting the global weak formulation in Sect. 2.2. In Sect. 2.3, we define the local problems: a collection of elemental operators that express the approximation inside each element in terms of the approximation at its border. Finally, we provide the global formulation with which we determine the approximation on the border of the elements in Sect. 2.4. The resulting global boundary system is significantly smaller than the full system one would solve without solving local problems first. Once the solution has been obtained on the boundaries of the elements, the primary solution over each element can be determined independently through a forward-application of the elemental operators. Before proceeding, however, we first define the partitioning of the domain and the finite element spaces in Sect. 2.1.
2.1 Partitioning of the Domain and the Spectral/hp Element Spaces
We begin by discretizing our domain. We assume \( \mathcal{T}(\Omega) \) is a two-dimensional tessellation of \( \Omega \). Let \( \Omega^e \in \mathcal{T}(\Omega) \) be a non-overlapping element within the tessellation such that if \( e_1 \neq e_2 \) then \( \Omega^{e_1} \cap \Omega^{e_2} = \emptyset \). By \( N_{el} \), we denote the number of elements (or cardinality) of \( \mathcal{T}(\Omega) \).
Let \( \partial \Omega^e \) denote the boundary of the element \( \Omega^e \) (i.e. \( \bar{\Omega}^e \setminus \Omega^e \)) and \( \partial \Omega^e_i \) denote an individual edge of \( \partial \Omega^e \) such that \( 1 \leq i \leq N^e_e \) where \( N^e_e \) denotes the number of edges of element \( e \). We then denote by \( \Gamma \) the set of boundaries \( \partial \Omega^e \) of all the elements \( \Omega^e \) of \( \mathcal{T}(\Omega) \). Finally, we denote by \( N_\Gamma \) the number of edges (or cardinality) of \( \Gamma \).
For simplicity, we assume that the tessellation \( \mathcal{T}(\Omega) \) consists of conforming elements. Note that the HDG formulation can be extended to non-conforming meshes; we do not consider the case of a non-conforming mesh in this work, as it would complicate the implementation while not enhancing the contribution of the paper in any way. We say that \( \Gamma^l \) is an interior edge of the tessellation \( \mathcal{T}(\Omega) \) if there are two elements of the tessellation, \( \Omega^e \) and \( \Omega^f \), such that \( \Gamma^l = \partial \Omega^e \cap \partial \Omega^f \) and the length of \( \Gamma^l \) is not zero. We say that \( \Gamma^l \) is a boundary edge of the tessellation \( \mathcal{T}(\Omega) \) if there is an element of the tessellation, \( \Omega^e \), such that \( \Gamma^l = \partial \Omega^e \cap \partial \Omega \) and the length of \( \Gamma^l \) is not zero.
As it will be useful later, let us define a collection of index mapping functions that allow us to relate the local edges of an element \( \Omega^e \), namely \( \partial \Omega^e_1, \ldots, \partial \Omega^e_{N^e_e} \), with the global edges of \( \Gamma \), that is, with \( \Gamma^1, \ldots, \Gamma^{N_\Gamma} \). Thus, if the \( j \)th edge of the element \( \Omega^e \), \( \partial \Omega^e_j \), is the \( l \)th edge \( \Gamma^l \) of the set of edges \( \Gamma \), we set \( \sigma(e, j) = l \) so that we can write \( \partial \Omega^e_j = \Gamma^{\sigma(e, j)} \).
Next, we define the finite element spaces associated with the partition \( \mathcal{T}(\Omega) \). To begin, for a two-dimensional problem we set
\[
V_h := \{ v \in L^2(\Omega) : v|_{\Omega^e} \in P(\Omega^e) \quad \forall \ \Omega^e \in \mathcal{T}(\Omega) \}, \tag{2a}
\]
\[
\Sigma_h := \{ \tau \in [L^2(\Omega)]^2 : \tau|_{\Omega^e} \in \Sigma(\Omega^e) \quad \forall \ \Omega^e \in \mathcal{T}(\Omega) \}, \tag{2b}
\]
\[
M_h := \{ \mu \in L^2(\Gamma) : \mu|_{\Gamma^l} \in P(\Gamma^l) \quad \forall \ \Gamma^l \in \Gamma \}, \tag{2c}
\]
where \( P(\Gamma^l) = S_P(\Gamma^l) \) is the polynomial space over the standard segment, \( P(\Omega^e) = T_P(\Omega^e) \) is the space of polynomials of total degree \( P \) defined on a standard triangular region and \( P(\Omega^e) = Q_P(\Omega^e) \) is the space of tensor-product polynomials of degree \( P \) on a standard quadrilateral region, defined as
\[
S_P(\Gamma^l) = \{ s^p : 0 \leq p \leq P; (x_1(s), x_2(s)) \in \Gamma^l; -1 \leq s \leq 1 \},
\]
\[
T_P(\Omega^e) = \{ \xi_1^p \xi_2^q : 0 \leq p + q \leq P; (x_1(\xi_1, \xi_2), x_2(\xi_1, \xi_2)) \in \Omega^e; -1 \leq \xi_1 + \xi_2 \leq 0 \},
\]
\[
Q_P(\Omega^e) = \{ \xi_1^p \xi_2^q : 0 \leq p, q \leq P; (x_1(\xi_1, \xi_2), x_2(\xi_1, \xi_2)) \in \Omega^e; -1 \leq \xi_1, \xi_2 \leq 1 \}.
\]
Similarly \( \Sigma(\Omega^e) = [T_P(\Omega^e)]^2 \) or \( \Sigma(\Omega^e) = [Q_P(\Omega^e)]^2 \). For curvilinear regions the expansions are only polynomials when mapped to a straight-sided standard region [20,21].
2.2 The HDG Method
The HDG method is defined in the following way. We start by rewriting the original problem (1) in auxiliary or mixed form as two first-order differential equations by introducing an auxiliary flux variable \( q = \nabla u \). This gives us:
\begin{align}
- \nabla \cdot q &= f(x) \quad x \in \Omega, \tag{3a} \\
q &= \nabla u(x) \quad x \in \Omega, \tag{3b} \\
u(x) &= g_D(x) \quad x \in \partial \Omega_D, \tag{3c} \\
q \cdot n &= g_N(x) \quad x \in \partial \Omega_N. \tag{3d}
\end{align}
The HDG method seeks an approximation to \((u, q)\), \((u^{\text{DG}}, q^{\text{DG}})\), in the space \(V_h \times \Sigma_h\), and determines it by requiring that
\[
\sum_{\Omega^e \in T(\Omega)} \int_{\Omega^e} (\nabla v \cdot q^{\text{DG}}) \, dx - \sum_{\Omega^e \in T(\Omega)} \int_{\partial \Omega^e} v (n^e \cdot \tilde{q}^{\text{DG}}) \, ds = \sum_{\Omega^e \in T(\Omega)} \int_{\Omega^e} v \, f \, dx, \tag{4a}
\]
\[
\sum_{\Omega^e \in T(\Omega)} \int_{\Omega^e} (w \cdot q^{\text{DG}}) \, dx = - \sum_{\Omega^e \in T(\Omega)} \int_{\Omega^e} (\nabla \cdot w) \, u^{\text{DG}} \, dx + \sum_{\Omega^e \in T(\Omega)} \int_{\partial \Omega^e} (w \cdot n^e) \, \tilde{u}^{\text{DG}} \, ds, \tag{4b}
\]
for all \((v, w) \in V_h(\Omega) \times \Sigma_h(\Omega)\), where the numerical traces \(\tilde{u}^{\text{DG}}\) and \(\tilde{q}^{\text{DG}}\) are defined in terms of the approximate solution \((u^{\text{DG}}, q^{\text{DG}})\).
2.3 Local Problems of the HDG Method
We begin by assuming that the function
\[
\lambda := \tilde{u}^{\text{DG}} \in M_h, \tag{5a}
\]
is known, for any element \(\Omega^e\), from the global formulation of the HDG method. The restriction of the HDG solution to the element \(\Omega^e\), \((u^e, q^e)\), is then the function in \(P(\Omega^e) \times \Sigma(\Omega^e)\) that satisfies the following equations:
\[
\int_{\Omega^e} (\nabla v \cdot q^e) \, dx - \int_{\partial \Omega^e} v (n^e \cdot \tilde{q}^e) \, ds = \int_{\Omega^e} v \, f \, dx, \tag{5b}
\]
\[
\int_{\Omega^e} (w \cdot q^e) \, dx = - \int_{\Omega^e} (\nabla \cdot w) \, u^e \, dx + \int_{\partial \Omega^e} (w \cdot n^e) \, \lambda \, ds, \tag{5c}
\]
for all \((v, w) \in P(\Omega^e) \times \Sigma(\Omega^e)\). To allow us to solve the above equations locally, the numerical trace of the flux is chosen in such a way that it depends only on \(\lambda\) and on \((u^e, q^e)\):
\[
\tilde{q}^e(x) = q^e(x) - \tau(u^e(x) - \lambda(x))n^e \quad \text{on } \partial \Omega^e, \tag{5d}
\]
where \(\tau\) is a positive function. For the HDG method, taking \(\tau\) to be positive ensures that the method is well defined. The results in [22–24] indicate that the best choice is to take \(\tau\) to be of order one. Note that \(\tau\) is a function defined on the set of borders of the elements of the discretization, and so it is allowed to differ per element and per edge. Thus, if we are dealing with the element whose global number is \(e\), we denote the value of \(\tau\) on the edge whose local number is \(i\) by \(\tau^{e,i}\).
2.4 The Global Formulation for $\lambda$
Here we denote the solution of (5b)–(5c) when $f = 0$ and when $\lambda = 0$ by $(U_\lambda, Q_\lambda)$ and $(U_f, Q_f)$, respectively, and define our approximation to be
$$(u^{\text{HDG}}, q^{\text{HDG}}) = (U_\lambda, Q_\lambda) + (U_f, Q_f).$$
Note that the HDG decomposition allows us to express $U_\lambda$ and $Q_\lambda$ in terms of $\lambda$, since they solve (5b)–(5c) with $f = 0$.
It remains to determine $\lambda$. To do so, we require that the boundary conditions be weakly satisfied \textit{and} that the normal component of the numerical trace of the flux $\tilde{q}$ given by (5d) be single valued. This renders the numerical trace \textit{conservative}, a highly valued property for this type of method; see Arnold et al. [25].
So, we say that $\lambda$ is the function in $\mathcal{M}_h$ such that
\begin{align}
\lambda &= P_h(g_D) \quad \text{on } \partial \Omega_D, \tag{6a} \\
\sum_{\Omega^e \in \mathcal{T}(\Omega)} \int_{\partial \Omega^e} \mu \, (\tilde{q} \cdot n) \, ds &= \int_{\partial \Omega_N} \mu \, g_N \, ds, \tag{6b}
\end{align}
for all $\mu \in \mathcal{M}_h$ such that $\mu = 0$ on $\partial \Omega_D$. Here $P_h$ denotes the $L^2$-projection into the space of restrictions to $\partial \Omega_D$ of functions of $\mathcal{M}_h$.
3 HDG Discrete Matrix Formulation and Implementation Considerations
In this section, to get a better appreciation of the implementation of the HDG approach, we consider the matrix representation of the HDG equations. The intention here is to introduce the notation and provide the basis for the discussion in the following sections. More details regarding the matrix formulation can be found in Kirby et al. [26].
We start by taking $u^e(x), q^e(x) = [q_1, q_2]^T$, and $\lambda^l(x)$ to be finite expansions in terms of the basis $\phi_j^e(x)$ for the expansions over elements and the basis $\psi_j^l(x)$ over the traces of the form:
$$u^e(x) = \sum_{j=1}^{N_u^e} \phi_j^e(x) \hat{u}^e[j], \quad q^e_k(x) = \sum_{j=1}^{N_q^e} \phi_j^e(x) \hat{q}^e_k[j], \quad \lambda^l(x) = \sum_{j=1}^{N_\lambda^l} \psi_j^l(x) \hat{\lambda}^l[j],$$
where $u^e(x): \Omega^e \to \mathbb{R}$, $q^e(x): \Omega^e \to \mathbb{R}^2$ and $\lambda^l(x): \Gamma^l \to \mathbb{R}$.
In our numerical implementation, we have applied a spectral/\textit{hp} element type discretization, which is described in detail in Karniadakis and Sherwin [20]. In this work we use the modified Jacobi polynomial expansions on a triangle in the form of generalized tensor products. This expansion was originally proposed by Dubiner [27] and is also detailed in Karniadakis and Sherwin [20], Sherwin and Karniadakis [21]. We have selected this basis for computational reasons: the tensorial nature of the basis, coupled with its decomposition into \textit{interior} and \textit{boundary} modes [20,21], benefits the HDG implementation. In particular, when computing a boundary integral of an elemental basis function, an edge basis function together with the edge-to-element mapping can be used. This fact will be further commented upon in the following sections.
3.1 Matrix Form of the Equations of the HDG Local Solvers
We can now define the matrix form of the local solvers. Following a standard Galerkin formulation, we set the scalar test functions $v^e$ to be represented by $\phi_i^e(x)$ where $i = 1, \ldots, N_u^e$, and let our vector test function $w^e$ be represented by $e_k \phi_i$ where $e_1 = [1, 0]^T$ and $e_2 = [0, 1]^T$. We next define the following matrices:
$$\mathbb{D}_k^e[i, j] = \left( \phi_i^e, \frac{\partial \phi_j^e}{\partial x_k} \right)_{\Omega^e}, \quad \mathbb{M}^e[i, j] = \left( \phi_i^e, \phi_j^e \right)_{\Omega^e},$$
$$\mathbb{E}_l^e[i, j] = \left( \phi_i^e, \phi_j^e \right)_{\partial \Omega_l^e}, \quad \mathbb{P}_{kl}^e[i, j] = \left( \phi_i^e, \phi_j^e n_k^e \right)_{\partial \Omega_l^e},$$
$$\mathbb{F}_l^e[i, j] = \left( \phi_i^e, \psi_j^{\sigma(e,l)} \right)_{\partial \Omega_l^e}, \quad \mathbb{R}_{kl}^e[i, j] = \left( \phi_i^e, \psi_j^{\sigma(e,l)} n_k^e \right)_{\partial \Omega_l^e}.$$
Note that we choose the trace expansion to match the expansion used along the edge of the elemental expansion, and that the local coordinates are aligned, that is, $\psi_j^{\sigma(e,l)}(s) = \phi_{k(j)}(s)$ (which is typical of the modified expansion basis defined earlier). With this choice, $\mathbb{E}_l^e$ contains the same entries as $\mathbb{F}_l^e$ and similarly $\mathbb{R}_{kl}^e$ contains the same entries as $\mathbb{P}_{kl}^e$.
After inserting the finite expansion of the trial functions into Eqs. (5b) and (5c), and using the definition of the flux given in Eq. (5d), the equations for the local solvers can be written in matrix form as:
$$\mathbb{A}^e \underline{v}^e + \mathbb{C}^e \underline{\hat{\lambda}}^e = \underline{w}^e, \tag{7}$$
where $f^e[i] = (\phi_i^e, f)_{\Omega^e}$, $\underline{w}^e = (f^e, 0, 0)^T$ and $\underline{v}^e = (\hat{u}^e, \hat{q}_1^e, \hat{q}_2^e)^T$ is the concatenation of all the unknowns into one vector.
In the case of a triangular element, $\underline{\hat{\lambda}}^e = \left( \hat{\lambda}^{\sigma(e,1)}, \hat{\lambda}^{\sigma(e,2)}, \hat{\lambda}^{\sigma(e,3)} \right)^T$ and the matrices $\mathbb{A}^e$ and $\mathbb{C}^e$ are defined as follows:
$$\mathbb{A}^e = \begin{pmatrix}
\sum_{l=1}^{N^e_e} \tau^{e,l} \mathbb{E}_l^e & -\mathbb{D}_1^e & -\mathbb{D}_2^e \\
(\mathbb{D}_1^e)^T & \mathbb{M}^e & 0 \\
(\mathbb{D}_2^e)^T & 0 & \mathbb{M}^e
\end{pmatrix}, \tag{8}$$
$$\mathbb{C}^e = \begin{pmatrix}
-\tau^{e,1} \mathbb{F}_1^e & -\tau^{e,2} \mathbb{F}_2^e & -\tau^{e,3} \mathbb{F}_3^e \\
-\mathbb{R}_{11}^e & -\mathbb{R}_{12}^e & -\mathbb{R}_{13}^e \\
-\mathbb{R}_{21}^e & -\mathbb{R}_{22}^e & -\mathbb{R}_{23}^e
\end{pmatrix}. \tag{9}$$
We note that each block matrix $\mathbb{A}^e$ is invertible since every local solver involves the DG discretization of an elemental domain with weakly enforced Dirichlet boundary conditions $\hat{\lambda}^e$; therefore each local elemental problem is well-posed.
In the following sections, in order to solve the local problems (7) (that is, to express $\underline{v}^e$ in terms of $\hat{\lambda}^e$), we will require the application of the inverse of $\mathbb{A}^e$. Instead of inverting the full-size matrix $\mathbb{A}^e$ we have chosen to form $(\mathbb{A}^e)^{-1}$ in a block-wise fashion, which involves the inversion of much smaller elemental matrices:
$$(\mathbb{A}^e)^{-1} = \begin{pmatrix}
\mathbb{Z}^e & \mathbb{Z}^e \mathbb{D}_1^e (\mathbb{M}^e)^{-1} & \mathbb{Z}^e \mathbb{D}_2^e (\mathbb{M}^e)^{-1} \\
-(\mathbb{M}^e)^{-1} (\mathbb{D}_1^e)^T \mathbb{Z}^e & (\mathbb{M}^e)^{-1}\left[\mathbb{I} - (\mathbb{D}_1^e)^T \mathbb{Z}^e \mathbb{D}_1^e (\mathbb{M}^e)^{-1}\right] & -(\mathbb{M}^e)^{-1} (\mathbb{D}_1^e)^T \mathbb{Z}^e \mathbb{D}_2^e (\mathbb{M}^e)^{-1} \\
-(\mathbb{M}^e)^{-1} (\mathbb{D}_2^e)^T \mathbb{Z}^e & -(\mathbb{M}^e)^{-1} (\mathbb{D}_2^e)^T \mathbb{Z}^e \mathbb{D}_1^e (\mathbb{M}^e)^{-1} & (\mathbb{M}^e)^{-1}\left[\mathbb{I} - (\mathbb{D}_2^e)^T \mathbb{Z}^e \mathbb{D}_2^e (\mathbb{M}^e)^{-1}\right]
\end{pmatrix}, \tag{10}$$
where
\[
\mathbb{Z}^e = \left( \sum_{l=1}^{N^e_e} \tau^{e,l} \mathbb{E}^e_l + \mathbb{D}^e_1 (\mathbb{M}^e)^{-1} (\mathbb{D}^e_1)^T + \mathbb{D}^e_2 (\mathbb{M}^e)^{-1} (\mathbb{D}^e_2)^T \right)^{-1} \tag{11}
\]
and we have explicitly used the fact that \( M^e = (M^e)^T \) and \( Z^e = (Z^e)^T \).
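The origin of $\mathbb{Z}^e$ can be made explicit by block elimination (a short derivation we add here for clarity; $\underline{b} = (\underline{b}_0, \underline{b}_1, \underline{b}_2)^T$ denotes a generic right-hand side partitioned conformally with $\underline{v}^e$): solving the second and third block rows of $\mathbb{A}^e \underline{v}^e = \underline{b}$ for $\hat{q}^e_k$ and substituting into the first row gives
\[
\hat{q}^e_k = (\mathbb{M}^e)^{-1}\left(\underline{b}_k - (\mathbb{D}^e_k)^T \hat{u}^e\right), \qquad
\left( \sum_{l=1}^{N^e_e} \tau^{e,l} \mathbb{E}^e_l + \sum_{k=1}^{2} \mathbb{D}^e_k (\mathbb{M}^e)^{-1} (\mathbb{D}^e_k)^T \right) \hat{u}^e = \underline{b}_0 + \sum_{k=1}^{2} \mathbb{D}^e_k (\mathbb{M}^e)^{-1} \underline{b}_k,
\]
so the matrix in parentheses is exactly $(\mathbb{Z}^e)^{-1}$, and the first block row of $(\mathbb{A}^e)^{-1}$ in (10) follows immediately.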
### 3.2 Matrix Form of the Global Equation for \( \lambda \)
Using the matrices from the previous section we can write the transmission condition (6b) in a similar matrix form. First we introduce the matrices:
\[
\tilde{F}^{l,e}[i, j] = \left\langle \psi^l_i, \phi^e_j \right\rangle_{\Gamma^l}, \quad \tilde{F}^{l,e}_k[i, j] = \left\langle \psi^l_i, \phi^e_j n^e_k \right\rangle_{\Gamma^l}, \quad \tilde{G}^l[i, j] = \left\langle \psi^l_i, \psi^l_j \right\rangle_{\Gamma^l}.
\]
After defining \( g_N^l[i] = \left\langle g_N, \psi^l_i \right\rangle_{\Gamma^l \cap \partial \Omega_N} \), the transmission condition (6b) for a single edge can be written as:
\[
\mathbb{B}^e \underline{v}^e + \mathbb{G}^e \underline{\hat{\lambda}}^e + \mathbb{B}^f \underline{v}^f + \mathbb{G}^f \underline{\hat{\lambda}}^f = \underline{g}_N^l, \tag{12}
\]
where matrices \( B^e \) and \( G^e \) are defined as follows:
\[
\mathbb{B}^e = \begin{pmatrix}
-\tau^{e,1} (\tilde{F}^e_1)^T & (\tilde{F}^e_{11})^T & (\tilde{F}^e_{21})^T \\
-\tau^{e,2} (\tilde{F}^e_2)^T & (\tilde{F}^e_{12})^T & (\tilde{F}^e_{22})^T \\
-\tau^{e,3} (\tilde{F}^e_3)^T & (\tilde{F}^e_{13})^T & (\tilde{F}^e_{23})^T
\end{pmatrix}, \tag{13}
\]
\[
\mathbb{G}^e = \begin{pmatrix}
\tau^{e,1} \tilde{G}^{\sigma(e,1)} & 0 & 0 \\
0 & \tau^{e,2} \tilde{G}^{\sigma(e,2)} & 0 \\
0 & 0 & \tau^{e,3} \tilde{G}^{\sigma(e,3)}
\end{pmatrix}. \tag{14}
\]
Here we are assuming that \( l = \sigma(e, i) = \sigma(f, j) \), that is, that the elements \( e \) and \( f \) have the common internal edge \( \Gamma^l \). While forming matrix \( B^e \) we use the following two identities which relate previously defined matrices:
\[
\tilde{F}^e_l = \left( \tilde{F}^{\sigma(e,l), e} \right)^T, \quad \tilde{F}^e_{kl} = \left( \tilde{F}^{\sigma(e,l), e}_k \right)^T
\]
We see that the transmission condition can be constructed from elemental contributions. In the next section, we show how to use our elemental local solvers given by Eqs. (7) and (12) to obtain a matrix equation for \( \lambda \) only.
### 3.3 Assembling the Transmission Condition from Elemental Contributions
The last component we require to form the global trace system is the elemental trace spreading operator \( A^e_{HDG} \) that will copy the global trace space information into the local (elemental) storage denoted by \( \hat{\lambda}^e \) in Sects. 3.1 and 3.2. Let \( \Delta^l \) denote the vector of degrees of freedom on the edge \( \Gamma^l \) and let \( \Delta \) be the concatenation of these vectors for all the edges of the triangulation. The size of \( \Delta \) is therefore
\[
N_\lambda = \sum_{l \in \Gamma} N^l_\lambda,
\]
where \( N^l_\lambda \) is the number of degrees of freedom of \( \lambda \) on the edge \( \Gamma^l \).
We define the elemental trace space spreading operator $\mathcal{A}_\text{HDG}^e$ as a matrix of size $\left(\sum_{l=1}^{N^e_e} N_\lambda^{\sigma(e,l)}\right) \times N_\lambda$ which “spreads” or scatters the unique trace space values to their local edge vectors. For each element $e$, which consists of $N^e_e$ edges, let $\hat{\Delta}^{e,l}$ denote the local copy of the trace-space information, as portrayed in Fig. 1.
With this notation in place we can replace $\underline{\hat{\lambda}}^e$ by $\mathcal{A}_\text{HDG}^e \underline{\Delta}$ in the local solver Eq. (7):
$$\mathbb{A}^e \underline{v}^e + \mathbb{C}^e \mathcal{A}_\text{HDG}^e \underline{\Delta} = \underline{w}^e \tag{15}$$
We can similarly write the transmission conditions (12) between interfaces as:
$$\sum_{e=1}^{|T(\Omega)|} (\mathcal{A}_\text{HDG}^e)^T \left[ \mathbb{B}^e \underline{v}^e + \mathbb{G}^e \mathcal{A}_\text{HDG}^e \underline{\Delta} \right] = \underline{g}_N \tag{16}$$
where the sum over elements, along with the left application of the transpose of the spreading operator, acts to “assemble” (sum up) the elemental contributions corresponding to each trace space edge, and where $\underline{g}_N$ denotes the concatenation of the individual edge Neumann conditions $g_N^l$.
Manipulating Eq. (15) to solve for $\underline{v}^e$ and inserting it into Eq. (16) yields:
$$\sum_{e=1}^{|T(\Omega)|} (\mathcal{A}_\text{HDG}^e)^T \left[ \mathbb{B}^e (\mathbb{A}^e)^{-1} (\underline{w}^e - \mathbb{C}^e \mathcal{A}_\text{HDG}^e \underline{\Delta}) + \mathbb{G}^e \mathcal{A}_\text{HDG}^e \underline{\Delta} \right] = \underline{g}_N$$
which can be reorganized to arrive at a matrix equation for $\lambda$:
$$\mathbf{K} \underline{\Delta} = \mathbf{F}, \tag{17}$$
where
$$\mathbf{K} = \sum_{e=1}^{|T(\Omega)|} (\mathcal{A}_\text{HDG}^e)^T \mathbb{K}^e \mathcal{A}_\text{HDG}^e = \sum_{e=1}^{|T(\Omega)|} (\mathcal{A}_\text{HDG}^e)^T \left[ \mathbb{G}^e - \mathbb{B}^e (\mathbb{A}^e)^{-1} \mathbb{C}^e \right] \mathcal{A}_\text{HDG}^e.$$
and
\[ \mathbf{F} = \underline{g}_N - \sum_{e=1}^{|T(\Omega)|} (\mathcal{A}_{HDG}^e)^T \mathbb{B}^e (\mathbb{A}^e)^{-1} \underline{w}^e. \tag{18} \]
We observe that \( K \) is constructed elementally through the sub-matrices \( K^e \) which can also be considered as the Schur complement of a larger matrix system which consists of combining Eqs. (15) and (16). We would like to remark that the “assembly” in this section is used in the sense of an operator: system matrix \( K \) does not necessarily need to be formed explicitly but can also be stored as a collection of elemental matrices and corresponding mappings.
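To make the operator view concrete, the following host-side sketch (our illustration, with hypothetical container layouts; a GPU version would follow the batched pattern of Sect. 4) applies $\mathbf{K}$ to a vector as $\underline{y} = \sum_e (\mathcal{A}^e_{HDG})^T \mathbb{K}^e \mathcal{A}^e_{HDG} \underline{x}$ without ever forming $\mathbf{K}$, as an iterative solver would require:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: apply K in operator form, y = sum_e A_e^T (Ke * (A_e x)).
// Ke[e] holds the dense nLoc x nLoc elemental matrix in row-major order;
// map[e][i] is the global index of local trace dof i of element e, so the
// gather realizes A_e and the indexed accumulation realizes A_e^T.
void applyK(const std::vector<std::vector<double>>& Ke,
            const std::vector<std::vector<int>>&    map,
            const std::vector<double>& x, std::vector<double>& y)
{
    std::fill(y.begin(), y.end(), 0.0);
    for (std::size_t e = 0; e < Ke.size(); ++e) {
        const std::size_t nLoc = map[e].size();
        std::vector<double> xl(nLoc);
        for (std::size_t i = 0; i < nLoc; ++i)      // gather: xl = A_e x
            xl[i] = x[map[e][i]];
        for (std::size_t i = 0; i < nLoc; ++i) {    // y += A_e^T (Ke * xl)
            double sum = 0.0;
            for (std::size_t j = 0; j < nLoc; ++j)
                sum += Ke[e][i * nLoc + j] * xl[j];
            y[map[e][i]] += sum;
        }
    }
}
```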
4 Implementation Pipeline
We formulated our approach as a pipeline which illustrates the division of tasks between the CPU (host) and the GPU (Fig. 2). Initial setup steps are handled by the CPU, after which the majority of the work is performed on the GPU; finally the resulting elemental solution is passed back to the CPU. Initially, the host parses the mesh file to determine the number of elements, the forcing function, and the mesh configuration. From this information the CPU can generate the data set that is required by the GPU to compute the finite element solution. This is followed by the generation of the \( \mathbb{E}^e_l, (\mathbb{M}^e)^{-1}, \mathbb{D}^e_k \) elemental matrices, the edge-to-element mappings, the global edge permutation lists and the right-hand-side vector \( \mathbf{F} \). This data is then transferred to the GPU.
The GPU handles the bulk of the operations in our HDG implementation. The first step is the construction of the local elemental matrices through batch processing. The local elemental matrices \( Z^e, C^e, B^e, U^e, \) and \( Q^e_k \) are formed from the mass and derivative matrices passed over by the host.
To solve the global trace system we require the assembly of the global matrix \( K \) from the elemental matrices \( K^e \) using the assembly process discussed in Sect. 3.3. We formulate the construction of the elemental \( K^e \) matrices as follows:
\[
K^e = G^e - B^e \begin{bmatrix} U^e \\ Q^e_0 \\ Q^e_1 \end{bmatrix}.
\]
where \( \mathbb{U}^e \) and \( \mathbb{Q}_k^e \) are formulated as:
\[
\mathbb{U}^e = -\begin{bmatrix} \mathbb{I} & 0 & 0 \end{bmatrix} (\mathbb{A}^e)^{-1} \mathbb{C}^e = -\mathbb{Z}^e \begin{bmatrix} \mathbb{I} & \mathbb{D}_1^e (\mathbb{M}^e)^{-1} & \mathbb{D}_2^e (\mathbb{M}^e)^{-1} \end{bmatrix} \mathbb{C}^e,
\]
\[
\mathbb{Q}_0^e = -\begin{bmatrix} 0 & \mathbb{I} & 0 \end{bmatrix} (\mathbb{A}^e)^{-1} \mathbb{C}^e, \qquad \mathbb{Q}_1^e = -\begin{bmatrix} 0 & 0 & \mathbb{I} \end{bmatrix} (\mathbb{A}^e)^{-1} \mathbb{C}^e.
\]
Note that the action of \((\mathbb{A}^e)^{-1}\) can be evaluated using definition (10) and so \((\mathbb{A}^e)^{-1}\) does not need to be constructed directly. The matrices in the first block-row of \((\mathbb{A}^e)^{-1}\) can be reused in the formulation of the second and third block-rows, thereby reducing the computational cost of constructing the matrix.
We next determine the trace space solution \( \underline{\Delta} = \mathbf{K}^{-1} \mathbf{F} \) where, as was demonstrated in Kirby et al. [26], \( \mathbf{F} \) can be evaluated using \( \mathbb{U}^e \) as
\[
F = g_N + \sum_{e=1}^{|\mathcal{T}(\Omega)|} (A_{HDG}^e)^T (\mathbb{U}^e)^T f^e
\]
Finally we recover the elemental trace solution \( \hat{\lambda}^e = \mathcal{A}_{HDG}^e \underline{\Delta} \) and obtain the elemental primitive solution \( \hat{u}^e \) from Eq. (7) as
\[
\hat{u}^e = \mathbb{Z}^e f^e + \mathbb{U}^e \hat{\lambda}^e.
\]
4.1 Building the Local Problems on the GPU
The local matrices are created using a batch processing scheme. The generation of the local matrices could be conducted in a matrix-free manner, but we choose to construct the matrices explicitly to take advantage of batched BLAS Level 3 matrix functions; we have found this to be a more computationally efficient approach on the GPU. Each step of the local matrix generation process is executed as a batch operating on all elements in the mesh. The batched matrix operations assign a thread block to each elemental matrix. In most cases a thread is assigned to each entry of a matrix, and the entries are processed concurrently by the GPU in the various assembly and matrix operations.
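For example, the elemental matrix–matrix products can be dispatched over the whole mesh in a single call to the batched BLAS Level 3 interface of CUBLAS. The sketch below (with illustrative names and sizes; the pointer arrays are assumed to be prepared elsewhere) computes one such product, $P^e = -\mathbb{B}^e V^e$ for all $e$, where $V^e$ stands for a concatenation of blocks such as $\mathbb{U}^e$ and $\mathbb{Q}^e_k$:

```cpp
#include <cublas_v2.h>

// Sketch: P[e] = -B[e] * V[e] for every element in one batched call.
// Each B[e] is m x k and each V[e] is k x n, both column-major; dB, dV
// and dP are device arrays of device pointers, one entry per element.
void batchedLocalProduct(cublasHandle_t handle,
                         const double** dB, const double** dV, double** dP,
                         int m, int n, int k, int batchCount)
{
    const double alpha = -1.0;   // absorbs the minus sign of -B^e V^e
    const double beta  = 0.0;
    cublasDgemmBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                       m, n, k, &alpha,
                       dB, m,    // leading dimension of each B[e]
                       dV, k,    // leading dimension of each V[e]
                       &beta,
                       dP, m,    // leading dimension of each P[e]
                       batchCount);
}
```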
Before we proceed to discuss the details of the local matrix generation we would like to make note of a certain implementation detail: the use of the edge-to-element map. As was previously mentioned in Sect. 3.1, we choose the trace expansion to match the elemental expansion along the element’s edge. This choice allows us to use edge expansions together with the edge-to-element map to generate some of the matrices in a more efficient manner. For example, in Eq. (11) we use the edge-to-element map to form the sparse matrix \( \mathbb{E}_l^e[i, j] = \left( \phi_i^e, \phi_j^e \right)_{\partial \Omega_l^e} \) from the entries of the dense edge mass matrix \( \tilde{G}^{\sigma(e,l)}[m, n] = \left\langle \psi_m^{\sigma(e,l)}, \psi_n^{\sigma(e,l)} \right\rangle_{\Gamma^{\sigma(e,l)}} \). This approach is also used in the formation of the \( \mathbb{P}_{kl}^e \), \( \mathbb{F}_l^e \) and \( \mathbb{R}_{kl}^e \) matrices.
The goal of the local matrix generation process (steps B1 and B2) is to form matrices \( K^e \) for every element in the mesh. In order to facilitate this, the following matrices must be generated: \( Z^e \), block entries of \((\mathbb{A}^e)^{-1}\), \( C^e \), \( B^e \) and \( G^e \). The \( Z^e \) and \( U^e \) matrices will be saved for later computations while the rest of the matrices are discarded after use to reduce memory constraints.
The construction process first requires the \( Z^e \) matrices to be formed from the values of the elemental mass and derivative matrices. The matrices \( M^e \), \( D_k^e \) and \( E_l^e \) are utilized in the formation of the \((Z^e)^{-1}\) matrices (Eq. 11), which is then inverted in a batch matrix inversion process using Gaussian elimination. Pivoting is not necessary due to the symmetry of the matrices. Next, the block entries of the \((\mathbb{A}^e)^{-1}\) matrices are formed from combinations of the \( Z^e \), \( D_k^e \) and \((M^e)^{-1}\) matrices (definition 10). The entries from the first block-row of
$(\mathbb{A}^e)^{-1}$ are used in the formulation of the second and third block-rows and do not need to be explicitly recomputed. The $\mathbb{U}^e$ and $\mathbb{Q}^e_k$ elemental matrices are created through the multiplication of the block rows of $(\mathbb{A}^e)^{-1}$ and the matrix $\mathbb{C}^e$. Note that matrix $\mathbb{B}^e = (\mathbb{C}^e)^T \tilde{I}$, where
$$\tilde{I} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix},$$
which simplifies the formation process of the $C^e$ and $B^e$ matrices.
The final step of the local matrix generation involves constructing the local $K^e$ matrices, which are formed from the explicit matrix–matrix multiplication of the $B^e$ matrices with the concatenated $U^e$ and $Q^e_k$ matrices. This product is subtracted from the diagonal $G^e$ matrix, which is not formed explicitly, to form $K^e$. Note that the $M^e$, $Z^e$, and $K^e$ matrices are symmetric, which halves the required storage space. The elemental operations at each step are independent of each other, so the batches can be broken up into smaller tiles to conform to memory constraints or to be distributed across multiple processing units. This process results in the local $K^e$ matrices being generated for each element; these are then used to assemble the global $K$ matrix.
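The batched inversion referred to above can be sketched as a Gauss–Jordan elimination kernel with one thread block per matrix. This is our own minimal illustration, and it assumes the matrices are symmetric positive definite (hence no pivoting, as noted above) and small enough ($n \leq 32$) that one thread per row and the augmented system in shared memory suffice:

```cpp
#include <cuda_runtime.h>

// Sketch: in-place batched inversion of n x n matrices (row-major,
// stored contiguously) by Gauss-Jordan elimination without pivoting.
// One thread block per matrix, one thread per row; the augmented
// system [A | I] lives in shared memory.
__global__ void batchedInvertKernel(double* A, int n)
{
    extern __shared__ double aug[];        // n rows by 2n columns
    double* Ab = A + (size_t)blockIdx.x * n * n;
    int r = threadIdx.x;                   // this thread's row

    // Load [A | I] into shared memory.
    for (int c = 0; c < n; ++c) {
        aug[r * 2 * n + c]     = Ab[r * n + c];
        aug[r * 2 * n + n + c] = (r == c) ? 1.0 : 0.0;
    }
    __syncthreads();

    // Eliminate column p in every row except the pivot row itself.
    for (int p = 0; p < n; ++p) {
        double scale = aug[r * 2 * n + p] / aug[p * 2 * n + p];
        __syncthreads();
        if (r != p)
            for (int c = 0; c < 2 * n; ++c)
                aug[r * 2 * n + c] -= scale * aug[p * 2 * n + c];
        __syncthreads();
    }

    // The left half is now diagonal: scale each row and write back the
    // right half, which holds the inverse.
    double d = aug[r * 2 * n + r];
    for (int c = 0; c < n; ++c)
        Ab[r * n + c] = aug[r * 2 * n + n + c] / d;
}

// Host launch: one block per matrix, n threads, 2*n*n doubles of shared memory.
void batchedInvert(double* dA, int n, int batchCount)
{
    batchedInvertKernel<<<batchCount, n, 2 * n * n * sizeof(double)>>>(dA, n);
}
```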
4.2 Assembling the Local Problems on the GPU
In this section we describe the assembly of the global linear system matrix $K$ from the elemental matrices $K^e$. A typical CG or DG element-based approach to the assembly process, when parallelized, has to employ atomic operations to avoid race conditions. In this paper we propose an edge-based assembly process that eliminates the need for expensive GPU atomic operations and avoids race conditions by using reduction operations. The reduction list is generated with a sorting operation, which is relatively efficient on GPUs. This lock-free approach is better suited to the SIMD architecture of the GPU, where each thread acts on a separate edge in the mesh. In this way we avoid race conditions during the assembly process while still maximizing throughput on the GPU.
Next, we describe the proposed method for triangular meshes. Note that this approach can be straightforwardly extended to quadrilateral meshes. In order to evaluate a single entry of the global matrix $K$ we need to determine the indices of entries to which local matrices $K^e$ will be assembled. To do this, we need to know which element(s) a given edge $l_i$ belongs to. Given the input triangle list that stores the global edge indices of each triangle, we can generate the edge neighbor list that stores the neighboring triangle indices for each edge. Having the edge neighbor list, we assign the assembly task of each row of $K$ to a thread. Each thread uses the edge neighbor list and the triangle list to find the element index $e$ as well as the entry indices of $K^e$ to fetch the appropriate data and perform the assembly operation on the corresponding row of $K$.
To give a better illustration of the assembly process, let us consider a simple mesh displayed in Fig. 3. This mesh consists of two triangles $e_0$ and $e_1$ and five edges: $l_0$ through $l_4$. To further simplify our example, let us assume that we have only one degree of freedom per edge. $K^e$ is therefore a $3 \times 3$ matrix and $K$ is a $5 \times 5$ matrix. Element $e_0$ consists of edges $l_0 = \sigma(e_0, 0)$, $l_1 = \sigma(e_0, 1)$ and $l_2 = \sigma(e_0, 2)$ and element $e_1$ consists of edges $l_0 = \sigma(e_1, 0)$, $l_4 = \sigma(e_1, 1)$ and $l_3 = \sigma(e_1, 2)$.
For our example, the triangle list would be $\{0,1,2,0,4,3\}$. Using it we can create an edge neighbor list $\{0,0,0,1,1,1\}$ that stores the index of a triangle to which each edge from the first list belongs. Next we sort the triangle list by edge index and permute the edge neighbor list.
according to the sorting. Now the triangle list and edge neighbor list are \{0,0,1,2,3,4\} and \{0,1,0,0,1,1\}, respectively. These new lists indicate that edge $l_0$ neighbors triangles $e_0$ and $e_1$, that edge $l_1$ neighbors only one triangle, $e_0$, etc. Figure 4 demonstrates the assembly process of the 0th row (corresponding to the $l_0$ edge) of the $\mathbf{K}$ matrix from the entries of elemental matrices $\mathbb{K}^{e_0}$ and $\mathbb{K}^{e_1}$.
In practice, the global matrix $\mathbf{K}$ is $N_\lambda^l N_\Gamma \times N_\lambda^l N_\Gamma$ and usually sparse. For triangular meshes, each row of $\mathbf{K}$ has at most $5N_\lambda^l$ non-zero values, and all the interior edges (edges that do not fall on the Dirichlet boundary) have exactly $5N_\lambda^l$ non-zero values. The fact that the number of non-zero entries per row of $\mathbf{K}$ is constant (apart from the rows corresponding to the Dirichlet boundary edges) determines our choice of the Ellpack (ELL) sparse matrix data structure [28] to store $\mathbf{K}$. The ELL data structure contains two arrays, one of column indices and one of matrix values, both of size $N_\lambda^l N_\Gamma \times 5N_\lambda^l$. The former stores the column indices of the non-zero values in the matrix, and the latter stores the non-zero values themselves. For rows that have fewer than $5N_\lambda^l$ non-zero values, sentinel values (usually $-1$) are stored in the column-indices array. Each thread, which is in charge of assembling one row of $\mathbf{K}$, locates its neighboring triangle indices from the edge neighbor list and then obtains the edge indices of these neighboring triangles from the triangle list. The edge indices are then written into the column-indices array of the ELL matrix. Lastly, the local matrix values of the neighboring triangles are assembled into $\mathbf{K}$.
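Once assembled in this format, $\mathbf{K}$ can be traversed with simple fixed-width row loops. The following sketch of an ELL matrix–vector product kernel illustrates the layout; the column-major storage of the two arrays (entry $k$ of row $r$ at offset $k \cdot \mathrm{numRows} + r$), chosen so that consecutive threads make coalesced accesses, is our assumption rather than a detail stated above:

```cpp
#include <cuda_runtime.h>

// Sketch: y = K * x for an ELL matrix with a fixed row width.
// colIdx and vals are numRows x rowWidth arrays stored column-major
// (entry k of row r at offset k * numRows + r) so that consecutive
// threads access consecutive addresses; -1 marks sentinel padding.
__global__ void ellSpmvKernel(const int* colIdx, const double* vals,
                              const double* x, double* y,
                              int numRows, int rowWidth)
{
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r < numRows) {
        double sum = 0.0;
        for (int k = 0; k < rowWidth; ++k) {
            int c = colIdx[k * numRows + r];
            if (c >= 0)                    // skip padded entries
                sum += vals[k * numRows + r] * x[c];
        }
        y[r] = sum;
    }
}
```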
The global assembly process can be summarized as follows:
**Data:** Triangle List TL
//Generate edge neighbor list EL;
**for** $i \leftarrow 0$ **to** NumTriangles−1 **do**
EL[i*3] $\leftarrow i$;
EL[i*3+1] $\leftarrow i$;
EL[i*3+2] $\leftarrow i$;
**end**
//Sort triangle list by edge index
TL $\leftarrow$ Sort(TL);
//Permute edge neighbor list according to sorted order of triangle list
EL $\leftarrow$ Permute(EL, TL);
//Compute the Edge Count List (ECL) through reduction by key, which is the number of neighboring triangles on each edge
ECL $\leftarrow$ ReduceByKey(TL);
//Calculate a prefix sum on the reduced list (RL) to find the offsets in the sorted triangle list
RL $\leftarrow$ Scan(ECL);
Local-to-Global Mapping(TL, RL, EL);
**Algorithm 1:** Global Assembly
**Data:** TL, RL, EL
**foreach** edge $e$ **do**
//Locate the neighboring triangles of edge $e$ from the permuted edge neighbor list and offset list
Tris $\leftarrow$ Neighbors(EL, RL, $e$);
TEdges $\leftarrow$ Edges(Tris);
//Obtain the global indices (GI) of the edges of these triangles from the triangle list
GI $\leftarrow$ TL[Tris*3 + 0,1,2];
//Store the global indices in the column-indices array (CI)
CI $\leftarrow$ GI;
//Compute the local indices (LI) of each edge in the neighboring triangles
LI $\leftarrow$ Local Indices(Tris);
//Locate the entries in the local matrices of the corresponding neighboring triangles according to the local indices, and add those entries to the corresponding locations in the global K matrix
**foreach** index $i$ **in** LI **do**
K(Map($i$)) $\leftarrow$ K(Map($i$)) + $K^e[i]$;
**end**
**end**
**Algorithm 2:** Local-to-Global Mapping
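The list-construction steps of Algorithm 1 map directly onto standard parallel primitives; the following is a minimal sketch using the Thrust library mentioned in Sect. 5 (variable names follow the pseudocode above, but the code is our illustration rather than the implementation used for the results):

```cpp
#include <thrust/device_vector.h>
#include <thrust/iterator/constant_iterator.h>
#include <thrust/reduce.h>
#include <thrust/scan.h>
#include <thrust/sequence.h>
#include <thrust/sort.h>
#include <thrust/transform.h>

// Slot k of the flattened triangle list belongs to triangle k / 3.
struct SlotToTriangle {
    __host__ __device__ int operator()(int k) const { return k / 3; }
};

// Sketch of Algorithm 1's list construction. TL holds the three global
// edge indices of each triangle (3 * numTriangles entries in total).
void buildAssemblyLists(thrust::device_vector<int>& TL,
                        thrust::device_vector<int>& EL,   // edge neighbor list
                        thrust::device_vector<int>& ECL,  // neighbor counts
                        thrust::device_vector<int>& RL,   // per-edge offsets
                        int numTriangles, int numEdges)
{
    // EL[3*i .. 3*i+2] = i : each slot remembers its triangle index.
    EL.resize(3 * numTriangles);
    thrust::sequence(EL.begin(), EL.end());
    thrust::transform(EL.begin(), EL.end(), EL.begin(), SlotToTriangle());

    // Sort by edge index, permuting the neighbor list along with the keys.
    thrust::sort_by_key(TL.begin(), TL.end(), EL.begin());

    // Reduction by key counts the neighboring triangles of each edge.
    ECL.resize(numEdges);
    thrust::device_vector<int> uniqueEdges(numEdges);
    thrust::reduce_by_key(TL.begin(), TL.end(),
                          thrust::constant_iterator<int>(1),
                          uniqueEdges.begin(), ECL.begin());

    // An exclusive scan turns the counts into offsets into the sorted lists.
    RL.resize(numEdges);
    thrust::exclusive_scan(ECL.begin(), ECL.end(), RL.begin());
}
```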
**Remark 1** We would like to stress the importance of the edge-only inter-element connectivity provided by the HDG method. This property ensures that the sparsity (number of nonzero entries per row) of the global linear system matrix depends only on the element types used and not on the mesh structure (e.g. vertex degree). The other benefit provided by the HDG method is the ability to assemble the system matrix by-edges as opposed to by-elements,
which removes the need for costly atomic assembly operations. Now, if we look at the CG method, elements are connected through both edge degrees of freedom and vertex degrees of freedom. This through-the-vertex element connectivity makes it infeasible to use the compact ELL system matrix representation for a general mesh and makes it hard to avoid atomic operations in the assembly process.
**Remark 2** We note that there are multiple ways to address the issue of evaluating the discrete system. A full global system need not be assembled in some cases. One can use a local matrix approach or a global matrix approach. In the local matrix approach, a local operator matrix is applied to each elemental matrix. This allows for on the fly assembly without the need to construct a global matrix system. The global matrix approach assembles a global matrix system from the local elemental contributions. Vos et al. [29] describe these approaches in detail for the continuous Galerkin (FEM) method. In either case, information from multiple elements must be used to compute any given portion of the final system. This requires the use of some synchronized ordering within the mapping process. There are several methods for handling this ordering. One such method is to use atomic operations to ensure that each element in the final system is updated without race conditions. Another method is to use asynchronous ordering and pass the updates to a communication interface which handles the updates in a synchronized fashion. This is demonstrated in the work by Goddeke et al. [30,31], in which they use MPI to handle the many-to-one mapping through asynchronous ordering. In either case a many-to-one mapping exists and a synchronized ordering must be used to prevent race conditions. We chose to use the global approach to compare our results to the previous work by Kirby et al. [26], in which the authors also used the global approach.
### 4.3 Trace Space Solve and Local Problem Spreading on the GPU
The final steps of the process construct the elemental primitive solution $\hat{u}^e$ (B5 and B6 of the GPU pipeline). This requires retrieving the elemental solution from the trace solution. We form the element-wise vectors of local $\hat{\lambda}^e$ coefficients by scattering the coefficients of the global trace solution $\underline{\Delta}$ produced by the sparse solve. The values are scattered back out to the local vectors using the edge to triangle list. Each interior edge is scattered to two elements and each boundary edge to one element. This is equivalent to the operation performed by the trace space spreading operator $\mathcal{A}_{HDG}^e$, which we apply in a matrix-free manner.
Obtaining the elemental solution involves two batched matrix–vector multiplications across all elements followed by a vector–vector sum:
$$\hat{u}^e = Z^e f^e + U^e \hat{\lambda}^e.$$
After the local element modes are computed they are transferred back to the CPU as a vector grouped by element.
### 5 Numerical Results
In this section we discuss the performance of the GPU implementation of the HDG method using the Helmholtz equation as a test case. At the end of the section we also provide a short discussion of a GPU implementation of the CG method based on the preliminary data collected. For verification and runtime comparison we use the CPU implementation of the Helmholtz solver within the Nektar++ framework v3.2 [32]. Nektar++ is a freely-available
highly-optimized finite element framework. The code is robust and efficient, and it makes our CPU test results easy to reproduce. Our implementation also takes advantage of the GPU parallel primitives in the CUDA Cusp and Thrust libraries [33, 34]. All the tests referenced in this section were performed on a machine with an NVIDIA Tesla M2090 GPU, 128 GB of memory, and an Intel Xeon E5630 CPU running at 2.53 GHz. The system ran openSUSE 12.1 with CUDA runtime version 4.2.
The numerical simulation considers the Helmholtz equation
\[
\nabla^2 u(x) - \lambda u(x) = f(x) \quad x \in \Omega,
\]
\[
u(x) = g_D(x) \quad x \in \partial \Omega_D,
\]
where \( \lambda = 1 \), \( \Omega = [0, 1]^2 \) and \( f(x) \) and \( g_D(x) \) are selected to give an exact solution of the form:
\[
u(x, y) = \sin(2\pi x)\sin(2\pi y).
\]
Tests were performed on a series of regular triangular meshes, produced by taking a uniform quadrilateral mesh and splitting each quadrilateral element diagonally into two triangles. We define the level of mesh refinement by the number of equispaced segments along each side of the domain; the notation \( n \times n \) used in the remainder of this section corresponds to a mesh comprised of \( n \times n \) quads, each split into two triangles. We consider meshes of \( 20 \times 20 = 800 \) elements, \( 40 \times 40 = 3,200 \) elements, and \( 80 \times 80 = 12,800 \) elements. Although we tested this method on structured meshes, the algorithm does not depend on the mesh structure and can operate on unstructured meshes. In order to help ensure that the sensitivity of the timing routines does not influence the results, we averaged the data over three separate runs.
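For reference, the split-quad triangulation used in these tests can be generated in a few lines. The sketch below (our helper, with an illustrative vertex numbering; it produces vertex-based connectivity from which the edge lists of Sect. 4.2 would be derived) builds the triangle list for an $n \times n$ mesh:

```cpp
#include <vector>

// Sketch: triangle list for an n x n split-quad mesh of the unit square.
// Vertices are numbered row-major on the (n+1) x (n+1) grid and every
// quad is split along one diagonal into two triangles.
std::vector<int> buildSplitQuadMesh(int n)
{
    std::vector<int> tris;             // 3 vertex indices per triangle
    tris.reserve(6 * n * n);
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i < n; ++i) {
            int v00 = j * (n + 1) + i; // lower-left vertex of this quad
            int v10 = v00 + 1;         // lower-right
            int v01 = v00 + (n + 1);   // upper-left
            int v11 = v01 + 1;         // upper-right
            // Two triangles sharing the v00 -> v11 diagonal.
            tris.insert(tris.end(), { v00, v10, v11 });
            tris.insert(tris.end(), { v00, v11, v01 });
        }
    }
    return tris;
}
```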
To verify the correctness of our implementations we compared our solution for the Helmholtz equation with the corresponding analytic solution using the \( L^2 \) and \( L^\infty \) error norms. The parameter \( \tau \) for the HDG solver (see Eq. (5d)) was set to 1 for both CPU and GPU implementations. We observe that our implementations produce solutions that match the analytic solution to within numerical precision. Numerical errors produced by the GPU implementation are presented in Table 1.
Next we consider the total run-time comparison between GPU and CPU implementations of the HDG method. Table 2 presents timing results of both implementations across the entire range of test meshes as well as the relative speedup factors. Columns 2, 5 and 8 indicate the time required by the GPU implementation to complete steps A2–B6 of the HDG pipeline, including time to transfer the data to the GPU but excluding the transfer time of the solution vector back to the CPU. Columns 3, 6 and 9 indicate the time taken by the CPU
Table 1 Numerical errors of the GPU implementation against the analytic solution, with estimated orders of convergence

| Order | GPU \( L^\infty \) error | Order of convergence | GPU \( L^2 \) error | Order of convergence |
|-------|---------------------------|----------------------|---------------------|----------------------|
| 1 | 1.59334e−02 | – | 3.95318e−03 | – |
| 2 | 4.95546e−04 | 5.01 | 8.04917e−05 | 5.62 |
| 3 | 1.10739e−05 | 5.48 | 1.3446e−06 | 5.90 |
| 4 | 1.93802e−07 | 5.84 | 1.88309e−08 | 6.16 |
| 5 | 5.71909e−09 | 5.08 | 1.07007e−09 | 4.14 |
| 6 | 1.40495e−08 | −1.30 | 4.63559e−09 | −2.12 |
| 7 | 2.46212e−08 | −0.81 | 5.77189e−09 | −0.32 |
| 8 | 5.19398e−08 | −1.08 | 1.44714e−08 | −1.33 |
| 9 | 1.17087e−07 | −1.17 | 2.92382e−08 | −1.01 |
Table 2 Total run time data for CPU and GPU implementation of Helmholtz problem (time is measured in ms)
| Order | GPU ($20 \times 20$) | CPU ($20 \times 20$) | Speedup | GPU ($40 \times 40$) | CPU ($40 \times 40$) | Speedup | GPU ($80 \times 80$) | CPU ($80 \times 80$) | Speedup |
|-------|------|------|---------|------|------|---------|------|------|---------|
| 1 | 117 | 268 | 2.29 | 231 | 1,427 | 6.19 | 559 | 9,889 | 17.69 |
| 2 | 170 | 483 | 2.84 | 323 | 2,843 | 8.8 | 858 | 24,459 | 28.5 |
| 3 | 264 | 828 | 3.14 | 480 | 5,145 | 10.71 | 1,508 | 54,728 | 36.28 |
| 4 | 383 | 1,414 | 3.69 | 853 | 8,896 | 10.43 | 2,777 | 105,896| 38.13 |
| 5 | 526 | 2,268 | 4.31 | 1,387 | 15,165 | 10.94 | 4,894 | 180,373| 36.85 |
| 6 | 769 | 3,484 | 4.53 | 2,295 | 24,873 | 10.84 | 8,165 | 289,319| 35.44 |
| 7 | 1,136 | 5,251 | 4.62 | 3,550 | 36,869 | 10.39 | 12,879 | 436,217| 33.87 |
| 8 | 1,613 | 7,683 | 4.76 | 5,393 | 54,474 | 10.1 | 20,072 | 630,613| 31.42 |
| 9 | 2,214 | 11,451 | 5.17 | 7,489 | 79,604 | 10.63 | 28,481 | 883,340| 31.02 |
implementation to complete the equivalent steps, with no induced transfer time. The GPU implementation scales well as the mesh size increases, gaining a performance improvement on the order of $30\times$ over a well-optimized serial CPU implementation on the largest mesh. Note that the performance of the GPU implementation could be increased even further by moving the remaining local matrix generation code (step A2 of the pipeline) from the CPU to the GPU. The results indicate that the method demonstrates strong scaling with respect to mesh size.
In order to give the reader better intuition about how the different stages of the GPU solver scale with respect to mesh size and polynomial order, we broke the GPU implementation into four stages which were individually timed. The local matrix generation stage corresponds to steps B1 and B2 of the GPU process plus the transfer of required data from CPU to GPU. The transfer time and processing time in this stage are additive: there is no concurrent processing while data are transferred from the host to the GPU. This represents a worst-case scenario for the timing results, as performance would only improve with concurrent processing during transfers. The global assembly stage represents step B3 of the GPU process. The global solve stage is step B4, and the local solve stage corresponds to steps B5 and B6 (not including the time to transfer the solution back to the host). We note that the GPU implementation reaches its peak memory footprint during global assembly, when the $\mathbb{Z}^e$, $\mathbb{U}^e$, $\mathbb{K}^e$, and $\mathbf{K}$ matrices must all be allocated; this step is the memory bottleneck of the pipeline. Table 3 lists the memory requirements for each mesh size across the range of polynomial orders. The GPU is generally more memory constrained than the CPU, and it will eventually reach a limit determined by mesh size and polynomial order. The $\mathbb{Z}^e$ and $\mathbb{U}^e$ matrices could be deallocated and recalculated in step B6 to lower the memory requirements.
Tables 4, 5 and 6 provide the timing results of the individual stages for the $20 \times 20$, $40 \times 40$, and $80 \times 80$ meshes respectively. As can be seen, for smaller problem sizes (in terms of both polynomial order and element count) the global solve is the dominant cost; as the problem size increases, the balance shifts toward the local matrix generation stage. Figure 5 shows this trend in the distribution of total run-time between stages: the run-time taken by local matrix generation grows quickly with polynomial order, reaching approximately 50% of the total run-time at polynomial order $P = 9$ on the $80 \times 80$ mesh.
Table 3 GPU memory requirements (in kB) for each mesh and polynomial order
| Polynomial order | $20 \times 20$ mesh | $40 \times 40$ mesh | $80 \times 80$ mesh |
|------------------|---------------------|---------------------|---------------------|
| 1 | 685 | 2,727 | 14,869 |
| 2 | 1,887 | 7,517 | 41,818 |
| 3 | 3,968 | 15,821 | 89,211 |
| 4 | 7,160 | 28,560 | 162,624 |
| 5 | 11,693 | 46,656 | 267,633 |
| 6 | 17,797 | 71,031 | 409,813 |
| 7 | 25,703 | 102,605 | 594,740 |
| 8 | 35,640 | 142,301 | 827,989 |
| 9 | 47,840 | 191,040 | 1,115,136 |
Table 4 Timing data for the four major stages of GPU implementation on $20 \times 20$ mesh (time is measured in ms)
| Polynomial order | Local matrix generation—HDG | Global assembly | Global solve | Local solve |
|------------------|-----------------------------|-----------------|--------------|-------------|
| 1 | 7 | 18 | 75 | 2 |
| 2 | 9 | 51 | 106 | 2 |
| 3 | 11 | 47 | 113 | 2 |
| 4 | 14 | 56 | 161 | 2 |
| 5 | 21 | 42 | 162 | 2 |
| 6 | 40 | 95 | 215 | 2 |
| 7 | 60 | 113 | 203 | 2 |
| 8 | 107 | 121 | 253 | 2 |
| 9 | 155 | 132 | 246 | 3 |
Table 5 Timing data for the four major stages of GPU implementation on $40 \times 40$ mesh (time is measured in ms)
| Polynomial order | Local matrix generation—HDG | Global assembly | Global solve | Local solve |
|------------------|-----------------------------|-----------------|--------------|-------------|
| 1 | 11 | 29 | 124 | 3 |
| 2 | 14 | 47 | 128 | 3 |
| 3 | 19 | 61 | 133 | 4 |
| 4 | 28 | 59 | 191 | 4 |
| 5 | 55 | 57 | 195 | 5 |
| 6 | 94 | 192 | 257 | 6 |
| 7 | 140 | 139 | 266 | 7 |
| 8 | 249 | 92 | 346 | 7 |
| 9 | 422 | 137 | 361 | 8 |
We use batched matrix-matrix multiplication operations as the baseline comparison for our method. The FLOPS demonstrated by homogeneous BLAS3 operations serve as an upper bound on the performance of the batched operations carried out in the HDG process. The batched operations in the HDG pipeline are a combination of BLAS1, BLAS2, BLAS3, and matrix inversion operations. BLAS3 operations demonstrate the best performance, in terms
Table 6 Timing data for the four major stages of GPU implementation on $80 \times 80$ mesh (time is measured in ms)
| Polynomial order | Local matrix generation—HDG | Global assembly | Global solve | Local solve |
|------------------|-----------------------------|-----------------|--------------|-------------|
| 1 | 18 | 53 | 210 | 6 |
| 2 | 32 | 88 | 213 | 7 |
| 3 | 44 | 135 | 239 | 8 |
| 4 | 82 | 159 | 303 | 9 |
| 5 | 194 | 146 | 355 | 10 |
| 6 | 347 | 236 | 469 | 10 |
| 7 | 537 | 291 | 551 | 12 |
| 8 | 868 | 322 | 722 | 13 |
| 9 | 1,413 | 405 | 769 | 17 |
Fig. 5 Ratios of the different stages of the GPU implementation with respect to total run time, for the $80 \times 80$ mesh
of FLOPS, due to their higher computational density relative to the other operations. Our method demonstrates a peak performance of 60 GFLOPS, which is $\sim 75\%$ of the peak FLOPS achieved by batched matrix-matrix multiplication using cuBLAS [35], on a GPU with a double-precision peak of 665 GFLOPS. The addition of matrix inversion, BLAS1, and BLAS2 operations lowers the computational performance relative to pure BLAS3 operations.
Figure 6 illustrates the FLOPS and bandwidth of the local matrix generation process and provides a comparison between the rates on the CPU and GPU (with and without the transfer time). Figure 7 provides an estimate of the FLOPS for the global solve stage. The solver performs the conjugate gradient method on the sparse global matrix. From this we estimated the FLOPS based on the size of $K$, the number of non-zero entries in the global matrix, and the number of iterations required to converge to a solution. Our estimate may be slightly higher than the actual FLOPS demonstrated by the solver, due to implementation-specific optimizations. Our FLOPS estimate was derived from the conjugate gradient algorithm, which requires approximately $2N_{nz} + 3N_{rows} + N_{iter}(2N_{nz} + 10N_{rows})$ operations, where $N_{nz}$ is the number of non-zero entries in the sparse global system (approximately $(N^I_\lambda N_T) \cdot 5N^I_\lambda$), $N_{rows}$ is the number of rows (which corresponds to $N^I_\lambda N_T$), and $N_{iter}$ is the number of iterations required to converge to a solution.
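As a concrete restatement of this operation count, the following sketch evaluates the estimate for given trace-space sizes; the variable names are ours, and the $5N^I_\lambda$ fill factor per row is the approximation quoted above:

```cpp
// Sketch of the CG work estimate behind the Fig. 7 FLOPS numbers.
// Nl = interior trace modes per edge, Nt = number of interior trace edges,
// Niter = CG iterations to convergence; all names are ours.
long long cg_flops_estimate(long long Nl, long long Nt, long long Niter) {
    const long long Nrows = Nl * Nt;        // rows of the global system
    const long long Nnz   = Nrows * 5 * Nl; // ~5*Nl non-zeros per row
    return 2 * Nnz + 3 * Nrows + Niter * (2 * Nnz + 10 * Nrows);
}
```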
The efficiency of the HDG method on the GPU is highlighted by the growth rate of the local matrix generation stage: as the polynomial order increases, this step becomes the dominant factor in the run-time. The batch processing technique takes advantage of the independent nature of the local (elemental) operations. The computational density per step increases with mesh size, which makes the GPU operations more efficient. At lower mesh sizes the performance is lower due to the increased relative overhead and the lower computational density.
We note that the global solve stage contributes a non-negligible amount of time to the overall method. The choice of iterative solver influences the time taken by this stage. In our CPU implementation we use a banded Cholesky solver, while the GPU implementation uses an iterative conjugate gradient solver from the CUSP library. This CUDA library uses a multigrid preconditioner and is a state-of-the-art GPU solver for sparse linear systems. There are alternatives to this approach, such as the sparse matrix-vector product technique described by Roca et al. [36]. Their method takes advantage of the sparsity pattern of the global matrix to efficiently perform an iterative solve of the system. We chose our approach based on the fact that the global system solve is not the focus of our method, and instead focus on the parallelization of the elemental operations.
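For reference, the global solve stage (step B4) reduces to a call into Cusp; the following is a minimal sketch against the Cusp 0.3-era API (header paths, class names, and the tolerance and iteration settings here are illustrative assumptions, not our production configuration):

```cpp
#include <cusp/array1d.h>
#include <cusp/csr_matrix.h>
#include <cusp/krylov/cg.h>
#include <cusp/monitor.h>
#include <cusp/precond/smoothed_aggregation.h>

// Solve K * Lambda = rhs on the device with AMG-preconditioned CG.
void global_solve(cusp::csr_matrix<int, double, cusp::device_memory>& K,
                  cusp::array1d<double, cusp::device_memory>& Lambda,
                  cusp::array1d<double, cusp::device_memory>& rhs) {
    // Smoothed-aggregation (algebraic multigrid) preconditioner built from K.
    cusp::precond::smoothed_aggregation<int, double, cusp::device_memory> M(K);
    // Stop after 1,000 iterations or a 1e-10 relative residual.
    cusp::default_monitor<double> monitor(rhs, 1000, 1e-10);
    cusp::krylov::cg(K, Lambda, rhs, monitor, M);
}
```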
We would like to conclude this section with a brief discussion of the efficacy of GPU parallelization when applied to the statically condensed CG method. Static condensation allows the interior modes to be expressed in terms of the boundary modes through the use of the Schur complement (see Karniadakis and Sherwin [20] for more details). The statically condensed CG method can therefore be formulated in a similar fashion to the HDG method, which allows it to be implemented within our GPU pipeline. We expect the CG method to take less time during the local matrix generation stage than in the HDG case. This is due to the simpler formulation of the local $\mathbb{K}^e$ matrices, which, as demonstrated in Kirby et al. [26], can be expressed as
$$\mathbb{K}^e = (\mathbb{D}_1^e)^T (\mathbb{M}^e)^{-1} \mathbb{D}_1^e + (\mathbb{D}_2^e)^T (\mathbb{M}^e)^{-1} \mathbb{D}_2^e - \mathbb{M}^e.$$
This expresses the local elemental matrix as the mass matrix subtracted from the Laplacian matrix, as dictated by the Helmholtz equation.
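A minimal dense sketch of this construction follows, assuming the differentiation matrices $\mathbb{D}_1^e$ and $\mathbb{D}_2^e$, the mass matrix $\mathbb{M}^e$, and its precomputed inverse are available as $n \times n$ row-major arrays (the helper names are ours):

```cpp
#include <cstddef>
#include <vector>

using Mat = std::vector<double>;  // dense n x n matrix, row-major

// C = A * B
static Mat mul(const Mat& A, const Mat& B, std::size_t n) {
    Mat C(n * n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k)
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += A[i * n + k] * B[k * n + j];
    return C;
}

// C = A^T * B
static Mat atb(const Mat& A, const Mat& B, std::size_t n) {
    Mat C(n * n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k)
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += A[k * n + i] * B[k * n + j];
    return C;
}

// K^e = D1^T Minv D1 + D2^T Minv D2 - M, per the formula above.
Mat cg_local_matrix(const Mat& D1, const Mat& D2,
                    const Mat& M, const Mat& Minv, std::size_t n) {
    Mat K = atb(D1, mul(Minv, D1, n), n);
    const Mat T = atb(D2, mul(Minv, D2, n), n);
    for (std::size_t i = 0; i < n * n; ++i) K[i] += T[i] - M[i];
    return K;
}
```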
To gain further insight into this area, we conducted some preliminary tests. We set up the local (elemental) $\mathbb{K}^e$ matrix generation within our pipeline for the CG case. Table 7 provides the timing results of the local $\mathbb{K}^e$ matrix generation for the HDG and CG methods within our framework across the range of test meshes. For the statically condensed CG method it takes 35–65% less time (depending on mesh size and polynomial order) to compute the $\mathbb{K}^e$ matrices than for the HDG method. Our results are only preliminary, as we did not fully implement the statically condensed CG method within our framework. However, our conjecture is that the global assembly step will take longer due to the stronger coupling
Fig. 6 GPU local matrix generation metrics. **a** FLOPS of local matrix generation process. **b** Bandwidth of local matrix generation process.
between elements in the CG method. We also suspect that the global solve step may take longer for CG as indicated in Kirby et al. [26], but it may be influenced by differences in architecture (CPU vs. GPU) as well as the choice of solver.
### 6 Conclusions and Future Work
We have directly compared CPU and GPU implementations of the HDG method for a two-dimensional elliptic scalar problem using regular triangular meshes with polynomial orders in the range $1 \leq P \leq 9$. We have discussed how to efficiently implement the HDG method within the context of the GPU architecture, and we have provided results which show the relative costs and scaling of the stages of the HDG method as polynomial order and mesh size increase.
Our results indicate the efficacy of applying batched operations to the HDG method. We provide an efficient way to map values from the local matrices to the global matrix during the global assembly step through the use of a lock-free edge mapping technique. This technique avoids atomic operations and is key for implementing an efficient HDG method on the GPU.
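A serial sketch of one way to realize such a lock-free, gather-based assembly is shown below. This is our interpretation for illustration: the adjacency structures are hypothetical, the global matrix is dense for clarity (the actual pipeline assembles into a sparse format), and Dirichlet boundary edges are assembled rather than eliminated. Because each global block row is owned by exactly one edge, the outer loop can run in parallel with no write conflicts and hence no atomics:

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct EdgeAdjacency {
    std::array<int, 2> elem;  // adjacent elements (-1 if boundary edge)
    std::array<int, 2> slot;  // this edge's local index (0..2) in each element
};

// Gather-based assembly: edge e owns global block row e and pulls the
// matching rows of each adjacent element's local trace matrix (Nl x Nl).
void assemble(const std::vector<EdgeAdjacency>& edges,
              const std::vector<std::array<int, 3>>& elem_edges,
              const std::vector<std::vector<double>>& Klocal,
              std::size_t Nedge, std::vector<double>& K) {
    const std::size_t Nl = 3 * Nedge;
    const std::size_t ncols = edges.size() * Nedge;
    for (std::size_t e = 0; e < edges.size(); ++e)      // parallel over edges
        for (int s = 0; s < 2; ++s) {
            const int el = edges[e].elem[s];
            if (el < 0) continue;                       // boundary: one writer
            const std::size_t lr = edges[e].slot[s] * Nedge;  // local row block
            for (std::size_t r = 0; r < Nedge; ++r)
                for (std::size_t ke = 0; ke < 3; ++ke)  // element's edge blocks
                    for (std::size_t c = 0; c < Nedge; ++c) {
                        const std::size_t gr = e * Nedge + r;
                        const std::size_t gc = elem_edges[el][ke] * Nedge + c;
                        K[gr * ncols + gc] +=
                            Klocal[el][(lr + r) * Nl + ke * Nedge + c];
                    }
        }
}
```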
The framework we suggest illustrates an effective GPU pipeline which could be adapted to fit methods structurally similar to HDG.
Through our numerical tests we have demonstrated that the HDG method is well suited to large-scale streaming SIMD architectures such as the GPU. We consistently see a speedup of $30\times$ or more for meshes of size $80 \times 80$ and larger. The method demonstrates strong scaling with respect to mesh size: with each increase in mesh size, for a given polynomial order, the number of elements grows by $4\times$, and we see a corresponding increase in compute time of roughly $4\times$. As the mesh size increases, the process becomes more efficient due to increased computational density relative to processing overhead. We have also demonstrated that the HDG method is well suited to batch processing, with low inter-element coupling and highly independent operations.
Let us end by indicating possible extensions to the work presented. One possible extension could be a GPU implementation of the statically condensed CG method. The formulation of the statically condensed CG method is similar to that of the HDG method. The structure of the global $K$ matrix will differ due to increased coupling between elements in the CG case (see Kirby et al. [26] for details). This may present an additional challenge in formulating the global assembly step in an efficient manner on the GPU, because elements are coupled by edges and vertices. We suspect that the performance gains will not be as great as in the HDG case.
Another possible extension could be scaling of the HDG method to multiple GPUs. The local matrix generation and the global assembly step consist of independent operations and would scale well with increased parallelization. The cost of the local matrix generation stage grows at a faster rate than the other stages, and becomes the dominant factor for $P \geq 7$ for moderately sized and larger meshes. The global assembly stage would also see performance gains, since the assembly process is performed on a per-edge basis. Each GPU could be given a unique set of edges to assemble into the global matrix $K$, with some overlapping edges being passed along to avoid cross communication. The global solve stage may prove to be a bottleneck in a multi-GPU implementation since it cannot be easily divided up amongst multiple processing units. However, as we have shown in our results, the computation time for this step does not grow at the same rate as the local matrix generation step.
Acknowledgments We would like to thank Professor B. Cockburn (U. Minnesota) for the helpful discussions on this topic. This work was supported by the Department of Energy (DOE NETL DE-EE0004449) and under NSF OCI-1148291.
References
1. Buck, I.: GPU computing: programming a massively parallel processor. In: Proceedings of the International Symposium on Code Generation and Optimization, CGO ’07, p. 17. IEEE Computer Society, Washington, DC, USA (2007)
2. Bell, N., Yu, Y., Mucha, P.J.: Particle-based simulation of granular materials. In: Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA ’05, pp. 77–86. ACM, New York, NY, USA (2005)
3. Owens, J.D., Houston, M., Luebke, D., Green, S., Stone, J.E., Phillips, J.C.: GPU computing. Proceedings of the IEEE 96(5), 879–899 (2008)
4. Hesthaven, J.S., Warburton, T.: Nodal Discontinuous Galerkin Methods: Algorithms, Analysis and Applications. Springer, New York (2008)
5. Ali, A., Syed, K.S., Ishaq, M., Hassan, A., Luo, H.: A communication-efficient, distributed memory parallel code using discontinuous Galerkin method for compressible flows. In: Emerging Technologies (ICET), 2010 6th International Conference on, pp. 331–336, Oct 2010
6. Eskilsson, C., El-Khamra, Y., Rideout, D., Allen, G., Jim Chen Q., Tyagi, M.: A parallel High-Order Discontinuous Galerkin Shallow Water Model. In: Proceedings of the 9th International Conference on Computational Science: Part I, ICCS ’09, pp. 63–72. Springer-Verlag, Berlin, Heidelberg (2009)
7. Goedel, N., Schomann, S., Warburton, T., Clemens, M.: GPU accelerated Adams-Bashforth multirate discontinuous Galerkin FEM simulation of high-frequency electromagnetic fields. IEEE Trans. Magn. 46(8), 2735–2738 (2010)
8. Goedel, N., Warburton, T., Clemens, M.: GPU accelerated Discontinuous Galerkin FEM for electromagnetic radio frequency problems. In: Antennas and Propagation Society International Symposium, 2009. APSURSI ’09. IEEE, pp. 1–4, June 2009
9. Klöckner, A., Warburton, T., Hesthaven, J.S.: High-Order Discontinuous Galerkin Methods by GPU Metaprogramming. In: GPU Solutions to Multi-scale Problems in Science and Engineering, pp. 353–374. Springer (2013)
10. Cockburn, B., Karniadakis, G.E., Shu, C.-W. (eds.): The Development of Discontinuous Galerkin Methods. In: Discontinuous Galerkin Methods: Theory, Computation and Applications, pp. 135–146. Springer-Verlag, Berlin (2000)
11. Cockburn, B., Gopalakrishnan, J., Lazarov, R.: Unified hybridization of discontinuous Galerkin mixed and continuous Galerkin methods for second order elliptic problems. SIAM J. Numer. Anal. 47, 1319–1365 (2009)
12. Klöckner, A., Warburton, T., Bridge, J., Hesthaven, J.S.: Nodal discontinuous Galerkin methods on graphics processors. J. Comput. Phys. 228, 7863–7882 (2009)
13. Lanteri, S., Perrussel, R.: An implicit hybridized discontinuous Galerkin method for time-domain Maxwell’s equations. Rapport de recherche RR-7578, INRIA, March (2011)
14. NVIDIA Corporation, CUDA Programming Guide 4.2, April 2012
15. AMD Corporation, AMD Accelerated Parallel Processing Math Libraries, Jan 2011
16. ATI, AMD Accelerated Parallel Processing OpenGL Programming Guide, Jan 2011
17. Volkov, V., Demmel, J.W.: Benchmarking GPUs to tune dense linear algebra. In: Proceedings of the 2008 ACM/IEEE conference on Supercomputing, SC ’08, pp. 31:1–31:11. IEEE Press, Piscataway, NJ, USA (2008)
18. Agullo, E., Augonnet, C., Dongarra, J., Ltaief, H., Namyst, R., Thibault, S., Tomov, S.: A Hybridization Methodology for High-Performance Linear Algebra Software for GPUs. In: GPU Computing Gems, Jade Edition 2, 473–484 (2011)
19. Song, F., Tomov, S., Dongarra, J.: Efficient Support for Matrix Computations on Heterogeneous Multi-core and Multi-GPU Architectures. University of Tennessee, Computer Science Technical, Report UT-CS-11-668 (2011)
20. Karniadakis, G.E., Sherwin, S.J.: Spectral/HP Element Methods for CFD, 2nd edn. Oxford University Press, UK (2005)
21. Sherwin, S.J., Karniadakis, G.E.: A triangular spectral element method. Applications to the incompressible Navier–Stokes equations. Comput. Methods Appl. Mech. Eng. 123, 189–229 (1995)
22. Cockburn, B., Dong, B., Guzmán, J.: A superconvergent LDG-Hybridizable Galerkin method for second-order elliptic problems. Math. Comput. 77(264), 1887–1916 (2007)
23. Cockburn, B., Gopalakrishnan, J., Sayas, F.-J.: A projection-based error analysis of HDG methods. Math. Comput. 79, 1351–1367 (2010)
24. Cockburn, B., Guzmán, J., Wang, H.: Superconvergent discontinuous Galerkin methods for second-order elliptic problems. Math. Comput. 78, 1–24 (2009)
25. Arnold, D.N., Brezzi, F., Cockburn, B., Marini, D.: Unified analysis of discontinuous Galerkin methods for elliptic problems. SIAM J. Numer. Anal. 39, 1749–1779 (2002)
26. Kirby, R.M., Sherwin, S.J., Cockburn, B.: To CG or to HDG: a comparative study. J. Sci. Comput. 51(1), 183–212 (2012)
27. Dubiner, M.: Spectral methods on triangles and other domains. J. Sci. Comput. 6, 345–390 (1991)
28. Bell, N., Garland, M.: Efficient Sparse Matrix–Vector Multiplication on CUDA. NVIDIA Technical Report NVR-2008-004, NVIDIA Corporation, Dec 2008
29. Vos, P.E.J.: From h to p efficiently: optimising the implementation of spectral/hp element methods. PhD thesis, University of London (2011)
30. Göddeke, D., Strzodka, R., Mohd-Yusof, J., McCormick, P.S., Wobker, H., Becker, C., Turek, S.: Using GPUs to improve multigrid solver performance on a cluster. Int. J. Comput. Sci. Eng. 4(1), 36–55 (2008)
31. Göddeke, D., Wobker, H., Strzodka, R., Mohd-Yusof, J., McCormick, P.S., Turek, S.: Co-processor acceleration of an unmodified parallel solid mechanics code with FEAST-GPU. Int. J. Comput. Sci. Eng. 4(4), 254–269 (2009)
32. Kirby, R.M., Sherwin, S.J.: Nektar++ finite element library. http://www.nektar.info/
33. Bell, N., Garland, M.: Cusp: Generic Parallel Algorithms for Sparse Matrix and Graph Computations (2012). Version 0.3.0
34. Hoberock, J., Bell, N.: Thrust: A Parallel Template Library (2010). Version 1.7.0
35. Ha, L.K., King, J., Fu, Z., Kirby, R.M.: A High-Performance Multi-Element Processing Framework on GPUs. SCI Technical Report UUSCI-2013-005, SCI Institute, University of Utah (2013)
36. Roca, X., Nguyen N.C., Peraire, J.: GPU-accelerated sparse matrix-vector product for a hybridizable discontinuous Galerkin method. Aerospace Sciences Meetings. American Institute of Aeronautics and Astronautics, Jan 2011. doi:10.2514/6.2011-687
Terms for your credit card
Credit Card Agreement regulated by the Consumer Credit Act 1974
In this credit card agreement:
“Tandem”, “us” or “we” means: Tandem Bank Limited, 40 Bernard Street, London, WC1N 1LE, UK
“You” or “your” means: the holder of the Tandem Credit Card
If you have any questions on any part of these terms please call us on 020 3370 0970 or contact us via our in-app chat.
1. Card basics
Your credit limit
Your credit limit will be determined by us from time to time under this agreement and we’ll notify you of it. We’ll tell you what it is when we open your account. We may increase or reduce the credit limit from time to time. If we do, we will notify you of the new limit and give you at least 30 days’ notice of any increase. You may notify us that you do not want us to increase your credit limit or ask us to change your credit limit by calling us on 020 3370 0970.
Contactless payments
You can use your card for contactless payments up to £30 each. If you use contactless payments for purchases in foreign currency the limit of £30 still applies. This limit is set by Mastercard and may change.
2. How much it costs to use your card
Account Fee
This card has a monthly Account Fee of £5.99.
Annual Percentage Rate (APR) and total amount payable
The APR is 6.37% (variable).
The interest rate when you buy something, and when you withdraw cash, is 0.00% per year (variable).
If you were to:
- Buy something for £1,200.00 with an interest rate of 0.00% immediately when your account is opened;
- Pay for it over 12 months with equal payments; and
- Pay the 12 monthly Account Fees of £5.99 you build up alongside;
... then the total amount you would pay back is £1,271.88 (the £1,200.00 purchase plus 12 × £5.99 = £71.88 in Account Fees).
This assumes that the monthly Account Fee doesn’t change.
**Interest rates and other charges**

| How you use your card | Annual interest rate (variable) | Charge |
|-----------------------|---------------------------------|--------|
| **Card purchases** | 0.00% per year | n/a |
| **Cash advances in the UK** | 0.00% per year | 2.50%, but we'll charge a minimum of £2.50 each time |
| **Transactions in a foreign currency** | This will be the current standard rate or any promotional rate that we're offering on this kind of transaction at the time you make it. | n/a |
**Card purchases**
This means where you use your credit card to make a purchase; for example, in a shop, restaurant or online.
**Cash advances**
This means any cash transaction where you use your Card or Card number, including but not limited to:
- The purchase of traveller’s cheques or foreign currency;
- Cash from a cash machine or obtained over the counter at a bank or cash provider;
- Any payment made using a money order, electronic money transfer or similar;
- Any use made for gambling including internet gambling and purchase of lottery tickets;
- Any facilities we determine to be similar to the above that we may provide in connection with the use of the Account.
You can use cash advances for up to 30% of your credit limit, and not more than £500 per day. However, please remember that taking out cash using a credit card could be more expensive than taking it out of a current account.
**Transactions in a foreign currency**
This means card purchases or cash advances that you make in any currency other than sterling.
**How we charge the Account Fee**
The Account Fee is payable monthly in arrears and will be charged on your statement.
This Account Fee will show up as ‘Membership Fee’ on your statement and gives you access to the Tandem Membership Plan.
The 0% interest rate is tied to the Account Fee.
If you terminate your credit card account under clause 14 of these Terms for your Credit Card, we reserve the right to remove the other benefits that are offered with the Tandem Membership, including any loyalty rates you receive on our products. If we do this, we’ll change your rate to our standard rate, or move you onto the closest equivalent product.
If you end your credit card account under clause 14 of these Terms for your Credit Card, you will not be able to spend on your card and you’ll be required to pay off the remaining balance before the account can be closed.
Our fees if you pay late or don’t stick to our agreement
If you miss a payment, or if there’s a problem with collecting your payment, we may charge one or more of the fees below.
These fees are based on our operational costs when these problems happen.
We’ll ask you to pay these fees on your statement. You’ll see them as part of the outstanding balance on your statement.
If we need to apply a fee, we’ll include a notice in your next statement confirming this.
| Type of fee | Amount of fee |
|-------------|---------------|
| **Late payment**. We may charge this if your minimum payment doesn't reach us and clear by the payment date given on your statement, or if you've paid less than the minimum payment amount we've asked for. | £12 |
| **Returned Direct Debit**. We may charge this if your Direct Debit doesn't go through when there's not enough money in your account. | £12 |
How we charge interest and fees
The following applies if interest is charged.
If you pay your balance in full by the payment date we won’t charge interest on purchases you made with your card.
If you don’t pay your balance in full by the payment date, we’ll charge interest on the outstanding balance including any new purchases you make until your outstanding balance is paid in full.
We’ll add the interest to your account on your statement date each month. The interest will form part of your outstanding balance.
We will charge interest on your interest if not paid in full – this is known as compound interest.
If you use your card for cash advances we’ll charge interest:
• From when the cash advance is added to your account until repaid in full;
• Between your statement date and your payment date; and
• On your interest if not paid in full – this is known as compound interest.
Interest accrued between your statement date and your payment date to clear the cash balance will be added to the balance of your next statement.
If you pay by Direct Debit you may receive trailing interest until you make a manual payment to clear down the cash balance.
The interest we charge on fees and unpaid interest
The following applies if interest is charged.
We charge interest on fees for cash advances in the UK at the standard rate for cash advances. We charge interest on any unpaid interest at the rate that applied to that type of transaction.
For example, if the original interest was on a purchase, we’ll charge interest on the interest at our standard rate for purchases.
If we’ve charged a fee for paying late or not sticking to this agreement, we’ll start charging interest on this fee from the 30th day after the fee was charged until you’ve paid off the fee in full. We won’t charge interest on top of this interest.
Promotional rates
If we offer you a promotional rate, we’ll let you know the details of it at the time. If you use the promotion, we’ll show this on your statement.
3. Using your card in a foreign currency
If you use your card or get a refund in a foreign currency, this is what happens:
- Mastercard converts all transactions and refunds from a foreign currency to pounds sterling at its exchange rate on the day the transaction is settled.
- You can find their exchange rates at mastercard.com/global/currencyconversion.
- We use Mastercard’s currency exchange rate with no additional mark up.
- Exchange rates change daily, so the rate Mastercard uses may be different from the rate on the day you make your transaction, because it might not be settled on the same day.
- The exchange rate used will appear on your statement.
4. Paying your statement
**Frequency**
We’ll provide you with a statement for each month you used your card or have an outstanding balance. On it you’ll see your total balance owed and the transactions of that statement month including any interest and fees. It’ll also show you the minimum payment required within 25 days of the statement date.
**Your minimum payment**
You must pay at least the minimum payment each month.
The minimum payment must reach your account by the payment date shown on your statement.
If you receive a refund after your statement date, but before your payment date, you must still make the minimum payment.
Your minimum payment is the higher of:
- The interest and any fees, plus 1% of the remaining balance;
- £5 (or the whole balance if it is less than £5).
You can pay us more than the minimum payment if you want to and you can repay everything you owe us under this agreement at any time.
**How long it takes for payments to reach us**
It can take up to seven working days for the money to reach us, depending on how you pay.
**Cancelling your payments**
If you pay with a Direct Debit and want to cancel future payments, you must let us know no later than ten working days before the payment is due.
**Other amounts we can ask for**
If there’s an amount that’s overdue, or if you’ve gone over your credit limit, we can ask you to pay this at any time.
**The currency you can use to pay**
All payments you make to us must be in pounds sterling.
You should not make payments that place the Account in credit. If you do, we may restrict the use of the Card and the Account to the amount of your Credit Limit and we can return any credit balance to you at our discretion.
5. How we allocate your payments
We’ll always use your payments to pay off:
- Balances with higher interest rates before those with lower rates.
- Existing balances before new transactions that haven’t yet appeared on your statement.
- Balances that attract interest charges at the time of your payment before those that do not attract interest charges at that time.
6. Your right to cancel
How you exercise your right to cancel
You may cancel this agreement without giving a reason within 14 days beginning the day after the day we confirm your credit card has been set up.
If you cancel this agreement you must pay the balance on your account and any interest within 30 days from the date of cancellation. If you do not do this, we may recover it as a debt through the courts.
You can exercise your right to cancel by contacting us on 020 3370 0970.
7. Additional cardholders
If offered by Tandem, you can ask for additional cards and PINs for your family
There are a few things to note:
- These cards and PINs are part of your account and you must make sure additional cardholders keep to this agreement.
- The additional cardholders will need to meet our rules for who can apply for a card; for example, they need to be over 18 and live at the same address as you.
- You must pay for any transactions they make on the card, even if they breach this agreement.
- We may give information about your account to any additional cardholder in relation to their own transactions.
- Your additional cardholder may not be able to access the same account features as you. For example, they won’t be able to verify some online transactions that may need a passcode sent to your device or mobile phone number.
8. Disputes and refunds
When we will give you a refund
We’ll give you a refund for a transaction and any interest and associated fees charged if:
- We receive refund details from the supplier you bought goods or services from;
- You, or an additional cardholder, didn’t authorise the transaction;
- We’re able to claim a refund for you through the card scheme.
Disputing a transaction
If you dispute a transaction and we find that you didn’t authorise it, your account will be credited for that transaction amount.
To allow us to carry out an investigation, you'll be asked to provide us with information within a given timeframe. We'll tell you what to provide at the time.
If we refund the transaction but find that you authorised it, we reserve the right to re-debit your account.
If you dispute a pre-authorised transaction, you must tell us as soon as possible.
Claiming against the supplier or us
If you used your card to buy goods or services that weren’t fully supplied or were unsatisfactory, you may have a claim against the supplier and us.
This applies to individual items that cost between £100 and £30,000.
9. Keeping you and your card safe
Keeping secure
You must not let anyone else use your card, nor share your PIN or other security information (such as security codes) with anyone. If you do so, we won’t be legally responsible for any losses you suffer.
**Lost or stolen cards or security information**

If your card or card details are lost or stolen, or you think someone knows your security information or your account is compromised in any way, you should tell us immediately on 020 3370 0970. You'll need to tell us all the information you have about the loss, theft or misuse of your card or security information. We may ask you to report the matter to the police, or we may give information to the police about it.

**If your card is misused**

You won't be liable for any transactions not made by you or any additional cardholder, unless:

- Someone is using the card or card details with your, or an additional cardholder's, permission or with security information made available by you;
- You, or an additional cardholder, fail to keep your security details secure.

If you find a card that you have previously reported lost or stolen, do not try to use it. Please destroy it securely.

**Restrictions on using your card**

You, and any additional cardholder, must not use your card or card details:

- For an illegal purpose;
- After the expiry date shown on the card.

**Preventing fraud or misuse**

Occasionally we may prevent or limit the use of your card or card details, or refuse to issue a replacement card. We may do this when:

- A card is lost or stolen, or we suspect unauthorised or illegal use;
- We reasonably consider this is necessary to ensure the security of your account;
- We have good reason to think you may not be able to repay us;
- The transaction stands out to us as unusual compared to your normal spending habits;
- The transaction exceeds your credit limit;
- We reasonably believe the transaction would damage our reputation or breach a legal or regulatory requirement;
- We reasonably believe the transaction is in breach or a misuse of this agreement;
- We reasonably believe you no longer live at the UK address we have on record for you.

We'll make checks to try to prevent fraud or misuse. If we decide not to carry out a transaction, the supplier or we will tell you. If possible, we'll explain why we made the decision and let you know what has happened by the communication medium that we reasonably think is appropriate.

**If you can't use your card**

We can't guarantee that you'll always be able to use your card or card details. We aren't legally responsible for any loss if a card can't be used due to circumstances that we can't control or because we prevent or limit the use of your card or card details for any of the reasons shown above.
10. Missed payments
**If you miss a payment** If you fail to make the minimum payment in full on its due date, you'll have to pay the missed payment fee of £12.00 each time this happens.
We will record the details with a credit reference agency which may negatively impact your credit score and may make it more difficult or more expensive for you to borrow in future.
We may use funds in other accounts you have with us to reduce or repay the amount of the missed payment or any other outstanding balances. This is called a right of set-off.
If we’re unable to resolve the matter, we may give you a default notice explaining the situation and giving you at least 14 days to try and correct it. If you don’t make the payment within the time we give you, we can ask you to pay back the whole balance, all interest, fees and other sums payable under this agreement immediately, or we might sell your debt, and the buyer may follow similar processes.
We could take legal action against you to secure repayment and the court could order you to pay the debt directly from your wages and you might have to pay our legal costs as well. Alternatively, the debt and our costs may be secured against any property that you own.
If you have difficulties making payments, please contact us on 020 3370 0970.
11. If you give us incorrect information
**What we will do if you give us incorrect information** If any information you've given us (either when you applied for the credit card or during the duration of this agreement) proves to be inaccurate or incomplete, we may give you a default notice. This will explain the default and give you at least 14 days to try and correct it. If the problem continues, we can ask you to pay back any outstanding balance, all interest, fees and other sums payable under this agreement immediately.
12. Transferring this agreement
**What happens if we transfer this agreement** We can transfer any of our rights and duties under this agreement to another company or person. We'll only do this if we reasonably believe they will treat you the same way we do. Before we do this, we may give them and their advisers personal information about you to help them prepare for a possible transfer.
We may also allow them to use your personal information after the transfer in the same way that we can. By personal information, we mean any personal details you and others have given us, and what we learn about you from running your accounts.
When we refer to we, us, or Tandem in this agreement, this will also mean anyone we transfer our rights or duties to. We may also arrange for any other person to carry out our rights or duties under this agreement. This will not affect your rights under this agreement or your legal rights.
You may not transfer the benefit of this agreement to anyone else.
13. Changes to this agreement
**Letting you know about changes** We'll give you not less than two months' notice of any changes before they take effect. However, if a change benefits you, we may make it immediately.
If you are unhappy with any change, you can close your account as set out in clause 14. If you do not do so, you will be deemed to have accepted the changes.
Why might we change this agreement
We may change the rates and fees, introduce new rates and fees or update this agreement at any time because of:
- The cost of providing the card and our services;
- Changes to laws, regulations, regulatory guidance, banking practices or other external factors that it is reasonable for us to take into account;
- The need to operate our business profitably and soundly;
- The need to reflect changes in technology or the functionality of your account;
- The need to introduce new facilities and services;
- Our assessment of your credit risk in the future.
We may also change this agreement to correct any mistakes or to make this agreement fairer or clearer.
We may make other changes as long as it is reasonable for us to do so and we explain the reason to you.
14. Termination
Why we might end this agreement
If we need to end this agreement we’ll give you 30 days’ notice. However, as long as we comply with our legal requirements, we may end this agreement immediately. We might do this if:
- You don’t keep to this agreement;
- We believe, as a responsible lender, it is necessary to end this agreement;
- A bankruptcy order is made against you;
- You apply for a debt relief order or make a voluntary arrangement with your creditors;
- You move abroad;
- You die.
When this agreement ends
We won’t close the account unless we receive payment in full. You must repay the following:
- All outstanding amounts;
- Any amounts that become due;
- Interest to the date of payment;
- Any fees.
If you want to end this agreement
This agreement has no fixed term. You can end it at any time by calling us. You don’t need to give any reason for ending it.
Enforcement
If you don’t keep to this agreement but we decide not to take action at the time, it doesn’t stop us from taking action in the future.
15. Statements and other things
Your statements
We’ll provide statements showing movements (if any) on your account, and interest and fees due each month.
You can also request a free copy of your monthly statement with this information at any time by calling us.
You are responsible for checking your statement. You must tell us immediately if you:
- Don’t receive a statement;
- Think something is wrong on your statement.
**If your details change** If any of the following details change, to avoid any problems you must tell us as soon as possible:
- Your name;
- Your home address;
- Your email address;
- Your bank account details used for a Direct Debit Instruction;
- Your mobile phone registered with us. You will only be able to register a UK mobile phone number.
**Notices** We can send notices or communications to you by post, email, text or any other electronic communication that we reasonably think is appropriate, using the latest contact details and preferences you've given us. We will always communicate with you in English.
**Confidentiality and data protection** Your privacy is important to us and the information you give us online and offline is treated confidentially, in line with data protection laws.
We use your information to provide our services to you and, where necessary to help us improve our product service delivery, we may share your information with parties outside of Tandem. For further details on how we obtain and use your information and who we share it with please read the Privacy Policy. The Privacy Policy can be found on our website. We’ll provide you with a copy of the Privacy Policy when you open your account.
16. If things go wrong
**If you have a complaint** Call us on 020 3370 0970 if something's concerning you or to make a complaint, and we'll try to work it out with you.
If you have a complaint and aren’t satisfied with how we deal with it or it’s been over 8 weeks since you raised it, you can refer your complaint to the Financial Ombudsman Service. You can contact the Financial Ombudsman Service and find out more about their service:
- By post: The Financial Ombudsman Service, Exchange Tower, London, E14 9SR
- By phone: 0800 023 4567
- By email: email@example.com
- Online: www.financial-ombudsman.org.uk
**Our supervisory authority** We are authorised by the Prudential Regulation Authority and regulated by the Financial Conduct Authority and the Prudential Regulation Authority. Our Financial Services Register number is 204479. You can confirm our registration on the FCA's website (www.fca.org.uk).
**Governing law** These terms are supplied in English and we will communicate with you in English. These terms will be governed and construed in accordance with the laws of England and Wales.
Fractures and Traumatic Brain Injuries: Abuse Versus Accidents in a US Database of Hospitalized Children
John M. Leventhal, Kimberly D. Martin and Andrea G. Asnes
*Pediatrics* 2010;126:e104–e115; originally published online June 7, 2010
DOI: 10.1542/peds.2009-1076
The online version of this article, along with updated information and services, is located on the World Wide Web at:
http://www.pediatrics.org/cgi/content/full/126/1/e104
ABSTRACT
OBJECTIVE: The goal was to use a national database to determine the incidence of abusive traumatic brain injuries (TBIs) and/or fractures and the frequency of abuse versus accidents among children <36 months of age.
METHODS: We used the 2006 Kids’ Inpatient Database and classified cases into 3 types of injuries, that is, (1) TBI only, (2) TBI and fracture, or (3) fracture only. Groups 2 and 3 were divided into 3 patterns, that is, (1) skull fractures, (2) skull and nonskull fractures, or (3) nonskull fractures. For each type and pattern, we compared abuse, accidental falls, other accidents, and motor vehicle accidents.
RESULTS: The incidence of TBIs and/or fractures attributable to abuse was 21.9 cases per 100 000 children <36 months of age and 50.0 cases per 100 000 children <12 months of age. In the abuse group, 29.9% of children had TBIs only, 28.3% had TBIs and fractures, and 41.8% had fractures only. Abused children were younger and were more likely to be enrolled in Medicaid. For TBI only, falls were more common than abuse in the first 2 months of life, but abuse was more common from 2 to 7 months. For TBI and skull fracture, falls were more common during the first year of life. For skull fracture only, almost all injuries were attributable to falls.
CONCLUSIONS: There was overlap in TBIs and fractures attributable to abuse. Among children <12 months of age, TBIs and/or fractures attributable to abuse occurred in 1 of every 2000 children. Falls occurred more commonly than abuse, even among very young children. Pediatrics 2010;126:e104–e115
Traumatic brain injuries (TBIs) and fractures are the most common, serious injuries attributable to child abuse among young children. Previous studies focused on one or the other of these types of injuries, as if they were separate types, but clinical overlap occurs often. For example, in one series of 39 patients with subdural hematomas attributable to abuse, either rib or long-bone fractures were identified for 51% of patients.
Most studies of TBIs or fractures attributable to abuse focused on the clinical characteristics of the children or the characteristics that distinguished accidental from abusive injuries. A few studies used case surveillance to examine the incidence of either abusive TBIs or abusive fractures, and we used national US databases to estimate the incidence of each of these types of injuries. In addition, epidemiological studies used hospital data from California to examine injuries, including accidental falls and abuse. Those studies, however, examined all types of injuries in hospitalized children.
No previous study used a sufficiently large database to examine both TBIs and fractures and to compare the characteristics of injuries attributable to abuse versus accidents during the first 3 years of life. This type of epidemiological information would be helpful to clinicians when they evaluate such injuries in young children. For example, knowledge of the age distribution of such injuries could provide information about the likelihood of these occurrences. Therefore, in this study we used a large US database of hospitalizations in 2006 to examine data for young children with TBIs and/or fractures. We compared the demographic characteristics of children with injuries attributable to abuse versus accidents and examined the age distributions of abuse and accidents among children with 3 types of injuries, that is, (1) TBI only, (2) TBI and ≥1 fracture, and (3) fractures only. In addition, we examined clinically relevant patterns of injuries. Among children with TBIs and fractures, we examined 3 patterns, that is, (1) TBI and skull fracture, (2) TBI and skull and nonskull fractures, and (3) TBI and nonskull fractures; among children with fractures only, we examined the same 3 patterns, that is, (1) skull fracture, (2) skull and nonskull fractures, and (3) nonskull fractures.
METHODS
We used the 2006 Kids’ Inpatient Database (KID), which includes an 80% sample of all acute-care hospitalizations from 3739 hospitals in 38 states; these states include >88% of the US population. Every 3 years since 1997, a KID data set has been made available by the Healthcare Cost and Utilization Project, sponsored by the Agency for Healthcare Research and Quality. The database contains information about demographic features, payment, and hospitals, 15 fields for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM), diagnosis codes, and 4 fields for external cause-of-injury codes (E-codes), which provide information about the causes of injuries.
We confined our analyses to children who were <36 months of age, because most serious injuries attributable to abuse occur among young children. Analyses were conducted by using 3 age demarcations, that is, <12 months, 12 to 23 months, and 24 to 35 months of age. TBI was defined by using ICD-9-CM codes for brain injury; children were not included in this group if the only injury involving the head was a skull fracture. This definition of TBI is similar to that used by Keenan et al in the only prospective, case-surveillance study of inflicted TBIs in the United States and is identical to that used by Ellingson et al in their KID study of the incidence of inflicted TBIs. We used 5 major groups of ICD-9-CM codes, that is, (1) skull fracture and intracranial injury (codes 800.1–800.4, 800.6–800.9, 801.1–801.4, 801.6–801.9, 803.1–803.4, 803.6–803.9, 804.1–804.4, and 804.6–804.9), (2) concussion (codes 850.0–850.9), (3) cerebral laceration and contusion (codes 851.0–851.9), (4) subdural hemorrhage after injury (codes 852.2 and 852.3), and (5) other intracranial injury (codes 852.0, 852.1, 852.4, 852.5, 853.0–853.1, and 854.0–854.1). Fractures were defined by using the ICD-9-CM codes for fractures (codes 800–829).
Child abuse was defined by using an ICD-9-CM code for abuse (code 995.5) or an E-code for assault (codes E960–E969). Cases whose only abuse definitions were for emotional or psychological abuse (code 995.51), nutritional neglect (code 995.52), or sexual abuse (code 995.53) were not counted as abuse. Accidental injuries included 3 main groups, that is, (1) accidental falls (codes E880–E888), (2) other accidents (eg, struck by a falling object or against an object or person) (codes E916–E928), and (3) motor vehicle accidents (MVAs) and related accidents (codes E810–E829). The remaining cases with injuries were classified in the following groups: (1) cases with no E-codes, (2) cases with a fracture with unspecified cause (code E887), (3) cases in which it was not determined whether the injury was accidental or intentional (codes E980–E989), and (4) cases in which the coding indicated a birth injury (code 767) or an underlying medical condition, such as osteogenesis imperfecta (code 756.51) or rickets (code 268.0).
We grouped cases into 3 types of injuries, that is, (1) TBI only, (2) TBI with fracture, or (3) fracture only. Within
each type of injury, we used the weightings provided with the KID to calculate the proportions of children in the United States with abuse, accidental falls, other accidents, and MVAs. In addition, within each type of injury, we compared the demographic characteristics within the 4 different causes of injury. Weighted proportions were compared by using $\chi^2$ tests, and means were compared by using analysis of variance. Weighting took into account 6 characteristics of the hospitals, namely, ownership (control), bed size, teaching status, type of hospital (eg, freestanding children’s hospital), rural/urban location, and region of the country.
For calculation of the incidence of TBIs and/or fractures attributable to abuse, the numerator was the weighted number of children with the specific type of injury attributable to abuse; the weighting corrected the numerator for the sampling methods of the KID, so that the numerator could provide a national estimate [11]. The denominator was based on estimates of the national population of the specific age for 2006. Census data were obtained from the 2006 intercensal estimates [13]. Confidence intervals (95% CIs) were calculated by using the Taylor series method in SAS 9.1.3 (SAS Institute, Cary, NC).
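In symbols (our notation, simply restating the calculation just described), the incidence for a given age group and injury type is

$$\text{incidence per } 100\,000 = \frac{\hat{n}_{\text{weighted abuse cases}}}{N_{\text{census population}}} \times 100\,000.$$

For example, the reported rate of 50.0 cases per 100 000 children <12 months of age corresponds to roughly 1 affected child in every 2000.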
For each type of injury, we examined the weighted frequency of accidental falls, other accidents, and abuse at 1-month intervals during the first 36 months. For these analyses, we excluded MVAs because such injuries are unlikely to be confused with abuse. Finally, we examined the frequency of accidents and abuse among children with specific patterns of injuries. For children with TBI and fractures, we examined (1) TBI with skull fracture, (2) TBI with skull and nonskull fractures, and (3) TBI with nonskull fractures. For children with fractures, we examined the same 3 patterns, (1) skull fracture only, (2) skull and nonskull fractures, and (3) nonskull fractures only.
In some cases, data on the age in months were not available but data on the age in years were provided. These cases were included in the analyses with stratification according to age groups but not in the calculations of mean ages or in the figures showing frequencies for each month of life. This study was considered exempt from review by the institutional review board of Yale University School of Medicine.
**RESULTS**
**TBI and/or Fractures**
In the weighted sample, there were 18 822 children who were <36 months of age and had TBIs and/or fractures. Of those children, 3512 (18.7%) had TBIs, 3630 (19.3%) had TBIs and fractures, and 11 680 (62.1%) had fractures only. The overall incidence of TBI and/or fracture was 152.5 cases per 100 000 children <36 months of age (95% CI: 137.4–167.6 cases per 100 000 children). The incidence was highest among children <12 months of age (201.8 cases per 100 000 children [95% CI: 181.2–222.4 cases per 100 000 children]), compared with those 12 to 23 months of age (83.2 cases per 100 000 children [95% CI: 74.4–91.9 cases per 100 000 children]) and those 24 to 35 months of age (172.3 cases per 100 000 children [95% CI: 146.4–198.2 cases per 100 000 children]).
Table 1 shows the proportions of all children with the 3 types of injuries in each age group. There was a statistically significant association ($P < .0001$) between age and type of injury. TBI alone was noted for approximately one-fifth of the children in the 2 younger age groups (0–11 months, 21.5%; 12–23 months, 21.3%). Children with TBI and fractures represented 27.5% of children 0 to 11 months of age, and this value decreased to 11.8% in the oldest group. The proportion of children with fractures increased from 51% among children <12 months of age to 74.2% in the oldest age group.
**TBI and/or Fractures Attributable to Abuse**
The incidence of TBI and/or fractures attributable to abuse for children <36 months of age was 21.9 cases per 100 000 children (Table 2). The incidence was highest in the youngest age group (50.0 cases per 100 000 children [95% CI: 42.6–57.4 cases per 100 000 children]) and was substantially lower in the 2 older age groups (12–23 months, 9.2 cases per 100 000 children [95% CI: 7.4–11.1 cases per 100 000 children]; 24–35 months, 6.3 cases per 100 000 children [95% CI: 4.4–8.2 cases per 100 000 children]).
Table 3 shows that there was a statistically significant association ($P < .0001$) between the causes of injuries and the 3 types of injuries. Children with abuse represented 14.4% ($n = 2703$) of the overall sample and even larger proportions of children with TBI only (23.0% [$n = 807$]) and children
---
**TABLE 1** Proportions of Children With Each Type of Injury in Each Age Group
| Type of Injury | 0–11 mo ($N = 8335$) | 12–23 mo ($N = 3417$) | 24–35 mo ($N = 7070$) | Total ($N = 18\,822$)* |
|----------------------|---------------------|-----------------------|-----------------------|------------------------|
| TBI | 21.5 | 21.3 | 14.1 | 18.7 |
| TBI and fracture | 27.5 | 14.7 | 11.8 | 19.3 |
| Fracture | 51.0 | 64.0 | 74.2 | 62.1 |
| Total | 100.0 | 100.0 | 100.1 | 100.1 |
*Includes all children <36 months of age with TBIs and/or fractures.
with TBI and fractures (21.1% [n = 766]). In contrast, 9.7% (n = 1130) of children with fractures only had been abused. Of all abused children in the sample, 29.9% had TBI only, 41.8% had fractures only, and 28.3% had both types of injuries.
Table 4 shows the proportion of abuse cases within each age group for each type of injury. Overall, 24.8% of children 0 to 11 months of age with TBI and/or fractures had been abused, and this value decreased to 11.1% and 3.7% for the 12- to 23-month and 24- to 35-month age groups, respectively (P < .0001). Within each age group, there was a statistically significant association between the proportion of abuse and the type of injury (P < .0001). In each age group, the proportion of children with abuse was highest among children with TBI only and lowest among children with fractures only. For example, among children 0 to 11 months of age, 32.6% of those with TBI only, 26.8% of those with TBI and fractures, and 20.4% of those with fractures only were abused.
The overall sample had a mean age of 14.0 months, and 59.3% of subjects were male. The racial/ethnic composition of the sample was 35.1% white, 11.6% black, 20.7% Hispanic, 7.6% other, and 25.0% unknown; 58.1% of subjects were enrolled in Medicaid or had no insurance, 36.9% had private insurance or were in a health maintenance organization, and 4.8% had other types of insurance. There were statistically significant differences for all 4 of these demographic variables when the data were analyzed according to the specific cause of injury (abuse, accidental fall, other accident, or MVA) within each type of injury (data not shown). These differences were most marked for the child’s mean age and health insurance (Table 5). For all 3 types of injuries, children with abuse, compared with other causes, had the youngest mean age. For example, among children with TBI only, those with abuse had a mean age of 8.1 months, compared with 13.8 months for accidental falls, 13.6 months for other accidents, and 17.2 months for MVAs. There also were marked differences in the children’s medical insurance according to the cause of injury. The proportion of children enrolled in Medicaid or without insurance was largest among abused children for all 3 types of injuries (74.5%–80.7%).
**Patterns of Injuries**
Figures 1 to 3 show the weighted numbers of cases of abuse, accidental falls, and other accidents at each month of age for each type of injury. The proportions of cases in which data on the age in months were available are noted. As shown in Fig 1, TBIs attributable to accidental falls occurred more commonly than did TBIs resulting from abuse or other accidents among children <2 months of age. In contrast, from 2 months to 7 months, abuse was the most common cause of TBIs.
Figure 2A shows the frequencies of cases of TBIs and fractures; accidental falls and abuse were very common during the first 5 months of life. When specific clinical patterns were examined, accidental falls resulting in TBIs and skull fractures were more common than abuse from 0 to 35 months, and this was especially striking among children <12 months of age (Fig 2B). In contrast, for children with TBIs and nonskull fractures (Fig 2C) or with TBIs and skull and nonskull fractures (Fig 2D), almost all of the injuries during the first 12 months of life were attributable to abuse.
For children with fractures, numbers of both accidental falls and abusive injuries peaked during the first months of life, and accidental falls were more common after 4 months of age (Fig 3A). When the clinical patterns were examined, almost all cases of skull fractures only were attributable to accidental falls, particularly during the first 12 months of life (Fig 3B). In contrast, for children <6 months of age with nonskull fractures, the most common cause was abuse; as children became more mobile, the frequency of accidental falls increased (Fig 3C). There were fewer children with skull and nonskull fractures but, during the first 12 months of life, most of these cases were attributable to abuse (Fig 3D).
**DISCUSSION**
By using a national database of data on hospitalized children, we found a relatively high incidence of cases of TBI and/or fracture attributable to abuse, substantial differences in the age and
---
**TABLE 5** Age and Insurance Status of Children With TBI Only, TBI and Fracture, or Fracture Only
| | Abuse | Accidental Fall | Other Accident | MVA | P |
|--------------------------|---------|-----------------|----------------|--------|-------|
| **TBI only (N = 3068)** | | | | | |
| n | 807 | 1561 | 276 | 425 | |
| Age, mean ± SD, mo | 8.1 ± 0.40 | 13.8 ± 0.41 | 13.6 ± 1.01 | 17.2 ± 0.98 | <.0001 |
| Insurance, % | | | | | |
| Medicaid/self-pay | 74.5 | 50.7 | 54.7 | 53.8 | <.0001|
| Private/HMO | 19.0 | 44.3 | 40.0 | 41.0 | |
| Other | 6.5 | 5.0 | 5.4 | 5.3 | |
| **TBI and fracture (N = 3165)** | | | | | |
| n | 766 | 1579 | 260 | 560 | |
| Age, mean ± SD, mo | 6.3 ± 0.41 | 10.1 ± 0.40 | 13.6 ± 1.27 | 16.9 ± 0.65 | <.0001|
| Insurance, % | | | | | |
| Medicaid/self-pay | 77.0 | 49.8 | 59.7 | 58.3 | <.0001|
| Private/HMO | 17.2 | 46.1 | 32.5 | 37.0 | |
| Other | 5.8 | 4.1 | 7.8 | 4.7 | |
| **Fracture only (N = 9424)** | | | | | |
| n | 1130 | 5844 | 1421 | 1029 | |
| Age, mean ± SD, mo | 7.5 ± 0.31 | 17.6 ± 0.29 | 17.2 ± 0.48 | 21.9 ± 0.49 | <.0001|
| Insurance, % | | | | | |
| Medicaid/self-pay | 80.7 | 51.8 | 60.1 | 56.6 | <.0001|
| Private/HMO | 15.7 | 44.0 | 34.9 | 38.4 | |
| Other | 3.7 | 4.2 | 5.0 | 5.0 | |
Age data were missing for 28% of the TBI group, 27% of the TBI and fracture group, and 37% of the fracture group. Insurance comparisons excluded 20 cases with unknown insurance status. HMO indicates health maintenance organization.
FIGURE 2
A, Causes of TBIs and fractures. Age data were available for 74.9% of eligible (weighted) cases (1952 of 2605 cases). B, Causes of TBIs and skull fractures. Age data were available for 73.1% of eligible (weighted) cases (1517 of 2075 cases). C, Causes of TBIs and nonskull fractures. Age data were available for 83.0% of eligible (weighted) cases (225 of 271 cases). D, Causes of TBIs and skull and nonskull fractures. Age data were available for 81.1% of eligible (weighted) cases (210 of 259 cases).
FIGURE 3
A, Causes of fractures only. Age data were available for 64.0% of eligible (weighted) cases (5375 of 8396 cases). B, Causes of skull fractures only. Age data were available for 71.1% of eligible (weighted) cases (1616 of 2274 cases). C, Causes of nonskull fractures only. Age data were available for 60.8% of eligible (weighted) cases (3621 of 5955 cases). D, Causes of skull and nonskull fractures. Age data were available for 81.1% of eligible (weighted) cases (137 of 169 cases).
health insurance of children with abusive versus accidental injuries, and important differences in the occurrence of accidental falls versus abuse during the first year of life. The incidence of TBIs and/or fractures attributable to abuse was 21.9 cases per 100 000 children <36 months of age and 50.0 cases per 100 000 children during the first year of life. Considerable overlap occurred between the 2 types of injuries; 28% of the abused children had both types of injuries. If only children with TBIs were examined, then the 42% of the abused group who had fractures only would be missed.
Because these types of abusive injuries are not rare, they should be the target of prevention programs that have become widespread in the United States. Such programs have used home visiting during the first few years of life to prevent abuse and neglect among socially high-risk, often first-time parents. None of the randomized trials of home visiting was large enough to study these types of abusive injuries but, as the number of home visiting programs increases, there is an opportunity to examine whether these programs can prevent TBIs and/or fractures, the 2 most-common types of serious abusive injuries among young children.
More-targeted prevention has focused on abusive head trauma.\textsuperscript{15,16} Because our data showed substantial overlap between TBIs and fractures attributable to abuse, it may be helpful for these targeted programs to broaden their focus and aim to prevent both types of abusive injuries, particularly because they focus on helping parents not hurt their crying infants.
There were 2 striking demographic differences between abused children and those with other causes of injury. The first was that abused children were substantially younger. Their younger age is likely attributable to their increased vulnerability to injuries caused by caretakers’ maltreatment and the increased challenges of caring for young infants, such as managing their crying. In addition, it is not surprising that accidental injuries occurred at an older mean age, when children have better motor skills and therefore might be injured in falls.
The second difference was that abused children were more likely to be enrolled in Medicaid (or to have no insurance), compared with children with other causes of injury. This difference suggests that abused children are more likely to be from economically impoverished families, as reported previously.\textsuperscript{17} Because 74.5% to 80.7% of the abused children were enrolled in Medicaid (or had no insurance), funding from this government program might aim to prevent the injuries themselves and thus decrease the costs of hospitalizations.
An alternative hypothesis to explain at least some of the differences in insurance status between abused and non-abused children relates to bias in the diagnosis of abuse by physicians. Previous studies demonstrated that physicians are more likely to suspect abuse and to report abuse to child protective services for minority children, compared with white children, and these minority children are more likely to have Medicaid as their health insurance.\textsuperscript{18}
The examination of the patterns of injuries revealed 3 important findings related to accidental falls versus abuse during the first year of life. First, for children <2 months of age, the most-common cause of TBI without a fracture was an accidental fall; because children of this age are unlikely to roll over, these accidental falls likely occurred when a caregiver accidentally dropped the infant or fell while holding the infant. Second, in the first year of life, TBI with a skull fracture occurred more commonly because of an accidental fall than abuse. Third, almost all children in the first year of life with isolated skull fractures were injured in accidental falls.
These 3 findings all involve head injuries in very young children, and the first 2 concern infants with TBIs. Although the clinical literature emphasizes that TBIs (with or without skull fractures) in infants can be caused by abuse, limited data have been able to compare directly similar patterns of injuries attributable to abuse and accidental falls. Greenes and Schutzman\textsuperscript{19} described 20 children <2 years of age who had TBIs and skull fractures and had no significant symptoms when admitted to the hospital. Of the 20 cases, 1 case was classified as abuse. Of the 19 children who were classified as having accidental injuries, most had small subdural, subarachnoid, or epidural hematomas, and 53% were <4 months of age. These results are similar to those shown in Fig 2B. A recent study by Wood et al\textsuperscript{20} showed that infants with isolated skull fractures rarely had positive skeletal survey findings, which suggested that the risk of abuse was low. These results are consistent with our findings that isolated skull fractures were almost always attributable to accidental falls (Fig 3B).
This study has 4 limitations. The first is the reliance on E-codes and ICD-9-CM codes for abuse to ascertain abusive versus accidental injuries. For these codes to be used correctly, the physician must document the decision regarding the cause of the child’s injury in the medical record, and then the hospital coder must interpret the physician’s note correctly and use the correct code. In particular, there has been concern that physicians might under-recognize abuse and/or not document that diagnosis clearly, which would make administrative data sets, such as the KID, a poor source of data. Data
from a study using the 2003 KID showed that the diagnosis of abuse varied according to the type of hospital. Among patients with long-bone or skull fractures, the proportion of children with abusive injuries was largest in freestanding children’s hospitals and smallest in community hospitals.\textsuperscript{21} Whether this difference reflects physicians’ willingness to diagnose abuse or variations in patients’ characteristics because of differences in the severity of injuries is not clear. In contrast, Ellingson et al\textsuperscript{7} showed that data from the KID could be used to provide incidence estimates of inflicted TBI in children <12 months of age that were very close to those provided in prospective, case-surveillance studies. In addition, we showed that the proportion of children who were <12 months of age and had abusive fractures was in the same range as that provided in the only published study that used surveillance to identify fractures resulting from abuse.\textsuperscript{8}
A second limitation relates to missing data. Approximately 17% of the overall sample (13% of children with TBIs or TBIs and fractures) had no E-code or an E-code with an unspecified cause, and thus no cause of injury could be determined. In addition, the specific age of the child in months was missing in 17% to 39% of the cases used to generate the graphs shown in Figs 1 to 3. If children with missing age data were different from those with data included in the graphs, then our results might be biased.
A third limitation is that data from all states are not included in the KID, and different regions are not sampled uniformly. The 2006 KID does include 38 states, however, and the populations of these states represent >88% of the US population.
Finally, because the KID includes only hospitalized children, our results do not include nonhospitalized children with injuries. Although children with TBIs are likely to be hospitalized, children with fractures, especially those >12 months of age and those with isolated injuries, often are not hospitalized. Therefore, we have underestimated the incidence of abusive injuries.
**CONCLUSIONS**
By using a large national database of children hospitalized in 2006, we have shown that there is considerable overlap in the occurrence of abusive injuries attributable to TBI and fractures, and ≥75% of these children are enrolled in Medicaid or have no health insurance. Findings on the frequency of injuries attributable to abuse versus accidents in the first 36 months of life should be helpful for clinicians evaluating injuries in young children.
**ACKNOWLEDGMENT**
Funding was provided by the Child Abuse Funds, Department of Pediatrics, School of Medicine, Yale University.
**REFERENCES**
1. Feldman KW, Bethel R, Shugerman RP, Grossman DC, Grady MS, Ellenbogen RG. The cause of infant and toddler subdural hemorrhage: a prospective study. *Pediatrics*. 2001;108(3):636–646
2. King J, Diefendorf D, Apthorp J, Negrete VS, Carlson M. Analysis of 429 fractures in 189 battered children. *J Pediatr Orthop*. 1988;8(5):585–589
3. Bechtel K, Stoessel K, Leventhal JM, et al. Characteristics that distinguish accidental from abusive injury in hospitalized young children with head trauma. *Pediatrics*. 2004;114(1):165–168
4. Leventhal JM, Thomas SA, Rosenfield SN, Markowitz RI. Fractures in young children: distinguishing child abuse from unintentional injuries. *Am J Dis Child*. 1993;147(1):87–92
5. Keenan HT, Runyan DK, Marshall SW, Nocera MA, Merten DF, Sinal SH. A population-based study of inflicted traumatic brain injury in young children. *JAMA*. 2003;290(6):621–626
6. Jayawant S, Rawlinson A, Gibbon F, et al. Subdural haemorrhages in infants: population based study. *BMJ*. 1998;317(7172):1558–1561
7. Ellingson KD, Leventhal JM, Weiss HB. Using hospital discharge data to track inflicted traumatic brain injury. *Am J Prev Med*. 2008;34(4 suppl):S157–S162
8. Leventhal JM, Martin KD, Asnes AG. Incidence of fractures attributable to abuse in young hospitalized children: results from analysis of a United States database. *Pediatrics*. 2008;122(3):599–604
9. Agran PF, Winn D, Anderson C, Trent R, Walton-Haynes L. Rates of pediatric and adolescent injuries by year of age. *Pediatrics*. 2001;108(5). Available at: www.pediatrics.org/cgi/content/full/108/3/e45
10. Agran PF, Anderson C, Winn D, Trent R, Walton-Haynes L, Thayer S. Rates of pediatric injuries by 3-month intervals for children 0 to 3 years of age. *Pediatrics*. 2003;111(6). Available at: www.pediatrics.org/cgi/content/full/111/6/e683
11. Agency for Healthcare Research and Quality. Overview of the Kids’ Inpatient Database (KID). Available at: www.hcup-us.ahrq.gov/kidoverview.jsp. Accessed October 1, 2008
12. National Center for Health Statistics. *International Classification of Diseases, Ninth Revision, Clinical Modification*. Hyattsville, MD: National Center for Health Statistics; 1999
13. US Census Bureau. Population estimates. Available at: www.census.gov/popest/states/files/SC-EST2006-AGESEX_RES.csv. Accessed December 8, 2008
14. Leventhal JM. Getting prevention right: maintaining the status quo is not an option. *Child Abuse Negl*. 2005;29(3):209–213
15. Dias MS, Smith K, deGuehery K, Mazur P, Li V, Shaffer ML. Preventing abusive head trauma among infants and young children: a hospital-based, parent education program. *Pediatrics*. 2005;115(4). Available at: www.pediatrics.org/cgi/content/full/115/4/e470
16. Barr RG, Rivara FP, Barr M, et al. Effectiveness of educational materials designed to change knowledge and behaviors regarding crying and shaken-baby syndrome in mothers of newborns: a randomized, controlled trial. *Pediatrics*. 2009;123(3):972–980
17. Berger LM. Income, family characteristics,
and physical violence toward children. *Child Abuse Negl.* 2005;29(2):107–133
18. Lane WG, Rubin DM, Monteith R, Christian CW. Racial differences in the evaluation of pediatric fractures for physical abuse. *JAMA*. 2002;288(13):1603–1609
19. Greenes DS, Schutzman SA. Occult intracranial injury in infants. *Ann Emerg Med.* 1998;32(6):680–686
20. Wood JN, Christian CW, Adams CM, Rubin DM. Skeletal surveys in infants with isolated skull fractures. *Pediatrics.* 2009;123(2). Available at: www.pediatrics.org/cgi/content/full/123/2/e247
21. Trokel M, Waddimba A, Griffith J, Sege R. Variation in the diagnosis of child abuse in severely injured infants. *Pediatrics.* 2006;117(3):722–728
Fractures and Traumatic Brain Injuries: Abuse Versus Accidents in a US Database of Hospitalized Children
John M. Leventhal, Kimberly D. Martin and Andrea G. Asnes
*Pediatrics* 2010;126;e104-e115; originally published online Jun 7, 2010;
DOI: 10.1542/peds.2009-1076
The Sino-North Korean borderland is a fine location from which to assess the state of Northeast Asia. Many fates intersect there. Even today, deep into the 21st century, it is one of very few places where you feel it is possible to receive an answer to the “North Korea question”. It is there that China, North and South Korea carefully watch each other, and we watch them in turn. And it is there that local elites and ethnic Korean communities subvert central attempts to bring the frontier under control.
Travelling its meandering length, rendered more than feasible by immense Chinese investment in transport infrastructure, is to parse multiple borderlands. High-speed rail has integrated the population of most of Dongbei (东北). Fast, comfortable and cheap, trains connect Beijing with Shenyang, the “Gateway to Manchuria” and home to both a US Consulate-General and Korean War memorial. From there the mainline shoots inexorably onwards to Changchun, the former capital of the Japanese colonial state, Manchukuo, before terminating in the Russified city of Harbin.
Other lines hang like droplets from this arterial route, creating north-south linkages to the border but without connecting borderland termini together. A high-speed branch line connects Shenyang with Dandong and Dalian, and a spur of the mainline links Beijing with Jilin, Yanji and Hunchun. Consequently, it takes just 90 minutes to get from Shenyang to Dandong, and a mere nine hours from the Chinese capital to the most easterly point of the China-North Korea border. Yet there is no high-speed link between Dandong and Hunchun, which means that traversing the borderland from end to end is still a journey of 21 hours,
---
1 This research was made possible by an Academy of Korean Studies grant (AKS-2015-R-49) and the War of Words project at Universiteit Leiden.
just as it has been for decades. This exacerbates pre-existing inequalities between borderland communities.
Most border journeys start at Dandong, as do most journeys by land out of North Korea. That is why, per South Korean anthropologist Kang Ju-won, who spent more than a year embedded in the city, Dandong is primarily a zone of human exchange. Kang’s second book, *The Amnok River flows differently* is a rich depiction of the who, why, where, when and how of North Korean livelihoods in the city, in which 20,000 workers from the DPRK quietly labor. It is the prevalence of firms employing these workers that draws the attention of economists Kim Byung-yeon and Jung Seung-ho. They use the municipality to partially quantify the basis of Chinese trade and investment with North Korea. Though they do not openly admit it for security reasons, the authors of *Person to Person* seem also to have conducted their research in the city. In this case, the respondents were North Koreans in Dandong on legal visit visas, another common category of those who pass through here.
---
2 Kang Ju-won, *Amnokkang-ün tarüge hüründä* [압록강은 다르게 흐른다 / The Amnok River flows differently], Seoul: Nulmin, 2016.
3 Kim Byung-yeon and Jung Seung-ho, *Chungguk-üi taebuk muyökkwa t’uja: Tandongsi hyönji kiop chosa* [중국의 대북 무역과 투자: 단동시 현지 기업 조사 / China’s Trade and Investment with North Korea: Firm Surveys in Dandong], Seoul: SNU Press, 2015.
4 Kang Dongwŏn and Pak Chŏngnan, *Saramgwâ saram: kimjŏngûn sidae ‘pukchosŏn inmin’ŭl mannada* [사람과 사람: 김정은 시대 ‘북조선 인민’을 만나다 / Person to person: meeting Kim Jong-un era ‘North Chosŏn people’], Seoul: Neona, 2015. For North Koreans who intend to return to their homes, even those on legal visit visas, talking is risky. Imprisonment beckons for she – it is predominantly she – who pulls back the curtain on the DPRK for foreign – nay, worse still, South Korean – researchers. That here is China, foreign sovereign territory, offers scant protection. Close historical links between the security services of the two countries at the local level mean that agents dispatched from North Korea operate with relative impunity in this borderland.
Today’s visitor to Dandong is struck by a lingering sense of unrealised potential. It takes an extreme form of relational dysfunction for two modern nation-states sharing a 1400km frontier to still be conducting 80% of their legal cross-border trade across one, single-lane, early 20th century road and rail bridge, yet that is what happens here. Absurdly, it is possible, albeit without accounting for the trade in natural resources, to judge the state of cross-border economic relations by sitting on the riverside with a coffee, counting trucks in and out of the DPRK.
It wasn’t supposed to be like this. Seven kilometres downstream in Dandong’s impressive New City there is a second, much bigger road bridge connecting dynamic Dandong with the North Korean city of Sinuiju. Except that it doesn’t. With the bridge agreed upon in 2009, when Wen Jiabao went to Pyongyang, it looked for all the world as if North Korea was considering a serious economic policy shift. If the neon lights of the Dandong skyline were not jarring enough for the North Pyongan Province locals, the new bridge now hangs on their horizon, pregnant with possibility. Yet everyone
---
5 Adam Cathcart and Christopher Green, “Xi’s Belt,” chapter 9 in Tiang Boon Hoo (ed.), Chinese Foreign Policy Under Xi, Abingdon, Oxon: Routledge, 2017.
knows that, for now at least, on the North Korean side its four lanes end ignominiously in a field. Sinuiju is another city on the brink of something, but nobody seems quite sure what.
Figure 4: Farming in the shadow of Dandong; southern Sinuiju in the spring of 2016.
Figure 5: From the air, the absurdity of it all becomes clear, as the new bridge comes to a shuddering halt in a field. Taken in June 2016 on a flight from nearby Uiju Airfield to Pyongyang. (image by Simon Cockerell)
Travel away from the border to the south and there is yet another road, built with the same energy aroused by Wen’s trip to Pyongyang. It is a simple, two-lane affair, but still a great improvement upon National Road No.1, the unpaved, axle-crushing dirt track that vehicles traverse en route for Pyongyang today. Unfortunately for driver and passenger alike, the new road, which begins east of South Sinuiju and culminates at the (paved) Pyongyang-Huichon Highway north of Anju, is also unconnected to the road network and lies unused, except by local cyclists. Today it functions as the widest, longest and arguably most expensive cycle path in East Asia.
Go up the Amnok (압록; aka Yalu) River for a few hundred kilometers and things begin to change at the town of Ji’an and its North Korean opposite number, Manpo. For one thing, an unassuming new bridge across the Yalu here is open and operational
---
c Adam Cathcart, “Revisiting Wen Jiabao in Pyongyang, October 2009,” Sino-NK, January 10, 2012.
(as, indeed, is a third one to the east near Hunchun;\textsuperscript{7} another one, at Namyang, is under construction). Ji’an attained brief infamy at the turn of the 21\textsuperscript{st} century as a key site in the bilateral struggle for ownership of Koguryŏ heritage, after China rather brazenly integrated the ancient kingdom into its national history, triggering a bilateral spat with Seoul whose intensity still waxes and wanes in inverse proportion to their mutual enmity toward Japan. The town plays host to the Koguryŏ Museum, one of the least user-friendly museums I’ve ever encountered. Presumably wary of the risk of South Koreans taking umbrage at the content and, worse, uploading evidence of rather blatant historical revisionism to the internet, in 2014 not only was it impossible to take photos in the museum; one had to leave all cameras and phones in another building entirely.
North Korea is visible from all points in this town, but visitors from the DPRK present no obvious impediment to the Chinese government’s creative reinterpretation of local history. At the nearby General’s Tomb (将军坟 장군총), said to be the last resting place of the late-4\textsuperscript{th} century monarch King Kwanggaet’o, I ask a guide whether, amidst all the squabbling over 1000-year-old relics, she has ever met a North Korean researcher from across the mountainous frontier. “One did come out here from Pyongyang once,” she recalls with a chuckle. “But all he was interested in was proving that Pyongyang has always been the
\textsuperscript{7} Im Sangbŏm, “[tandok] Chung, hunch’un-najin innŭn ‘sinduman’gang taegyo’ kaet’ong [[단독] 中, 춘춘-나진 익는 ‘신두만강 대교’ 개통 / [Exclusive] China, Hunchun-Rajin-connecting ‘New Tuman River Bridge’ opens], \textit{SBS}, October 2, 2016. \url{http://m.news.naver.com/read.nhn?mode=LSD&sid1=001&oid=055&aid=0000459789}.
capital of Korea.”
Out of town toward the towering Mt. Paektu, the river gets progressively narrower.\(^8\) In the lee of the mountain lies the city of Hyesan and its counterpart Chinese town, Changbai. Hyesan is isolated from the North Korean interior; a remote corner of an underpopulated province, accessible by dilapidated public transport that takes forever. When time counts for anything, it is unwise to attempt such a journey; it is invariably quicker to fly to a regional Chinese city, travel by train, then cross the border once again. Visitors to Hyesan from within North Korea say that the ponderous train exacerbates the sense of it being a disruptive frontier settlement; the kind of place that pays only selective attention to diktats from the center.
At Dandong, the Yalu is too broad to communicate across. By the time you reach Changbai it is possible to shout and be heard. Go east from Mt. Paektu along the Tumen River, however, and in places you could whisper. Human linkages are much more vibrant to the east of Mt. Paektu. The water freezes over in winter, and in places can be crossed on foot. This turns chilly towns like Musan and the larger Horyeong into hubs for escape from North Korea. There are just as many people who simply travel back and forth across the border, trading as they have always done.
---
\(^8\) At 9,000ft, Mt. Paektu casts a long shadow across the borderland, and not merely as the mythical wellspring of North Korean “Kimism”. It is also a volcanic risk factor for the entire region. It has been a thousand years since the last volcanic eruption, but it was serious when it happened, creating the 7km-wide crater at the summit that is known today as the picturesque Heaven Lake. The volcano is still alive, as volcanologist Kayla Iacovino explains. See: Kayla Iacovino, “Of Eruptions and Men: Science Diplomacy at North Korea’s Active Volcano,” *Sino-NK*, May 8, 2014. [http://sinonk.com/2014/05/08/of-eruptions-and-men-science-diplomacy-at-north-koreas-active-volcano/](http://sinonk.com/2014/05/08/of-eruptions-and-men-science-diplomacy-at-north-koreas-active-volcano/)
Stretching out from the northeastern face of Mt. Paektu is Yanbian, China’s Korean autonomous prefecture and home to approximately a million ethnically Korean citizens of the People’s Republic of China, most of whose ancestors settled here in the mid-19th century. A decreasing percentage of young residents of the prefecture speak functional Korean, especially in Antu and the northern city of Dunhua, and these days there is a net outflow of ethnic Koreans to the liberal visa regime of South Korea, too. Nevertheless, Yanbian retains its flavour, and is the easiest place for illegal migrant North Koreans to get around unimpeded. The local Korean dialect resembles that of Hamgyong Province, the rebellious frontier land to the south, and there is a (albeit slowly dwindling) reserve of sympathy for those who flee. Blending in is quite feasible.
In the 1990s, South Korean civil servants came here to get a handle on the scale of starvation across the border. What they found was a quintessential case of “borderland not bordered land”: people who recognized the border but didn’t particularly respect it as a division. This has changed markedly in the intervening years, with both China and North Korea investing in fences, cameras, upgraded customs houses and other
Figure 8: There is an exceptional collection of North Korean literature, journals, and newspapers at the Yanbian Library facility in the suburbs of Yanji (aka Yŏn’gil).
Figure 9: Zhu Dehai (朱德海; Chu Dŏk-hae, 주덕해), a prominent Chinese Korean, takes center stage in the revolutionary exhibit at Yanbian Museum, whereas Kim Il-sŏng is barely mentioned.
accoutrements of state control to try and bring the unruly border into line. But this has long been, and in the privacy of older minds still is, an intimately connected frontier zone where Korean meets Korean across a river that divides territory, but not ethnic bonds. State power can impede this shared history, but not completely eradicate it.
North Korean women who cross the border illegally here often live on the margins as the wives of older Chinese men in rural Yanbian villages. Many of them remain connected to family back inside North Korea. It is said that many would, all other things being equal, be content to remain in the border region to tend to those linkages. However, regressive Chinese policy choices accomplish the opposite, pushing the women toward resettlement and security in South Korea, a seemingly wrong-headed approach given the social cohesion and economic growth brought to troubled border communities by these exchanges. A presentation by Professor Chŏn Sinja of Yanbian University in Seoul late last year asserted that migrant North Korean women bring family, labour, and the green shoots of a revival in rural schooling to areas where once there was a preponderance of despair and its natural companions, drinking and gambling.
That is not to deny the unwelcome prevalence of an illicit trade in North Korean women. But there is a close causal link here. The exploitation of migrant women is exacerbated by their status as illegal border-crossers. Formalizing a process of integration for the women who migrate and settle in Yanbian would have positive ramifications, allowing the women to exercise more complete agency in borderland society.
In the end, it is inaccurate to speak of a single China-North Korea border zone. Go coast-to-coast from Dandong all the way up to Rason and you pass through very different environments; from the economic dynamism of Dandong and Sinuiju on the Yalu to the kinship bonds between Yanbian and the people of Hamgyong. It was never monolithic, of course, but the changing face of modern China is making it even less so.
EFFECT OF FILLER WEIGHT FRACTION ON THE MECHANICAL PROPERTIES OF BAMBARA GROUNDNUT (OKPA) HUSK POLYETHYLENE COMPOSITE
1,*Azeez Taofik Oladimeji, 2Olaitan Samuel Abiodun, 3Attanya Clement Uche, 4Onukwuli Dominic Okechukwu, 5Akagu Christian Chukwudi, and 6Menkiti Mathew Chukwudi
1Department of Biomedical Technology, School of Health Technology, Federal University of Technology, P. M. B. 1526, Owerri, Nigeria
2, 4, 6Department of Chemical Engineering, Faculty of Engineering, Nnamdi Azikiwe University, P. M. B. 5025, Awka, Anambra State, Nigeria
3Department of Metallurgical and Material Engineering, Faculty of Engineering, Nnamdi Azikiwe University, P. M. B. 5025, Awka, Anambra State, Nigeria
5Department of Architecture, Faculty of Environmental Sciences Nnamdi Azikiwe University, P. M. B. 5025, Awka, Anambra State, Nigeria
ARTICLE INFO
Article History:
Received 16th April, 2013
Received in revised form 11th May, 2013
Accepted 25th June, 2013
Published online 18th July, 2013
Key words:
Bambara groundnut husk Filler,
Polyethylene,
Mechanical properties.
ABSTRACT
The increased biomass of bambara groundnut husk (BGH) dumped as refuse in the environment, a consequence of the high consumption rate of bambara groundnut seed, has become an environmental concern. The effect of BGH filler on the mechanical properties of recycled polyethylene (RPE) and of recycled polyethylene blended with 20 percent virgin polyethylene (MPE) was investigated. BGH filler loadings of 10, 20, 25, 30 and 35 percent by weight were processed for the reinforcement of RPE and MPE at 150 MPa and 160°C in an injection moulding machine, and the mechanical properties of the composites were examined. The tensile strength of the composites increased with filler weight fraction up to 25 percent and then decreased; the tensile modulus, flexural strength and modulus, and hardness increased with filler weight fraction, while the impact strength decreased. The increased tensile strength indicates that BGH filler may be used for the reinforcement of RPE and MPE. There was a significant improvement in the mechanical properties of the MPE composite compared with the RPE composite at p < 0.01 and p < 0.05.
Copyright, IJCR, 2013, Academic Journals. All rights reserved.
INTRODUCTION
*Vigna subterranea*, commonly known as bambara or “okpa” groundnut, originated from West Africa (Hepper, 1963). It is commonly cultivated in the warm tropics of Sub-Saharan Africa (Gwekwerere, 1995). The abundance and wide spread of the bambara groundnut plant is due not only to the multipurpose nature of its seeds but also to human consumption, whether as “okpa” bean pudding, in milk making, or in bread making (Poulter and Caygill, 2006; Nwaichi et al., 2010; Okeke and Eze, 2006; Akande et al., 2009; Okpuzor et al., 2010; Swanevelder, 1998). It is a very important crop for poor people in Africa who cannot afford expensive animal protein (Okeke et al., 2007; Okeke and Eze, 2006; Baryeh, 2001) because of its high protein value, which improves nutrient intake. The plant is also a promising clean-up agent for the removal of heavy metal contamination from soils, being a potentially cost-effective and environmentally sustainable technique, although the toxicity of bambara bean diets harvested from contaminated soil continues to be assessed on test animals (Nwaichi et al., 2010; McGrath and Zhao, 2003). In fact, the sale of bambara paste is growing largely unchecked, without consideration of the health and microbial implications for consumers along the highways (Oranmokun and Braide, 2012). Several workers have successfully fed processed bambara groundnut to livestock such as poultry, based on its high nitrogen and phosphorus content and its suitability for livestock feeding (Arjeniwa and Omokhoje, 2004; Joseph et al., 2000; Akande, 2009). Researchers have also reported the proximate composition of both its seed flour and seed coat, with high cellulose and protein contents (Akegbejo-Samsons, 2010). The bambara groundnut husk (BGH) has become refuse along many highways in the south-east of Nigeria after removal of the bambara groundnut seed. This research was initiated not only to explore the benefits of using BGH as a filler for a polyethylene matrix, which, to the best of the authors’ knowledge, has not been documented, thereby reducing the biomass of BGH in the environment and the environmental threat it might pose, but also to study the significant effect of BGH filler loading on the mechanical properties of polyethylene composites.
MATERIALS AND METHOD
Both virgin and recycled polyethylene were used in this study. The waste polyethylene (PE) was obtained from the industrial waste of the IBETO Group of Companies and crushed using a fabricated crushing machine at the National Engineering Design and Development Institute (NEDDI) in Nnewi, Anambra State, Nigeria. The virgin PE was obtained from a chemical company in Enugu metropolis, Nigeria. The bambara groundnut seed husk (BGH) was obtained from the livestock feeds line in Ose Market, Onitsha, Nigeria.
Composite Preparation
The bambara groundnut seed husk (BGH) used was sun-dried and then oven-dried at 110°C for 2 days to a moisture content of about 4.0 percent. It was then crushed and sieved with an 18-mesh sieve.
Composites were prepared in the mixing ratios of virgin polyethylene, recycled polyethylene and BGH presented in Table 1.
**Table 1. Composition of the Prepared Composite**
| Sample | % Virgin PE | % recycled PE | % BGH Filler |
|--------|-------------|---------------|--------------|
| A | 0 | 100 | 0 |
| B | 0 | 90 | 10 |
| C | 0 | 80 | 20 |
| D | 0 | 75 | 25 |
| E | 0 | 70 | 30 |
| F | 0 | 65 | 35 |
| G | 20 | 70 | 10 |
| H | 20 | 60 | 20 |
| I | 20 | 55 | 25 |
| J | 20 | 50 | 30 |
| K | 20 | 45 | 35 |
**Composite Processing**
The BGH filler and polyethylene were fed into an injection moulding machine of the reciprocating screw type to produce the composite samples. The operating pressure and temperature of the injection moulding machine were 150 MPa and 160°C, respectively. The process time for each sample averaged 30–60 seconds. The following mechanical tests were carried out to assess the effects of the bambara groundnut husk filler on the mechanical properties of polyethylene. Samples of the BGH filler–polyethylene composites were cut into specified dimensions and tested at room temperature in accordance with ASTM standards.
**Tensile Testing**
Tensile tests were carried out on the samples using a KAOH TIEH Instron testing machine, in accordance with ASTM D 638-90, at a cross-head speed of 200 rev/min. The dimensions of each sample were 150 mm (length) x 30 mm (width) x 5 mm (thickness). Held by the gripping heads, the specimens were pulled until failure and the respective loads and extensions noted. The values thus gathered were used to calculate the strain, tensile strength, and modulus of specimens A to K using equations (1) and (2), as reported by Raju *et al.* (2012).
\[
T_s = \frac{P}{bt} \tag{1}
\]
\(T_s\) is the tensile strength of the sample, \(P\) is the pulling force, \(b\) is the sample width and \(t\) is the sample thickness.
\[
T_m = \frac{\sigma}{\varepsilon} \tag{2}
\]
\(T_m\) is the tensile modulus, \(\sigma\) is the stress and \(\varepsilon\) is the strain.
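For concreteness, here is a minimal Python sketch of equations (1) and (2); the failure load and strain below are hypothetical values, chosen only to reproduce the order of magnitude of the results reported later, and are not measured data.

```python
def tensile_strength(P, b, t):
    """Eq. (1): Ts = P / (b * t); MPa when P is in N and b, t are in mm."""
    return P / (b * t)

def tensile_modulus(stress, strain):
    """Eq. (2): Tm = sigma / epsilon over the initial linear region."""
    return stress / strain

# Hypothetical failure load on the stated 30 mm x 5 mm cross-section
Ts = tensile_strength(5271.0, 30.0, 5.0)  # ~35.1 MPa
Tm = tensile_modulus(Ts, 0.0408)          # ~861 MPa (assumed strain)
print(Ts, Tm)
```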
**Flexural Testing**
Flexural properties were determined by 3-point bending tests on composite samples with dimensions 60 mm (length) x 20 mm (width) x 20 mm (thickness), using a WP 300.4 bending device in accordance with ASTM D 790-90. Equations (3) and (4) were used to obtain the flexural strengths and moduli of the samples, respectively.
\[
R_f = \frac{3FL}{2bt^2} \tag{3}
\]
\(R_f\) is the applied flexural strength, \(F\) is the flexural load, \(L\) is length of the support span (mm), \(b\) is the width of the sample and \(t\) is the thickness of the composite sample.
\[
R_{m} = \frac{mL^3}{4bt^3} \tag{4}
\]
\(R_{m}\) is the flexural modulus, \(m\) is the slope of the tangent to the initial line portion of the load deflection curve.
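A corresponding sketch for equations (3) and (4); the load and slope values are hypothetical, while the cross-section follows the stated sample geometry and the span is an assumed 50 mm.

```python
def flexural_strength(F, L, b, t):
    """Eq. (3): Rf = 3*F*L / (2*b*t^2); MPa with N and mm inputs."""
    return 3 * F * L / (2 * b * t**2)

def flexural_modulus(m, L, b, t):
    """Eq. (4): Rm = m*L^3 / (4*b*t^3); m is the initial slope of the
    load-deflection curve in N/mm."""
    return m * L**3 / (4 * b * t**3)

print(flexural_strength(800.0, 50.0, 20.0, 20.0))  # 7.5 MPa
print(flexural_modulus(400.0, 50.0, 20.0, 20.0))   # ~78.1 MPa
```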
**Impact Testing**
The unnotched impact properties were determined for all specimens in accordance with ASTM D 256-90. Prepared specimens were subjected to fracture by a pendulum-type impact testing machine, and the unnotched toughness values of the composites were obtained by reading off the energy expended to rupture each specimen.
**Hardness Testing**
The Brinell hardness test was conducted on flat samples of both RPE and MPE using a manually operated universal testing machine. A hardened steel ball with a diameter of 10 mm was used in performing the test. The indentations on the specimens were measured (diameter-wise) and converted mathematically to obtain the Brinell hardness values, with reference made to Figure 6. The equation for the Brinell hardness number (\(HB\)) is given as:
\[
HB = \frac{2P}{\pi D(D-\sqrt{D^2-d^2})}
\]
\(P\) is the applied Load measured in kg, \(D\) is diameter of steel ball (10mm) and \(d\) is the diameter of indentation (mm)
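The same calculation in code form; the load and indentation diameter below are hypothetical, with D = 10 mm as stated in the text.

```python
import math

def brinell_hardness(P, D=10.0, d=2.5):
    """HB = 2P / (pi*D*(D - sqrt(D^2 - d^2))); P in kg, D and d in mm."""
    return 2 * P / (math.pi * D * (D - math.sqrt(D**2 - d**2)))

print(brinell_hardness(500.0))  # ~100 kg/mm^2 for a 2.5 mm indentation
```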
**Statistical Analysis**
The statistical analysis was conducted using SPSS Version 17.0 with bivariate correlations. Pearson’s correlation coefficient test was used, with a two-tailed p-value of less than 0.01 or 0.05 considered statistically significant for comparisons between the RPE and MPE composites.
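The same bivariate test can be sketched outside SPSS as follows; the arrays are hypothetical placeholders, not the study’s measurements.

```python
from scipy.stats import pearsonr

rpe = [30.3, 32.1, 33.8, 35.1, 31.0, 27.5]  # e.g., tensile strength, MPa
mpe = [30.5, 32.9, 34.6, 35.1, 31.6, 28.0]
r, p = pearsonr(rpe, mpe)                   # r and two-tailed p-value
print(f"r = {r:.3f}, p = {p:.4f}")          # significant if p < 0.01 or 0.05
```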
**RESULTS**
**Tensile Strength and Modulus**
The tensile strength of both the RPE and MPE composites increases from 30.33 MPa to 35.14 MPa with increased BGH filler loading up to 25 percent weight fraction and later decreases to 27.45 MPa as the BGH filler loading rises beyond 25 percent weight fraction, as illustrated in Figure 1.

The tensile modulus of the composites increases with increased weight fraction of BGH filler loading. Based on the results shown in Figure 2, the tensile modulus of the RPE composite increases from 240.7 to 861.2 MPa, an increase of about 257.8 percent, while that of the MPE composite increases to 924.7 MPa, about 284.2 percent.

**Flexural Strength and Modulus**
The flexural strength of the RPE and MPE composites increases with BGH filler loading up to 35 percent, by 74.74 and 55.26 percent, respectively. The flexural modulus of the RPE and MPE composites increases with increased filler fraction, by 121.14 and 148.78 percent, respectively.
DISCUSSION
The increase in tensile strength of the RPE and MPE composites at 25 percent weight fraction of BGH filler loading was the same, amounting to 15.86 percent, and the tensile strength then decreased by 9.5 percent for both composites beyond 25 percent filler loading. The increased tensile strength is attributed to the effective creation of an interfacial adhesion bond between the filler (hydrophilic) and PE (hydrophobic), and to morphological changes, as reported by many researchers (Luyt, 2009; Yao et al., 2008; Yang et al., 2004; Ratnam et al., 2010). A significant effect of BGH filler loading on tensile strength (p = 0.004 < 0.01) was also obtained, with a correlation coefficient of 0.949 between the MPE and RPE composites, indicating that the addition of 20 percent virgin PE was statistically significant in these composite applications. The greater tensile strength of the MPE compared with the RPE, as shown in Figure 2, indicates that the addition of 20 percent virgin PE significantly influenced the rigidity of the composite, although the tensile strength decreased once the weight fraction of BGH filler loading exceeded 25 percent. The addition of 20 percent virgin PE gave a significant effect of BGH filler loading on tensile modulus (p < 0.001), with a correlation of 0.990 between the RPE and MPE composites. The results of the flexural tests indicate that the composites become more flexible at increased BGH filler loading, as illustrated in Figures 3 and 4. This is a common phenomenon and is similar to the results of many researchers (Zaimi et al., 1996; Yang et al., 2004; Imosili et al., 2012). There is also a significant effect of the 20 percent addition of virgin PE on the flexural modulus of the RPE and MPE composites (p = 0.001 < 0.01), with a correlation coefficient of 0.981. This significant effect may be attributed to the improvement in flexibility conferred by the 20 percent weight fraction of virgin PE present in the MPE composite, which improves the stiffness of the composite.
The decrease in impact strength could be attributed to poor wetting of the BGH by the PE blends, which leads to poor interfacial adhesion between the fiber and the polymer matrix and creates weak interfacial regions. The poor interfacial adhesion that exists between a hydrophobic matrix and a hydrophilic filler usually results in decreased toughness, as reported by Raju et al. (2012). Thus, the decline in impact strength with increased BGH filler loading is attributed to the poor interfacial adhesion between the hydrophobic PE matrix and the hydrophilic BGH filler. In addition, the incorporation of fillers reduces polymer chain mobility, thereby lowering the ability of the system to absorb energy during fracture propagation. There is a significant effect on impact strength (p = 0.018 < 0.05), with a Pearson correlation coefficient of 0.939, between the MPE and RPE composites, owing to the 20 percent virgin polyethylene present in the MPE composite. The increase in hardness of the RPE and MPE can be attributed to adhesive bonding between the polyethylene matrix and the BGH filler, but the incorporation of 20 percent virgin PE makes the MPE hardness greater than that of the RPE. The hardness results show that the MPE is significantly harder than the RPE because of the 20 percent virgin PE. Statistically, the correlation coefficient between the RPE and MPE composites is 0.925, with a significance value of p = 0.008 (< 0.01).
Conclusion
The results obtained show that the tensile modulus, flexural strength and modulus, and hardness of the composites increase significantly with increased bambara groundnut husk filler. The tensile strength of the composites increases with filler loading up to 25 percent weight fraction of BGH and decreases above 25 percent weight fraction. It can also be deduced that the flexural strength and modulus increase with increased weight fraction of the filler. Moreover, there is a significant improvement in the mechanical properties of the composites containing 20 percent virgin polyethylene blended with recycled polyethylene (MPE) compared with composites of recycled polyethylene only (RPE).
REFERENCES
Akande, K. E., Abubakar, M. M., Adegbola, T. A., Bogoro, S. E., Doma, U. D. and Fahiyi, E. F. (2009). Nutrient Composition and Uses of Bambara Groundnut (*Vigna subterranea* (L.) Verdc.). *Continental J. Food Science and Technology*, 3: 8–13.
Akegbejo – Samsons, O. R. (2010). Functional Properties of Extruded Meat Analogue from Bambara Groundnut Protein Isolate. Department of Food Science and Technology, University of Agriculture, Abeokuta, Nigeria. (B. Sc. Thesis).
Arjeniwa, A. and Omokhoje, S. O. (2004). Performance, carcass traits and relative organ weights of broiler chickens fed processed bambara groundnut (*Vigna subterranea*) meals. *Nigerian Poultry Science Journal*, 2: 76 - 81.
Baryeh, E. A. (2001). Physicochemical properties of bambara groundnuts. *Journal of Food Engineering*, 47: 321–326.
Gwekwerere, Y. (1995). Pests and diseases of bambara groundnut in Zimbabwe. In: Bambara groundnut (*Vigna subterranea* (L.) Verdc.) promoting the conservation and use of underutilized and neglected crops. 9. (Editors: Heller, J., Begemann, F. and Mushonga, J.). *Proceedings of the workshop on conservation and improvement of bambara groundnut (*Vigna subterranea* (L.) Verdc.),* 14–16th November, 1995, Harare, Zimbabwe, pp. 84–86.
Hepper, F. N. (1963). Plants of the 1957-58 West Africa Expedition II: The bambara groundnut (*Voandzeia subterranea*) and Kersting’s groundnut (*Kerstingiella geocarpa*) wild in West Africa. *Kew Bulletin*, 16 (3): 395–407.
Imosili, P. E., Ukoba, K.O., Ihegbulam, C. M., Adigizi, D. and Olusunle, S.O.O. (2012). Effect of Filler Volume Fraction on the Tensile Properties of Cocoa-Pod Epoxy Resin Composite. *International Journal of Science and Technology*, 2 (7): 432-434.
Joseph, J. K., Awosanya, B., Adeoye, P. C. and Okekunle, M. R. (2000). Influence of graded levels of toasted bambara groundnut meal on rabbit carcass characteristics. *Nigerian Journal of Animal Production*, 27 (1): 86–89.
Luyt, A. S. (2009). Editorial corner – a personal view Natural fibre reinforced polymer composites – are short natural fibres really reinforcements or just fillers? *EXPRESS Polymer Letters*, 3 (6): 332.
McGrath, S. P. and Zhao, F. J. (2003). Phytoextraction of metals and metalloids from contaminated soils. *Curr. Opin. Biotechnol.*, 14: 277-282.
Nwaichi, E. O., Onyeike, E. N. and Wegwu, M. O. (2010). Performance and risk assessment of Bambara beans grown on petroleum contaminated soil and the biostimulation implications. *African Journal of Environmental Science and Technology*, 4 (4), 174-182.
Okeke, E. C. and Eze, C. (2006). Nutrient Composition and Nutritive Cost of Igbo Traditional Vendor Foods and Recipes Commonly Eaten in Nsukka. *Journal of Agriculture, Food, Environment and Extension*, 5 (1): 36-44.
Okeke, E.C., Enecobong, H.N., Uzuegbunam, A.O., Ozioko, A.O., Umeh, S.I. and Kühnlein, H. (2009). Nutrient Composition of Traditional Foods and Their Contribution to Energy and Nutrient Intakes of Children and Women in Rural Households in Igbo Culture Area. *Pakistan Journal of Nutrition*, 8 (4): 304-312.
Okpuzor, J., Ogbumagafor, H. A., Okafor, U. and Sofidiya, M. O. (2010). Identification Of Protein Types In Bambara Nut Seeds: Perspectives For Dietary Protein Supply In Developing Countries. *EXCELLENT*, 17 (2): 1-10.
Oranmokun, S. S. and Braide, W. (2012). A Study of Microbial Safety of Ready-To-Eat Foods Vended on Highways: Onitsha-Owerri, South East, Nigeria. *International Research Journal of Microbiology (IRJM)*, 3 (2): 66-71.
Poulter N.H. and Caygill J. C. (2006). Vegetable milk processing and rehydration characteristics of bambara groundnut [*Voandzeia subterranea* (L.) thouars]. *J Sci Food Agric*, 31: 1158 - 63.
Raju, G. U., Kumarappa, S. and Gaitonde, V. N. (2012). Mechanical and Physical Characterization of Agricultural Waste Reinforced Polymer Composites. *J. Mater. Environ. Sci.*, 3(5): 907-916.
Ratnayake, C. T., Fazlina, R. S. and Shamsuddin, S. (2010). Mechanical Properties of Rubber Wood Flour Filled PVC/ENR Blend. *Malaysian Polymer Journal*, 5 (1): 1-10.
Swanevelder, C. J. (1998). *Bambara food for Africa* (*Vigna subterranea* - bambara groundnut). National Department of Agriculture Pretoria, Republic of South Africa. Government printer, pp. 16.
Yang, H-S., Kim, H-J., Son, J., Park, H-J., Lee, B-J. and Hwang, T-S. (2004). Rice-husk flour filled polypropylene composites; mechanical and morphological study. *Composite Structures*, 63: 305–312.
Yao, F., Wu, Q., Lei, Y. and Xu, Y. (2008). Rice straw fiber-reinforced high-density polyethylene composite: effect of fiber type and loading. *Industrial Crops and Products*, 28(1): 63–72.
Zaimi, M. I., Fuad, M. Y. A., Ismail, Z., Mansor, M. S. and Mustafa, J. (1996). The Effect of Filler Content and Size on the Mechanical Properties of Polypropylene/Oil Palm Wood Flour Composites. *Polymer International*, 40: 51-55.
Ultra-broadband mobile networks from LTE-Advanced to 5G: evaluation of massive MIMO and multi-carrier aggregation effectiveness
Marco Neri, Maria-Gabriella Di Benedetto
Dept. of Information Engineering,
Electronics and Telecommunications (DIET)
Sapienza, Università di Roma
Roma, Italia
Tommaso Pecorella
Dept. of Information Engineering
Università di Firenze
Firenze, Italia
Camillo Carlini, Andrea Castellani, Pietro Obino, Pamela Sciarratta
Telecom Italia S.p.A.
Roma, Italia
Abstract—LTE-Advanced networks are spreading widely across the world, and they continue to evolve as new device features such as MIMO 4x4 and Carrier Aggregation are released to move towards the peak data rates introduced by 3GPP Releases 12 and 13. Mobile network Operators are looking for technologies that guarantee higher spectral efficiency and wider spectrum usage, but they have to deal with limitations imposed by the RF components of commercial devices. This paper analyzes several scenarios, compares them and suggests deployment strategies.
Index Terms—LTE, LTE-Advanced, MIMO, Carrier Aggregation, 5G, Massive MIMO, ns-3
I. INTRODUCTION
Nowadays mobile communications are used by most of the world population. According to [1], at the end of 2016 there were 7.5 billion mobile subscriptions, expected to reach 8.9 billion by 2022. LTE will remain the dominant technology even after the arrival of 5G at the beginning of 2020: in fact, LTE devices are predicted to number 4.6 billion by 2022, more than half of all mobile subscriptions.
This growing demand for data connections, as well as the diffusion of multimedia services, calls for higher data rates than those achievable with LTE. New commercial LTE-A devices are equipped with capabilities providing greater spectral efficiency over a wider spectrum:
- DL 256 Quadrature Amplitude Modulation (256QAM);
- 4x4 Multiple Input Multiple Output;
- 3, 4 or more Component Carrier Aggregation over licensed bands.
The diffusion of LTE-A devices is growing throughout the world, and Operators are upgrading mobile networks with configurations able to support 3GPP Rel-12 downlink Category 16, for data rates up to 1 Gbps [2]. In practice, this target data rate is not yet fully achievable because of Operators' spectrum fragmentation and of commercial device limitations due to RF components (mainly transceivers). In fact, the maximum number of downlink antenna ports managed by current commercial devices is typically 8. This number can be exploited in different ways: either using MIMO 4x4 on two aggregated bands, or using MIMO 4x4 on one band and MIMO 2x2 on two bands. The first configuration is theoretically higher-performing, with a maximum throughput of about 800 Mbps. Once this constraint is overcome, devices will support MIMO 8x8 and above to guarantee the target peak data rate of next-generation technologies [3].
Two possible approaches open up: the first is based on bandwidth broadening, the second on improving spectral efficiency. At the moment, mobile Operators are trying to figure out which approach, or combination of approaches, is best in terms of performance and costs.
The aim of this paper is to study different network configurations in order to evaluate the overall system performance in each case and provide suggestions on the deployment choices to be made. This study was carried out with a network simulator, and the results were compared with real measurements taken, in collaboration with TIM (Telecom Italia), on new commercial devices.
This paper is structured as follows: in Section II we present the case study and the principal features of the technology. In Section III we describe the method of our analysis and the implementation of the scenario within a network simulator. In Section IV we compare the results of the simulations with measurements made on commercial devices over a real network. Conclusions are drawn in Section V.
II. REFERENCE SCENARIO
We consider five cellular sites in a real neighborhood to represent a dense urban scenario. Each site is characterized by three sectors. In this network, the average cell radius is 550 m, leading to an overall coverage of about 4.5 km$^2$.
Our study considers the use of antennas transmitting with a Radio Base Station (RBS) output power of 43 dBm for each radio branch. These antennas are multi-array and can work at all the frequencies needed for carrier aggregation over four bands: 3GPP Band 20 (800 MHz), 3GPP Band 32 (1500 MHz), 3GPP Band 3 (1800 MHz) and 3GPP Band 7 (2600 MHz). These antennas can also support both MIMO 2x2 and MIMO 4x4. Given the dense urban scenario, we expect around 1000 users per site. Nowadays, video accounts for 50% of mobile data traffic, and by 2022 this percentage will rise to 75% [1]. For this reason, we focused our simulations on high-quality video streaming, which requires a minimum user throughput of 20 Mbps.
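For reference, the scenario parameters above can be collected into a plain configuration structure. The following is a minimal illustrative sketch in C++ (the implementation language of *ns-3*); the struct and field names are ours, not part of the ns-3 API.

```cpp
// Reference-scenario parameters as a plain C++ configuration struct.
// All names are illustrative; values are taken from the text above.
#include <vector>

struct ScenarioConfig {
    int    nSites         = 5;      // cellular sites
    int    sectorsPerSite = 3;      // sectors per site
    double txPowerDbm     = 43.0;   // RBS output power per radio branch
    double cellRadiusM    = 550.0;  // average cell radius
    double coverageKm2    = 4.5;    // overall covered area
    int    usersPerSite   = 1000;   // expected users (dense urban)
    double targetUserMbps = 20.0;   // high-quality video streaming target
    // Aggregated carriers: 3GPP band numbers and carrier frequencies (MHz)
    std::vector<int> bands   = {20, 32, 3, 7};
    std::vector<int> freqMHz = {800, 1500, 1800, 2600};
};
```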
Among the features described in Section I, we mainly focused the analysis on MIMO and Carrier Aggregation and we examined the advantages and the disadvantages that each feature entails. Here, we give a brief description of both:
a) **Carrier Aggregation**: CA increases the peak data rate by concatenating several frequency channels in order to transmit over a wider bandwidth, up to 100 MHz. This technique has recently been extended to unlicensed spectrum, leading to LTE Licensed Assisted Access (LAA), which can aggregate up to 60 MHz in the 5 GHz spectrum [4].
b) **MIMO 4x4**: LTE-A Release 10 extended the number of transmission layers up to 8 in order to guarantee higher data rates. In the future, 5G will use Massive MIMO technologies that will further extend the number of simultaneously transmitted and received streams [5]. Nowadays, it is very difficult to implement such a complex radio interface on a device, and the latest commercial devices support only MIMO 4x4. It is important to highlight that this technique is not very robust with respect to noise and requires high SINRs. Note that, beyond the sheer number of antennas, a further novelty has been introduced with active antennas based on digital beamforming: beamforming focuses the signal between the UE and the eNB so that the useful signal increases with respect to the interference.
The broadening of the usable spectrum should lead to significant improvements in performance [6], but mobile network Operators must deal with a limited availability of bandwidth. They are therefore encouraged to move towards the deployment of massive MIMO, but this implies higher manufacturing and implementation costs. Moreover, on the user equipment side, Operators still face the downlink antenna-port limit imposed by RF components. Since MIMO (both 4x4 and 8x8) is highly affected by noise, it is not always available to user equipment, especially in a dense urban scenario where reflection and refraction phenomena are frequent, leading to performance departing from theoretical limits, as highlighted in [7].
III. ANALYSIS AND SIMULATIONS
In our analysis, we used *ns-3*, a discrete-event network simulator designed as a set of libraries written in C++ [8]. *ns-3* is organized in modules (e.g. LTE, Internet), each supplying a single functionality or layer. This tool allows setting the mobility of each UE, so that we could simulate a heterogeneous environment, with some devices at fixed positions and others moving at a maximum speed of 60 km/h, which is very likely the maximum achievable speed in a dense urban environment. The radio environment is that described in [9]: this recommendation provides guidance on outdoor short-range transmissions (less than 1 km) for both line-of-sight (LoS) and non-line-of-sight (NLoS) environments. In our scenario we consider the latter: it takes into account *urban canyons*, characterized by buildings of several floors that can contribute significantly to long path delays, and a large number of vehicles that may act as reflectors adding Doppler shift to the waves.
We simulated the deployment of DL Category 16 devices that can reach up to 800 Mbps (with DL 256QAM), depending on which of the configurations described in Section I is used.
The simulator provides an output interface to read each transmission parameter: transmitted and received packets and bytes, as well as the throughput associated with a single data flow, and hence with a single user.
Hence, we evaluated the overall system performance and the *effectiveness* of Carrier Aggregation and MIMO 4x4. For the latter, we evaluated the percentage of transmissions in which it was activated. We set the devices to achieve a target data rate of 20 Mbps; in this way we can also evaluate the system at saturation.
The simulator is continuously being enhanced, but it does not yet provide the full set of LTE-A functionalities. To introduce DL 256QAM, we modified the library that oversees Adaptive Modulation and Coding: we updated the tables relating spectral efficiency, CQI, MCS and TBS to those described in [10]. The table linking spectral efficiency to CQI was originally proposed by Qualcomm in [11]; since no extension covering 256QAM has been proposed so far, we introduced our own, following the construction of the original.
In *ns-3*, MIMO is not implemented as the actual use of multiple antennas in transmission and reception. Instead, the model captures the gain that MIMO schemes bring to the system from a statistical point of view. This solution is based on [12] and covers only MIMO 2x2.
We therefore propose another technique to implement MIMO, with either 2 or 4 layers. As 3GPP shows in [10], there are precise translation tables to be considered when switching from a single layer to multiple layers. In particular, we focused our attention on Tables 7.1.7.2.2-1 and 7.1.7.2.5-1: the former refers to the translation from one layer to two layers, while the latter refers to the translation to four layers. From these, it is evident that the usage of MIMO 2x2 leads on average to a doubling of the TBS; equally, the TBS is quadrupled in the presence of MIMO 4x4. The observed relationship between the transport block size and the number of layers led us to implement MIMO schemes as a multiplication of the TBS. We adapted the function `GetTbSizeFromMcs` so that it returns the TBS doubled (or quadrupled) depending on the MIMO scheme, as sketched below.
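A minimal standalone sketch of this modification follows. It assumes the single-layer TBS (normally looked up from the TS 36.213 tables inside `GetTbSizeFromMcs`) is already available, and shows only the rank-dependent multiplication we added; it is not the actual ns-3 code.

```cpp
#include <cstdint>
#include <iostream>

// baseTbs: single-layer transport block size (bits) for the given
// MCS/PRB allocation, as looked up from the TS 36.213 TBS tables.
uint32_t ScaleTbsForMimo(uint32_t baseTbs, uint8_t rank)
{
    switch (rank) {
        case 2:  return baseTbs * 2;  // MIMO 2x2: TBS roughly doubles
        case 4:  return baseTbs * 4;  // MIMO 4x4: TBS roughly quadruples
        default: return baseTbs;      // SISO / single layer
    }
}

int main()
{
    // 75376 bits/ms is the largest single-layer TBS over 20 MHz (100 PRBs).
    std::cout << ScaleTbsForMimo(75376, 4) << " bits per TTI\n";
}
```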
However, since MIMO 4x4 is affected by noise even more than the modulation scheme, it can be activated only with very good channel quality (CQI > 14); we therefore allowed its activation only for MCS higher than 24. With lower channel quality, it is permitted to transmit with a lower rank, namely three layers, in order to still speed up the transmission; this is allowed for MCS greater than 21. In the code, this logic is implemented with an *if clause* in the `LteAmc` library.
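The rank-gating logic can be summarised as follows (a sketch of our *if clause*, not stock ns-3 code):

```cpp
#include <cstdint>

// Select the transmission rank from the current MCS, following the
// thresholds described in the text: rank 4 only for MCS > 24
// (CQI > 14), rank 3 for MCS > 21, MIMO 2x2 otherwise.
uint8_t SelectRank(uint8_t mcs)
{
    if (mcs > 24) return 4;  // excellent channel: full MIMO 4x4
    if (mcs > 21) return 3;  // good channel: transmit on three layers
    return 2;                // default: MIMO 2x2
}
```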
Our proposal to implement Carrier Aggregation is simple: we co-locate multiple devices or eNBs and, if necessary, make them move jointly. This mirrors reality: a device has an antenna for each frequency used to receive and/or transmit. One potential problem is resource scheduling performed without mutual awareness. In fact, there are two kinds of scheduling procedure: the Single-Carrier Scheduler and the Cross-Carrier Scheduler [13]. The former is the one used in real networks today and does not require the receivers on different frequencies to be aware of one another.
When simulating, we dealt with an implementation problem with UDP traffic: within the simulator, each flow saturates when the data flow, on the eNB side, exceeds 75 Mbps. We overcame this problem with our implementation of CA: in fact, since we use four antennas per sector, we have 12 co-located antennas per site, each of which can guarantee at most 75 Mbps. Hence, the whole site can provide a 900 Mbps data rate, which is close to the real network bottleneck of 1 Gbps.
To design the scenario within the simulator, we set the positions of the antennas and the directions of the respective beams to maximize coverage. The simulator includes a tool to analyze the SINR levels over the considered area, the *Radio Environment Map Helper*. It is impossible, though, to obtain a general map simultaneously including the contributions of all frequencies: the helper works at only one frequency per simulation.
In Fig. 2 we provide the SINR map at 800 MHz. Of course, the higher the frequency, the worse the coverage.
We also made measurements on DL Category 16 commercial devices to evaluate both the data rates and the performance of the LTE-A features. Unlike the simulations, real measurements were made only with the 2CA configuration with MIMO 4x4. In fact, the combination of the four available LTE carriers was introduced in the 3GPP Standard in July 2017 within [14], and there are still very few commercial devices able to support it.
IV. RESULTS
With the simulations we analyzed the behavior of the overall network. Tools like *ns-3* are fundamental for studies like this one, since such an analysis cannot be performed on a real scenario.
First of all, we provide the results of the simulations with the 4-CA configuration, which leads to the usage of 65 MHz over the four bands 800, 1500, 1800 and 2600 MHz. MIMO 4x4 cannot be used because of the 8-antenna-port limitation imposed by the RF components of the transceivers.
To evaluate the peak data rate for this configuration, we start from the rate over 20 MHz with DL 64QAM and SISO transmission, which is about 75.7 Mbps. We then add the contributions of MIMO 2x2 (a factor of 2) and of 256QAM (a factor of 4/3, since 256QAM carries 8 bits per symbol versus 6 for 64QAM). Finally, we have:
\[
\left[ \left( 75.7 \cdot \frac{65}{20} \right) \cdot 2 \right] \cdot \frac{4}{3} \approx 650\ \text{Mbps}
\]
The overall system throughput, evaluated as the sum of the data rates of all UEs, is 790 Mbps. The statistics showed that all cells and frequencies were used.
The second simulation was focused on the aggregation of only two carriers with MIMO 4x4 on both. The best configuration is the one that aggregates 40 MHz from the 1800 and 2600 MHz bands.
In this way the peak data rate for a single user is
\[
\left[ \left( 75.7 \cdot \frac{40}{20} \right) \cdot 4 \right] \cdot \frac{4}{3} \approx 800\ \text{Mbps}
\]
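Both estimates follow the same recipe, which can be expressed as a small helper (a sketch; the 75.7 Mbps baseline and the 4/3 gain of 256QAM over 64QAM are those stated in the text):

```cpp
#include <iostream>

// Peak-rate estimate: 75.7 Mbps per 20 MHz (DL 64QAM, SISO), scaled
// by the aggregated bandwidth, the number of MIMO layers, and the
// 4/3 modulation gain of 256QAM over 64QAM (8 vs 6 bits per symbol).
double PeakRateMbps(double bandwidthMHz, int layers, bool use256Qam)
{
    const double basePer20MHz = 75.7;  // Mbps, 64QAM SISO over 20 MHz
    double rate = basePer20MHz * (bandwidthMHz / 20.0) * layers;
    if (use256Qam) rate *= 8.0 / 6.0;  // 256QAM vs 64QAM
    return rate;
}

int main()
{
    std::cout << PeakRateMbps(65.0, 2, true) << '\n';  // ~656, "about 650"
    std::cout << PeakRateMbps(40.0, 4, true) << '\n';  // ~807, "about 800"
}
```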
This simulation was mainly focused on evaluating the robustness of MIMO 4x4 with respect to SINR. To do that, we read through the MAC statistics and evaluated the effectiveness as the ratio between the Transport Blocks on which MIMO 4x4 was activated and the total number of TBs, as sketched below.
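A minimal sketch of this metric follows; the per-TB record type is ours (illustrative), and in practice its fields are parsed from the simulator's MAC statistics output.

```cpp
#include <cstddef>
#include <vector>

struct TbRecord { bool mimo4x4; };  // was MIMO 4x4 active on this TB?

// Fraction of Transport Blocks on which MIMO 4x4 was activated.
double Mimo4x4ActivationRatio(const std::vector<TbRecord>& tbs)
{
    if (tbs.empty()) return 0.0;
    std::size_t active = 0;
    for (const TbRecord& tb : tbs)
        if (tb.mimo4x4) ++active;
    return static_cast<double>(active) / tbs.size();
}
```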
Since the simulated environment is characterised by many users transmitting simultaneously and by strong multipath and interference, it is very hard to achieve high channel quality, and MIMO 4x4 was activated on only 27% of the Transport Blocks. For comparison, note that 256QAM was activated on 75% of the TBs. The overall system throughput is therefore 630 Mbps, lower than that of the first configuration, even though this one is theoretically higher-performing.
To achieve higher SINR levels, it is possible to increase the number of cells. Note, however, that this could lead to greater inter-cell interference and to a worsening of the data rate, so the deployment must be designed properly: it is necessary to reduce the coverage of each cell and to direct the beams so that they do not interfere with one another. Since this can lead to very high performance, 5G will rely on the deployment of so-called small cells. In our work, we added a further site to the scenario depicted in Fig. 2 to increase the SINR level. MIMO performance was indeed enhanced: the activation percentage grew to 33% and the system throughput reached 670 Mbps.
TABLE I. Simulation results for the two configurations.

| CONFIGURATION | SYSTEM THROUGHPUT | SPECTRAL EFFICIENCY |
|---------------|-------------------|---------------------|
| 4 CA, MIMO 2x2, 256QAM | 790 Mbps | 12.15 bps/Hz |
| 2 CA, MIMO 4x4, 256QAM | 630 Mbps (670 Mbps with six sites) | 15.75 bps/Hz (16.75 bps/Hz with six sites) |
It is interesting to highlight that with this configuration only two bands are used (i.e. 1800 and 2600 MHz), so the network can serve other devices that aggregate the remaining bands, such as DL Category 6 devices. The overall capacity thus increases considerably, since the two configurations are independent of one another and their throughputs can be summed.
With real measurements we evaluated both the peak performance of the devices and the effectiveness of MIMO 4x4. We measured for 15 seconds with 700 Mbps of UDP DL traffic. The real maximum throughput is 600 Mbps because of core network limitations that will be overcome by the end of 2017. The throughput trend is shown in Fig. 3 and the main results are in TABLE II.
TABLE II. Measurement results on a DL Category 16 commercial device.

| PARAMETER | VALUE |
|--------------------|-------------|
| Theoretical TPUT | 691.5 Mbps |
| Measured Peak TPUT | 596.17 Mbps |
| Measured AVG TPUT | 469.83 Mbps |
| B7 MIMO 4x4 % | 36.1% |
| B3 MIMO 4x4 % | 43.8% |
Note that the measurements were taken in optimal radio conditions. In fact, the average SINR on the four layers is always very high over both frequencies, as we can see in TABLE III.
TABLE III. Average SINR per receive branch on the two carriers.

| | 2600 MHz | 1800 MHz |
|----------|----------|----------|
| SINR RX1 | 29.94 dB | 28.54 dB |
| SINR RX2 | 29.27 dB | 29.79 dB |
| SINR RX3 | 29.17 dB | 29.35 dB |
| SINR RX4 | 29.57 dB | 28.76 dB |
The measurements in TABLE II thus show that MIMO 4x4 has a low activation percentage, since it is affected by interference much more strongly than 256QAM, which was activated in more than 99% of transmissions. These results match those obtained in the simulations, which highlighted the low efficiency of the 4-layer technique.
Finally, we made a laboratory test to study MIMO 4x4 more deeply. First, we analyzed a system with no correlation between the transmit and receive antennas, and we compared the throughput obtained with the two highest modulation schemes. In Fig. 4 we can see that the 4-layer technique effectively doubles the downlink throughput. A further increment is given by the adoption of 256QAM, as in Fig. 5.
Those shown in Fig. 4 and Fig. 5 are actually the theoretical behaviors of multiple-layer transmission techniques. If the propagation channels between the antennas can be described as statistically independent and identically distributed, then multiple independent channels can be created by precoding. In practice this rarely holds: channels are often correlated, and the throughput gain shown in the figures above is not achievable.
We then made measurements in conditions of low and medium/high correlation between the paths to reproduce a real radio environment. In dense urban scenarios like the one we are describing, interference and multipath are usually strong, and the usage of MIMO 4x4 does not deliver the expected throughput boost. As shown in Fig. 5, the theoretical limit with MIMO 4x4 and 256QAM is around 400 Mbps over 20 MHz. In Fig. 6, we can see that this value is far from being achieved, in particular with medium/high correlation.
These measurements, too, showed the weakness of MIMO 4x4 in real scenarios with multipath and low signal levels, as already highlighted by the simulations and the field measurements. However, with optimal channel quality, this technology can give a huge contribution to throughput.
V. CONCLUSIONS
In this paper we presented an evaluation of the performance of MIMO 4x4 and Carrier Aggregation in LTE-A in diverse scenarios, highlighting the limitations and the opportunities that mobile network Operators are dealing with. We also showed the system implementation within a network simulator and the changes made to update it to the latest LTE-A Releases. We studied two main configurations in order to evaluate the effectiveness of the LTE-A capabilities and compared the results with real measurements to identify the deployment strategies that mobile Operators should adopt. In particular, we focused our attention on new devices that have just entered the market or will become commercial by the end of 2017.
Simulation results were consistent with the theoretical analysis: the 2-band configuration with MIMO 4x4 on both bands can outperform the 4-band configuration with MIMO 2x2, but it requires an environment with very low noise. This leads to the need for smaller cells which, if well dimensioned, would guarantee better coverage and better signal levels, but which would entail high deployment costs for Operators.
Hence, we suggest deploying MIMO 4x4 – and 8x8 when it becomes available – only in those sites where high SINRs are achievable, such as indoor scenarios, or for those applications that require very high data rates and low latency. Moreover, MIMO 4x4 should be adopted by Operators with limited spectrum availability. Massive MIMO will be the dominant technology for 5G communications, which will provide very dense cells and very high SINR levels.
Finally, it is interesting to underline that the peak spectral efficiency in the 800 Mbps configuration is 20 bps/Hz, which is comparable to the 30 bps/Hz required by the standard for 5G telecommunications [3], despite the antenna-port restriction that will be overcome with next-generation technologies. This shows that LTE-A is already close to the performance that next-generation telecommunication networks will deliver.
Future work will focus on simulating miscellaneous scenarios with diverse user equipment Categories and on comparing the results of simulations involving the latest technologies with real measurements that are not yet available. On the simulator side, the limit on UDP traffic must be overcome in order to completely fulfil the data rate requirements of new devices. Moreover, it will be necessary to implement MIMO 4x4 (and above) within it.
REFERENCES
[1] Ericsson, “Ericsson Mobility Report,” Ericsson, Tech. Rep., 11 2016.
[2] 3GPP, “Evolved Universal Terrestrial Radio Access (E-UTRA); User Equipment (UE) radio access capabilities,” 3rd Generation Partnership Project (3GPP), TS 36.306, Oct. 2016. [Online]. Available: http://www.3gpp.org/ftp/Specs/html-info/36306.htm
[3] ——, “Study on scenarios and requirements for next generation access technologies,” 3rd Generation Partnership Project (3GPP), TR 38.913, Oct. 2016. [Online]. Available: http://www.3gpp.org/ftp/Specs/html-info/38913.htm
[4] ——, “Feasibility Study on Licensed-Assisted Access to Unlicensed Spectrum,” 3rd Generation Partnership Project (3GPP), TR 36.889, Jul. 2015. [Online]. Available: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=2579
[5] E. G. Larsson and L. Van der Perre, “Massive MIMO for 5G,” IEEE 5G Tech Focus, vol. 1, no. 1, June 2017.
[6] D. Micheli, M. Barazzetta, C. Carlini, R. Diamanti, V. Mariani Primiani, and F. Moglie, “Testing of the Carrier Aggregation Mode for a Live LTE Base Station in Reverberation Chamber,” IEEE Transactions on Vehicular Technology, vol. 66, no. 4, pp. 3024–3033, April 2016.
[7] E. Lahetkangas, K. Pajukoski, E. Tirola, J. Hamalainen, and Z. Zheng, “On the performance of LTE-Advanced MIMO: How to set and reach beyond 4G targets,” 18th European Wireless Conference, 2012.
[8] “ns-3.” [Online]. Available: http://www.nsnam.org
[9] ITU, “Propagation data and prediction methods for the planning of short-range outdoor radiocommunication systems and radio local area networks in the frequency range 300 MHz to 100 GHz,” International Telecommunication Union Radiocommunication Sector (ITU-R), Recommendation 1411-8, Jul. 2015. [Online]. Available: https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.1411-8-201507-S!!PDF-E.pdf
[10] 3GPP, “Evolved Universal Terrestrial Radio Access (E-UTRA); Physical layer procedures,” 3rd Generation Partnership Project (3GPP), TS 36.213, Jun. 2017. [Online]. Available: http://www.3gpp.org/ftp/Specs/html-info/36213.htm
[11] 3GPP TSG-RAN WG1, “Conveying MCS and TB size via PDCCH,” Apr. 2008. [Online]. Available: http://www.3gpp.org/ftp/tsg_ran/WG1_RL1/TSGR1_52b/Docs/R1-081483.zip
[12] S. Catteaux, L. J. Greenstein, and V. Erceg, “Some results and insights on the performance gains of MIMO systems,” IEEE Journal on Selected Areas in Communications, vol. 21, no. 11, pp. 839 – 847, June 2003.
[13] J. Wannstrom, “Carrier Aggregation explained,” Jun. 2013. [Online]. Available: http://www.3gpp.org/technologies/keywords-acronyms/101-carrier-aggregation-explained
[14] 3GPP, “Evolved Universal Terrestrial Radio Access (E-UTRA): User Equipment (UE) radio transmission and reception,” 3rd Generation Partnership Project (3GPP), TS 36.101, Jul. 2017. [Online]. Available: http://www.3gpp.org/ftp/Specs/html-info/36101.htm
4-79. Fault Isolation Guide
4-80. The Fault Isolation Guide is a simplified check of instrument performance. It is intended to direct the troubleshooter to the defective circuit or circuits. There are three basic cases of improper operation:
1. The instrument will not turn on. Use the Power Up procedure.
2. The instrument turns on, but the problems seem to be spread throughout the instrument or are erratic. This may be due to any of a number of power supply related problems. Use the Power Up procedure.
3. The instrument has a problem in one or more functions or ranges. Use the Improper Operation procedure.
4-81. POWER UP PROCEDURE
CAUTION
Line power voltage is present from the power cord throughout the primary circuit of the main power transformer. Do not contact this voltage.
4-82. If the instrument cannot be turned on, the problem may lie in several areas: line power may not be present, the AC POWER switch may be in the OFF position, the main power fuse, F1, may be blown, or there may be a power supply problem. Power supply problems can be caused both by faults in the power supply circuitry and by shorts in the instrument loading the power supply down. Points to consider when attempting to isolate power problems are listed below. If the UUT will operate but the symptoms are erratic, widespread, or seemingly disassociated, go directly to item 6.
1. Ensure that line power is present at the receptacle being used.
2. Ensure that the Rear Panel AC POWER switch is in the ON position.
3. Check F1, the main power fuse.
4. Make continuity measurements between chassis common and the three pins of the power receptacle. The ground pin to common should read zero ohms. The other two pins to common should read infinite resistance.
5. Make the continuity measurement between the two non-ground pins of the Rear Panel power receptacle. (The AC POWER switch should be in the ON position.) There should be some slight resistance because the measurement is taken through the primary windings of the main power transformer.
6. Measure the power supply voltages at TP1, TP2, TP3 and TP4 with the Front Panel POWER ON/STBY switch in both positions. Use E1 as the common reference when making these measurements. If all voltages are within the limits listed in the Power Supply Voltage Adjustment procedure, proceed to item 7. If one or more voltages are incorrect when the POWER ON/STBY switch is in the STBY position, the problem is in the power supply; proceed to item 6, Part A. If one or more voltages are incorrect only when the POWER ON/STBY switch is in the ON position, the fault does not lie in the power supply; proceed to item 6, Part B.
a. Problems in the power supply can be tracked down using conventional methods, but remember that:
1) The -5V supply "tracks" the +5V supply so if the +5V supply has a problem, it will affect both power supplies.
2) The +5V supply "tracks" the +12V supply, so if the +12V supply has a problem, it will affect the +12V, +5V and -5V supplies.
3) The -12V supply, after the rectifier CR1, is independent of the other supplies. Problems in all four power supplies indicate that the fault lies in the primary circuit, the transformer, or CR1.
b. Shorts that load down a particular power supply can best be isolated by disconnecting the pcbs that plug into the Main PCB, one at a time. Remember to turn the instrument off before disconnecting or connecting cables, plugs or pcbs. If the short cannot be located by unplugging pcbs, use the current probe tracing procedure described in Troubleshooting Techniques. Start at the output of the power supply that is loaded down. This is the logical point and also gives the approximate amount of current drawn by the short.
4-83. IMPROPER OPERATION PROCEDURE
4-84. The Improper Operation Procedure is a simple dynamic test of the instrument. The procedure provides a speedy overall view that is interpreted by Table 4-13 to guide the technician to the most likely circuits. Additional information can be gained to aid troubleshooting by performing the Performance Checks indicated by the results of the procedure. The UUT can pass all parts of the procedure and still have faults; should the UUT pass the procedure, do the Performance Checks. Perform the Improper Operation Procedure as follows:
1. Set the instrument controls as follows:
| Control | Setting |
|------------------|-----------|
| RESOLUTION | 1 kHz |
| SEP/COM | SEP |
| FILTER | OUT |
| CHANNEL A & B | |
| TRIGGER LEVEL | PRESET |
| ATTEN | X1 |
| AC/DC | DC |
| ± | + |
| REF | INT |
2. Connect the LF synthesizer to the Channel A input terminal of the UUT via a 50Ω termination.
3. Program the LF synthesizer for an output of 1 MHz at a level of 100 mV rms.
4. To check the FREQ A function:
a. Set the FUNCTION control to the FREQ A position.
b. Verify that the GATE annunciator is flashing and that the display is 1.000 MHz.
5. To check the CPM X100A function:
a. Set the FUNCTION control to the CPM X100A position.
b. Verify that the GATE annunciator is flashing and that the display is 600000.
6. To check the FREQ C function: Refer to Section 6.
7. To check the RATIO A/B function:
a. Set the FUNCTION control to the RATIO A/B position.
b. Set the SEP/COM switch to the COM position.
c. Verify that the GATE annunciator is flashing and that the display is 1.0.
d. Set the SEP/COM switch to the SEP position.
8. To check the PER A function:
a. Set the FUNCTION control to the PER A position.
b. Verify that the GATE annunciator is flashing and that the display is 0.0010 msec.
c. Set the RESOLUTION control to the 10 ns position.
d. Verify that the GATE annunciator is flashing and that the display is 1.00 μsec. (R0 check.)
9. To check the PER AVG A function:
a. Set the FUNCTION switch to the PER AVG A position.
b. Verify that the GATE annunciator is flashing and the display is 0.00100 msec.
10. To check the TI A-B function:
a. Set SEP/COM switch to COM and the FUNCTION control to the TI A-B position.
b. Set the Channel B ± control to the - position.
c. Verify that the GATE annunciator is flashing and that the display is 0.50 μsec.
d. Set the RESOLUTION control to the 100 ns position.
e. Verify that the GATE annunciator is flashing and that the display is 0.0005 msec.
11. To check the TIA A-B function:
a. Set the FUNCTION control to the TIA A-B position.
b. Verify that the GATE annunciator is flashing and that the display is 0.000500 msec.
12. To check the TOT A B function:
a. Set the SEP/COM switch to the SEP position.
b. Rotate the Channel B TRIGGER LEVEL control maximum counterclockwise.
c. Set the FUNCTION control to the TOT A B position.
d. Press and release the RESET button on the Front Panel of the UUT.
e. Verify that all zeros are displayed.
f. Rotate the Channel B TRIGGER LEVEL control maximum clockwise.
g. Verify that a count begins to accumulate in the display and the GATE annunciator is lit.
Table 4-13. Fault Isolation Guide (F = function fails check)

| FREQ A | CPM X100A | RATIO A/B | PER A | PER A (R0) | PER AVG A | TI A-B | TI A-B (R0) | TIA A-B | TOT A-B | CHK |
|--------|-----------|----------|-------|------------|-----------|--------|-------------|---------|--------|-----|
| F | F | F | F | F | F | F | F | F | F | F |
**TROUBLESHOOT CIRCUITRY BELOW:**
- 100 MHz control, 10-100 MHz Mult. PCB
- Control Logic U42
- TIA Circuitry, Control Logic
- CPM Control & Timing Circuitry
- Channel B Input
- Channel A Input, Control Circuitry
- Decoding ROMs, Control Circuitry
- Time Base
- Channel A Input
- Control, Timing, Main Gate, Counter, Display
1. Display is 1/6 of correct value, U2 or U48.
2. Time Base or U26.
4-85. **Troubleshooting Techniques**
4-86. There are several techniques that can be used to isolate a fault in the instrument. The techniques are discussed below by type.
4-87. **CURRENT TRACING**
4-88. Current Tracer probes, such as the HP 547A, are usually the best way to locate shorts in the instrument. If the short is so bad that the power supply is loaded down, the Performance Checks or Fault Isolation Guide may not provide any help in isolating the faulty circuit. Starting at the output of the loaded power supply, logically move the Current Tracer through the instrument until the short is found. Sometimes the short is minor and is located between two or more logic gates as shown in Figure 4-12. The Current Tracer will glow brightest at the terminal of the shorted gate.
4-89. **HEAT AND COLD**
4-90. A fast and effective method of locating the faulty area in the instrument is by alternately heating and cooling areas in the instrument with a heat gun and freon spray. This check can be used on large areas or even individual components. IC's can open or short internally and this method of troubleshooting can be especially effective.
4-91. **LOGIC CLIP**
4-92. Logic clips, such as the John Fluke Testclip 200, provide the troubleshooter with visual indication of the logic levels in the instrument as the instrument operates. This test device is easier to use (it clips onto the IC) than such test equipment as an oscilloscope and allows all inputs and outputs to be observed simultaneously.
4-93. **TEMPERATURE**
4-94. Shorted components overheat. Temperature can be measured with the Fluke 80T-150 and any of its associated DMMs.

What can the hippocampal representation of environmental geometry tell us about Hebbian learning?
Colin Lever\textsuperscript{1}, Neil Burgess\textsuperscript{1,2}, Francesca Cacucci\textsuperscript{1}, Tom Hartley\textsuperscript{1,2}, John O’Keefe\textsuperscript{1,2}
\textsuperscript{1}Department of Anatomy and Developmental Biology,
\textsuperscript{2}Institute of Cognitive Neuroscience, University College London, Gower Street, London WC1E 6BT, UK
Received: 8 March 2002 / Accepted: 13 June 2002
Abstract. The importance of the hippocampus in spatial representation is well established. It is suggested that the rodent hippocampal network should provide an optimal substrate for the study of unsupervised Hebbian learning. We focus on the firing characteristics of hippocampal place cells in morphologically different environments. A hard-wired quantitative geometric model of individual place fields is reviewed and presented as the framework in which to understand the additional effects of synaptic plasticity. Existent models employing Hebbian learning are also reviewed. New information is presented regarding the dynamics of place field plasticity over short and long time scales in experiments using barriers and differently shaped walled environments. It is argued that aspects of the temporal dynamics of stability and plasticity in the hippocampal place cell representation both indicate modifications to, and inform the nature of, the synaptic plasticity in place cell models. Our results identify a potential neural basis for long-term incidental learning of environments and provide strong constraints for the way the unsupervised learning in cell assemblies envisaged by Hebb might occur within the hippocampus.
Key words: Hippocampus, place cell, remapping, space, neural network
1 Introduction
Hebb’s (1949) postulate regarding the creation of cell assemblies has come to be seen as the pre-eminent model of learning in neural systems. Here we examine the evidence for it in single-unit recordings from the hippocampi of freely moving rats. As we shall see, this paradigm provides a particularly appropriate testing ground for several reasons.
The current pre-eminence of Hebb’s postulate derives at least in part from the discovery of long term potentiation (LTP) (Bliss and Lomo 1973) in the hippocampus. This seems to provide a biological instantiation of Hebb’s rule in that roughly simultaneous pre-synaptic activity and post-synaptic activity (or at least depolarisation) produce a long-lasting increase in the efficacy of the synaptic connections.
By coincidence, at about the same time, one of the most striking behavioural correlates of neural activity was discovered, also in the hippocampus. The firing of ‘place cells’ in fields CA1 and CA3 of the hippocampus of freely moving rats appears to represent the current location of the animal: each one firing whenever the animal enters a restricted portion of its environment (O’Keefe and Dostrovsky 1971; O’Keefe 1976). The integrity of the hippocampus, and of some of the processes linked to LTP, have been shown to be required for spatial behaviours such as finding the hidden platform in a water maze (e.g. Morris et al. 1982; Davis et al. 1992; Steele and Morris 1999).
There is thus solid ground to suppose that LTP and place cells in the hippocampus form components of the neural basis of spatial behaviour of the rat. In addition, as O’Keefe and Nadel (1978) pointed out, Hebb’s graduate seminar at McGill emphasised incidental or latent learning as one of the few areas that pointed out the limitations of the behaviourist approach. Indeed, Hebb’s postulate concerns exactly this type of ‘unsupervised’ learning. Appropriately, latent learning has been demonstrated in aspects of rodent spatial behaviour (e.g. Blodgett 1929; Keith and McVety 1988; Harley 1979; Tolman 1932, 1948). Thus examination of changes to the hippocampal representation of space during the rat’s exploration of its environment should provide a good opportunity to observe Hebbian learning at the level of single cells in an ecologically valid situation.
We start by reviewing a model of the properties of place cells that does not invoke synaptic or cellular plasticity (O’Keefe and Burgess 1996; Burgess and O’Keefe 1996; Burgess et al. 2000; Hartley et al. 2000). We argue that quantitative definition of the basic system must be the first
step upon which an understanding of the additional effects of plasticity can be built. We also briefly discuss some of the existing models of the effects of synaptic plasticity on place cell firing. The main body of the article then examines in detail recent experimental data concerning the time course of plasticity in the place cell representation of environmental geometry under two experimental manipulations. The first concerns short-term changes following the introduction of a barrier into the environment, extending the findings of Muller and Kubie (1987), while the second concerns further analysis of the time course of plasticity in the representation of environments of different shape (Lever et al. 2002). These data provide constraints on any mechanisms of synaptic plasticity that might be at play in the place cell system.
2 Computational models of place cell firing
2.1 Introduction to computational models
In this section, we briefly review a model of the firing of place cells, specifically their spatial receptive fields or ‘place fields’, that requires no learning at all. We show that this adequately handles much of the data regarding the basic characteristics of place cell firing. As such, we argue that this model provides an appropriate framework into which to incorporate experimental data regarding the effects of learning, such as that presented in the main section of this paper. To provide a broader context for the consideration of place cell plasticity, we also briefly review some of the existent models of place cell firing that depend on synaptic plasticity. In this section we also introduce the additional concepts of pattern completion and pattern separation that have been respectively attributed to the recurrent collaterals in CA3 and the dentate gyrus by many authors.
2.2 A simple geometric model of place cell firing, without learning
A major motivation for not invoking some mechanism of learning is that, on a rat's very first exposure to an environment, many cells already have place fields in that environment (Hill 1978; Lever, Cacucci and O'Keefe, unpublished; Wilson and McNaughton 1993). The type of simple feed-forward model of place cell responses discussed here originates from the work of Zipser (1985). In Zipser's model, sensory details of the environment feed forward to 'landmark detectors' and hence to place cells. Each landmark detector is tuned to detect a particular aspect of the sensory scene (a 'location parameter') and simply performs a match between the stored state of the location parameter and its currently perceived state. A place cell's activity then corresponds to a thresholded sum of the strengths of the matches of the landmark detectors that connect to it. Location parameters included the retinal angle between two landmarks, on the assumption that place field size should scale proportionally with the size of the environment (in fact, the relevant experiment shows a much lower scaling factor; Muller and Kubie 1987).
Constraints on the functional form of the sensory input to place cells can be derived by systematically varying the shape and size of the rat’s environment while recording from the same place cells. In an experiment of this form, O’Keefe and Burgess (1996) showed a lawful pattern of firing across environments such that the location of peak firing often maintained a fixed distance to the nearest two walls, while the place field sometimes became stretched or bimodal in the larger environments. The pattern of firing was consistent with a thresholded linear sum of inputs tuned to respond to the presence of a boundary at a given distance along a given allocentric direction (Fig. 1). These inputs were labeled ‘boundary vector cells’ (BVCs) (Hartley et al. 2000; Burgess et al. 2000). Two aspects of the BVCs should be noted. First, the directions along which the postulated BVCs are oriented \((\phi_i)\) are independent of the orientation of the
rat and probably dependent on the head-direction system (e.g. Taube 1990, 1998) and the various orientation cues around the environment. Second, the sharpness of tuning of a BVC’s response is affected by the distance to which it is tuned ($d_i$), with sharper tuning to shorter distances. This means that boundaries near to a field will tend to provide the most powerful determinant of the field’s subsequent location.
More specifically, a given BVC (i.e. BVC $i$) is tuned to respond to the presence of a boundary at a given distance and allocentric direction ($d_i$, $\phi_i$). The response of BVC $i$ to a boundary element at distance $r$ in direction $\theta$, subtending angle $\delta \theta$, is:
$$\delta f_i = g_i(r, \theta) \delta \theta,$$
where
$$g_i(r, \theta) = \frac{\exp\left(-\frac{(r - d_i)^2}{2\sigma_r^2(d_i)}\right)}{\sqrt{2\pi \sigma_r^2(d_i)}} \times \frac{\exp\left(-\frac{(\theta - \phi_i)^2}{2\sigma_a^2}\right)}{\sqrt{2\pi \sigma_a^2}}.$$
The width of the radial tuning increases with the preferred distance $d_i$, i.e. $\sigma_r(d_i) = \sigma_o(1 + d_i/\beta)$. The firing rate of a place cell with $n$ BVC inputs is then:
$$F(x) = AH\left(\sum_{i=1}^{n} \int_0^{2\pi} g_i(r(\theta), \theta)\, d\theta - T\right),$$

where $T$ is the threshold, $A$ determines the amplitude, $H$ is the threshold-linear function (i.e. $H(u) = u$ if $u > 0$, $H(u) = 0$ otherwise) and $x$ is the location of the rat, which (together with the geometry of the environment) determines the locus $r(\theta)$ of the boundary from the rat.
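To make the model concrete, the following C++ sketch evaluates a place cell's rate from the equations above. The parameter values are illustrative, not the fitted values of Hartley et al. (2000), and angular wrap-around is omitted for brevity.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

const double PI = 3.141592653589793;

struct BVC { double d, phi; };  // preferred distance and allocentric direction

// Tuning constants (illustrative values only)
const double sigma0 = 0.08;     // radial width at zero distance
const double beta   = 12.0;     // rate of radial broadening with distance
const double sigmaA = 0.2;      // angular width (rad)

double gauss(double x, double mu, double s)
{
    return std::exp(-(x - mu) * (x - mu) / (2 * s * s))
         / std::sqrt(2 * PI * s * s);
}

// g_i(r, theta): response density of one BVC to a boundary element
double g(const BVC& b, double r, double theta)
{
    double sigmaR = sigma0 * (1.0 + b.d / beta);  // sharper tuning at short d
    return gauss(r, b.d, sigmaR) * gauss(theta, b.phi, sigmaA);
}

// F(x): thresholded linear sum over n BVC inputs. boundary[k] holds the
// distance r(theta_k) from the rat to the boundary along direction
// theta_k = k * dtheta, so the integral becomes a sum over elements.
double placeCellRate(const std::vector<BVC>& bvcs,
                     const std::vector<double>& boundary,
                     double A, double T)
{
    double dtheta = 2 * PI / boundary.size();
    double sum = 0.0;
    for (const BVC& b : bvcs)
        for (std::size_t k = 0; k < boundary.size(); ++k)
            sum += g(b, boundary[k], k * dtheta) * dtheta;
    double u = sum - T;
    return u > 0 ? A * u : 0.0;  // threshold-linear output
}
```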
Hartley et al. (2000) showed that the characteristic properties of populations of place cells could be captured by the above model by fixing $A$ and $T$ and simply assuming that each place cell receives a random sample of BVCs. The place field properties modelled included the distributions of the numbers of fields and their shapes, sizes and orientations across the four environments used by Burgess and O’Keefe (1996). Beyond this, Hartley et al. (2000) showed that a place cell’s firing pattern across different environments could be modelled by choosing the appropriate BVCs and threshold $T$. For most cells, a reasonable fit could be obtained using no more than four BVCs along orthogonal directions, corresponding to only six degrees of freedom in the model (their overall orientation, the four distances and the threshold). Such a model can then be used to predict the pattern of firing in an environment of novel shape (Fig. 2). The model is also consistent with the patterns of firing of place cells recorded on a linear track whose length is varied systematically (Gothard et al. 1996).
2.3 Learning in models of place cell firing
Models of place cell firing that focus on synaptic plasticity fall into three categories according to the principal reason for which plasticity is required: directionally modulated firing of place cells, asymmetric expansion of place fields and stability and remapping in the place field representation. We briefly discuss each of these categories but before doing so we provide a brief introduction to the concepts of continuous attractors and pattern completion, as applied to the recurrent collaterals in CA3, and of pattern separation as applied to the dentate gyrus. These concepts are important for interpretation of the computational and experimental studies of stability and remapping and are not included in the simple geometrical model discussed above.
**Fig. 2.A,B.** Predictions from the geometric place field model. Having fitted the data for a given place cell in several different environments we can use the same set of BVCs to predict the behaviour of that cell in any novel environment. In this example, data has been fitted for a cell based on its firing in the four environments shown in Fig. 1 (square, circle, large square, diamond). A) shows the predicted place fields for the cell fitted in Fig. 1 in a right angled triangular box in two different orientations relative to distal cues in the laboratory. Experimental data from the same cell are shown for comparison. B) shows the predicted place field for two environments (standard square and large square) in which a barrier has been placed. The model predicts an additional second field will appear north of the barrier in both the large square (left) and standard square (right). Experimental data from this cell in the corresponding environments are shown in Figs. 3D and 4A (Cell 2). Adapted from Hartley et al. (2000)
2.3.1 Recurrent collaterals, pattern separation, pattern completion and continuous attractors. The recurrent collaterals in region CA3 and the dentate gyrus whose cells project into this region comprise the two anatomical features of the hippocampus most noticeably absent from the simple geometric model. These two features have been the focus of models of the hippocampus as an associative memory device since the seminal contribution of Marr (1971). Many authors have drawn attention to the possibility that the extensive recurrent connections between pyramidal cells in CA3 could support an auto-associative memory (Marr 1971; McNaughton and Morris 1987; McNaughton and Nadel 1990; Treves and Rolls 1992; McClelland et al. 1995), see also (Kohonen 1972; Gardner-Medwin 1976; Hopfield 1985). In these systems, based on Hebbian learning in the recurrent connections, a partial cue can produce retrieval of an entire stored representation, a process referred to as 'pattern completion'. Interference between similar stored representations can be a problem in these systems, such that performance is improved when non-overlapping representations are to be stored (e.g. Amit 1990). For this reason, it has also been proposed that the dentate gyrus serves to ensure that even similar cortical inputs to the hippocampus are stored as non-overlapping representations in CA3 (Marr 1971; McNaughton and Nadel 1990; Treves and Rolls 1992; McClelland et al. 1995). This is achieved by generating an intermediate, highly sparse, representation of the cortical input using the very large number of cells in the dentate gyrus (which contains an order of magnitude more neurons than either entorhinal cortex or CA3). This process is referred to as 'pattern separation'. Much evidence points to the presence of processes akin to pattern completion. For example, it has long been known that representation of only a subset of the original cues present during training may be sufficient to maintain normal place field firing (e.g. O'Keefe and Conway 1978; O'Keefe and Speakman 1987; Quirk et al. 1990). There is now evidence that this property depends on CA3 NMDA receptors (Nakazawa et al. 2002). The primary direct evidence for pattern separation concerns experiments in which the place cell representation appears to 're-map' entirely between reasonably similar environments. We consider the details of these data in the following sections.
Many computational models of remapping have concerned a class of models of the hippocampus that use the recurrent collaterals in region CA3 to support ‘continuous attractor’ states in the activity of CA3 place cells (Zhang 1996). In these models, each place cell is assumed to have a preferred location in the environment and to fire according to the rat’s proximity to this location. The recurrent connections are arranged so that the strength of the connection between two place cells is a simple (increasing) function of the proximity of their preferred locations, a situation that might be created by unsupervised Hebbian learning during exploration (Muller et al. 1996; Muller and Stead 1996). This causes patterns of firing consistent with the rat being in a single location to form attractor states, and allows for smooth transitions between the representations of neighbouring locations.
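For illustration (our notation, not the specific form used in the cited models), such a weight profile is often taken to be a decreasing function of the separation between the cells' preferred locations $x_i$ and $x_j$, for example a Gaussian with uniform inhibition:

$$w_{ij} = w_0 \exp\!\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right) - c,$$

where $\sigma$ sets the spatial scale of the attractor bump and the constant $c$ provides global competition.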
2.3.2 Directionality of place cell firing. When rats move in an unconstrained manner through an open environment, the spatially specific firing of place cells is independent of the animal’s orientation (Muller et al. 1994; O’Keefe 1976). However, when the rat is constrained to run on a linear track or a narrow-armed maze, responses become direction dependent. In other words, a given cell will fire when the rat runs through the place field in a specific direction but not when it runs through it in the reverse direction (McNaughton et al. 1983; Muller et al. 1994; O’Keefe and Recce 1993).
To model the directionality of place fields, Sharp (1991) extended the simple feed-forward model of Zipser (1985) by adding an element of 'competitive learning' (Rumelhart and Zipser 1986). Her model consists of a layer of inputs projecting to an intermediate layer (entorhinal cortex), which in turn projects to a layer of place cells. Inputs correspond to one of two types of information regarding environmental landmarks (representing their distances and angles relative to the rat's heading direction). To simulate competitive learning, neurons in the two processing layers are divided into groups dominated by lateral inhibition such that only one 'winner' can be active at a time. Hebbian learning is then applied to the initially random connection strengths such that connections to the active neuron in each group from active neurons in the preceding layer are strengthened. However, an important modification must be made to the Hebbian learning rule for the algorithm to work: the net strength of the connections to each neuron must be normalised (i.e. kept equal and constant over time), to ensure that several neurons in each group can 'win' according to the pattern of input to the group. This learning rule results in specific neurons coming to respond to a particular pattern of input, or to patterns similar to it.
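In a standard formulation of such normalised competitive learning (our notation; Sharp's implementation may differ in detail), the update for the winning neuron $i$ with presynaptic activities $x_j$ is:

$$w_{ij} \leftarrow \frac{w_{ij} + \eta\, x_j}{\sum_k \left( w_{ik} + \eta\, x_k \right)},$$

so that the net input strength $\sum_k w_{ik}$ remains constant while weight is redistributed towards the currently active inputs.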
The orientation dependence of half of the inputs in Sharp's model means that initial responses in the place cell layer are modulated by the rat's orientation as well as by its location. If the rat subsequently moves through a place field solely along restricted directions, then the response remains directionally modulated. However, if the rat subsequently moves through the place field in diverse directions, the cell learns to respond to a broader and broader selection of directions at that location, eventually showing directionally independent firing. As a variant on this model, models with similar behaviour have been proposed in which direction-independence arises as a result of unsupervised Hebbian learning in recurrent collaterals in the place cell layer (Brunel and Trullier 1998; Kali and Dayan 2000). The main argument against the way all of these models incorporate learning is that, as far as we can tell, place cell firing is initially direction-independent, and subsequently becomes direction-dependent under conditions of constrained motion (O'Keefe, unpublished observation).
2.3.3 Experience-dependent increases in the firing field. The phenomenon of LTP shows an interesting variation from a simple dependence on coincident pre- and post-synaptic activity, with preferential induction when presynaptic activity precedes postsynaptic activity (Gustafsson and Wigstrom 1986; Levy and Steward 1983; Markram, this volume). Perhaps the best evidence for short-term experience-dependent changes in place cell firing is related to this phenomenon and comes from experiments by Mehta et al. (1997, 2000). They found that the spatial distribution of a place cell's firing rate on a linear track becomes more asymmetrical during the first few runs of the rat along the track, tending to fire at a lower rate on entry to the place field than on leaving it. They suggest that the temporal asymmetry of LTP, acting on the CA3 to CA1 pathway, causes this effect by strengthening the connections from place cells firing earlier on the track to those firing later on the track on a given run. This would cause the cells firing later on the track to begin firing earlier on it after learning. Other similar models have implicated asymmetric Hebbian learning in the recurrent connections within CA3. Importantly, this experience-dependent asymmetry has been shown to be dependent on the NMDA receptor and thus linked to LTP (Ekstrom et al. 2001). However, one aspect of this phenomenon that differentiates it from the Hebbian idea of the long-term formation of cell assemblies is that it appears to reset each day, despite the occurrence of many runs during the day.
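A common formalisation of such temporally asymmetric plasticity (a generic spike-timing-dependent learning window, not the specific rule used in the models above) is:

$$\Delta w = \begin{cases} A_+\, e^{-\Delta t/\tau_+}, & \Delta t > 0 \\ -A_-\, e^{\Delta t/\tau_-}, & \Delta t < 0 \end{cases} \qquad \Delta t = t_{\text{post}} - t_{\text{pre}},$$

so that inputs firing shortly before the postsynaptic cell are potentiated, producing exactly the backward shift of firing fields described above.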
2.3.4 Stability or remapping in the place cell representation. Initial experiments in which place cells were recorded in environments of different shape (Muller and Kubie 1987) reported completely different patterns of firing, or 'remapping', in the two environments. A place cell active in one environment might fire in an unrelated location in the second environment or might be silent. Interestingly, this phenomenon appears to be at odds with the experiments on which the simple geometric model was based, which showed systematic regularities in the place fields recorded in environments of different shape (O'Keefe and Burgess 1996). As discussed below, the remapping likely results from a process of experience-dependent plasticity over the two-week or longer training period that was used in the experiments by Muller and Kubie (1987) but not by O'Keefe and Burgess (1996) (Sect. 4). Furthermore, this learned discrimination of the place cell representation of environments of different shape has been shown to last for several weeks (Lever et al. 2002). A final experimental finding pertinent to this discussion is the observation that the day-to-day stability of place fields within a constant environment has been shown to depend on the NMDA receptor and thus to be linked to LTP (Kentros et al. 1998).
Samsonovich and McNaughton’s (1997) model of the place cell representation of space showed that a large number of independent continuous attractor representations (or ‘charts’) could be supported by the recurrent CA3 network. In their model, as animals explore an environment, local view representations become bound by Hebbian learning to specific places in that environment’s chart. They suggested that these charts were preconfigured (i.e. hard-wired) in CA3, with specific charts becoming associated with given environments as and when necessary. Within this model, remapping corresponds to switching between different charts. Notice that the distances between place fields are fixed by the preconfigured recurrent connections, a condition seemingly at odds with the plastic and multimodal fields seen after environmental manipulations (O’Keefe and Burgess 1996; Gothard et al. 1996). For the chart model to accommodate the occurrence of a bimodal field under these circumstances, each lobe of the now bimodal place field must exist on a different chart with resetting or switching between charts occurring while the rat moves from one lobe to the other (McNaughton 1996). Alternatively, the model can be set up so that feed-forward input of the form suggested by the geometrical model dominates any recurrent input (Samsonovich and McNaughton 1997).
While Samsonovich and McNaughton's (1997) chart model can be shown to support separate (remapped) environmental representations, some interesting issues arise regarding the nature of the plasticity that would be required to allow these representations to develop during exploration rather than simply being somehow preconfigured. Kali and Dayan (2000) showed that, if serial exploration is simulated, simple Hebbian learning is not sufficient to form a well-behaved continuous attractor representation, i.e. one in which all locations are equally stable. An additional mechanism is required such that synaptic plasticity is modulated by novelty, preventing the attractors for frequently visited locations from becoming deeper than those for less well visited locations. A plausible mechanism for this involves novelty being detected by mismatch between a retrieved CA3 representation and a CA1 representation activated directly from entorhinal cortex. The level of novelty (mismatch) would then control the release of acetylcholine into the hippocampus from the medial septum, which in turn would modulate synaptic plasticity (Hasselmo et al. 1996). Interestingly, Kali and Dayan (2000) also argued that novelty-mediated learning was sufficient to allow distinct representations to develop for different environments, partially distinct representations for related environments and similar representations for environments differing only geometrically (as in O'Keefe and Burgess 1996; Gothard et al. 1996). However, to get place fields to maintain fixed distances to environmental boundaries during geometrical manipulations of the environment (as opposed to maintaining a fixed ratio of distances between boundaries) requires feed-forward inputs of the BVC form to dominate over recurrent connections, as with Samsonovich and McNaughton's (1997) model.
An alternative analysis of the development of remapping was performed by Fuhs and Touretzky (2000). Of direct relevance to our own work described in Sect. 4 (Lever et al. 2002), Fuhs and Touretzky adopted a partial, or gradual, remapping approach to map separation, suggested by one study (Tanila et al. 1997) but not others (e.g. Bostock et al. 1991). Their partial remapping arises from the combined action of pattern completion and pattern separation mechanisms. Phenomenologically, they focused on what may turn out to be a common expression of remapping in individual place cells, namely that a cell continues to fire in the environment in which it fires most strongly and stops firing in the other. In contrast to the recurrent models previously discussed, they examined whether plasticity in the perforant path projection from entorhinal cortex to CA3 place cells could support this type of behaviour. Interestingly, changing connection weights according to the product of pre- and postsynaptic activation (i.e. ‘Hebbian’ learning), and/or to their covariance, did not produce the required behaviour. Hebbian learning strengthens a cell’s firing in both environments, so a place cell that fires strongly in one environment and weakly in the other never loses its weak field. Under covariance learning, exposure to the second environment leads to loss of the place cell representation of the first environment (in cells with both high and low firing rates). However, the BCM learning rule (Bienenstock et al. 1982) did produce the desired result: strong firing remains stable and weak firing declines with experience. The critical aspects of the BCM rule are that synaptic modification depends on pre- and postsynaptic activity (avoiding interference where different inputs are active in the two environments) and that the direction of modification (increase or decrease) depends on the strength of the postsynaptic activity.
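The contrast between these rules is easy to demonstrate in code. The sketch below is our illustration of the qualitative result, not Fuhs and Touretzky’s model: a single linear-threshold cell receives one input pattern per environment, and the BCM threshold tracks the recent mean squared postsynaptic rate. The patterns, rates, and constants are assumptions chosen for clarity.

```python
import numpy as np

def bcm_step(w, pre, theta, lr=0.02, tau=200.0):
    """One BCM update (Bienenstock et al. 1982): the sign of the weight change
    is set by the postsynaptic rate relative to the sliding threshold theta."""
    post = max(w @ pre, 0.0)
    w = w + lr * pre * post * (post - theta)    # LTD below theta, LTP above
    theta = theta + (post**2 - theta) / tau     # sliding modification threshold
    return w, theta

# Environment A drives the cell strongly, environment B weakly (assumed patterns).
x_A, x_B = np.array([1.0, 0.3]), np.array([0.0, 1.0])
w, theta = np.array([1.0, 0.5]), 1.0
rng = np.random.default_rng(0)
for _ in range(20000):                          # interleaved visits to A and B
    w, theta = bcm_step(w, x_A if rng.random() < 0.5 else x_B, theta)

print("response in A:", max(w @ x_A, 0.0))      # strong firing survives
print("response in B:", max(w @ x_B, 0.0))      # weak firing decays toward zero
```

Firing above the sliding threshold is potentiated and firing below it is depressed, so the strong field is retained while the weak one fades with experience, which is exactly the behaviour that plain Hebbian and covariance rules fail to produce.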
Models implicating plasticity in providing a stable representation of an environment are supported by some recent studies. Rats and mice which are old (Barnes et al. 1997), or whose candidate molecular plasticity machinery (e.g. the NMDA receptor) is compromised (Kentros et al. 1998; Rotenberg et al. 1996, 2000), can show different patterns of firing upon exposure to a previously experienced environment (but see also McHugh et al. 1996). Although various models would interpret, say, an old rat’s remapping of a familiar environment quite differently, and identify the assumed plasticity in correspondingly different projections (Barnes et al. 1997; Redish and Touretzky 1999), a consensus is emerging that the place cell’s daily stability in familiar environments is plasticity dependent. These models are also consistent with studies relating plasticity to pattern completion within an environmental representation (Nakazawa et al. 2002). There have been hints throughout the place cell literature that alterations in a familiar environment sometimes cause firing rate reduction in many cells in the altered environment, as though there were a learned template firing pattern. This was specifically studied by Fenton and Muller (2000), with a clear result: in a circle with two cue cards on the internal wall, firing rates were significantly lower in the two manipulations in which the cards were moved further apart or closer together than in the standard trained configuration.
We return to the subject of bimodal place fields and attractor charts in Sect. 3 “Experiments with barriers – Predictions and limitations of the geometric model”. We return to stability and remapping in Sect. 4 “Long-term memory and incidental learning of environmental geometry”. As we shall see, the simple geometric model needs to be extended to include plasticity.
3 Experiments with barriers – predictions and limitations of the geometric model
3.1 Introduction to barrier experiments
In this section we describe some experimental place cell data which highlight some of the strengths and limitations of the model described above. We stress at the outset that these preliminary data are not quantitative and were collected in rats with diverse experience, though sharing a common exposure to square and circular environments formed from a deformable walled environment (the “morph box”, further described in Lever et al. 2002). We introduce this topic by noting that experimenting with barriers formed an important part of the seminal studies by Muller and Kubie (1987) in the cue-controlled circular-walled environment. Muller and Kubie used barriers to suggest the “kinematic hypothesis” of place cell firing. Their key finding was that the placement of a barrier within the place field tended to severely reduce the amount of firing in the field. Figure 3A, B schematises this finding and shows a clear replication of this effect for a cell firing in the square, with further demonstration of its reproducibility. We emphasise that the powerful rate-reduction effect exerted by the barrier can occur when the barrier is not directly within the place field but at some distance from it (trials 1, 3, 6, 8), as well as when the barrier is in the field centre (first half of trial 9). Such effects on place fields can be understood within the framework of our model because the introduction of the barrier “occludes” more distant walls and thus prevents BVC inputs in those directions from contributing to firing at the original location of the field. We would not expect an overall decrease in firing rates (across a population of cells), as the introduction of the barrier will also produce additional firing in some cells.
Of particular interest here is the prediction that the insertion of a barrier will often induce a new second field at a predictable location. This happens because the barrier acts as a boundary and thus produces additional input to the place cells at the new location (Burgess et al. 2000; Hartley et al. 2000). Sometimes this additional input, when combined with the other inputs to the cell, will be sufficient to exceed its firing threshold and thus produce a second field (typically near to the barrier). To our knowledge, such effects had not been suggested before the development of our model. In the top two rows of Fig. 3C, one can see that the insertion of an east-west barrier can create a second field close to, and north of, the barrier, as if the barrier were acting as an additional south wall (Fig. 3C, bottom row). The model also predicts that some cells will not fire north of the barrier where the additional input is not sufficient to drive the cell beyond its firing threshold (Fig. 3C, top row) and that some previously silent cells will start to fire because the cell only receives sufficient input to fire when the barrier is added (not shown).
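The logic of this prediction can be illustrated with a one-dimensional caricature of the BVC model (the full two-dimensional treatment is given in Hartley et al. 2000). The sketch below is our simplification, with illustrative tuning values and threshold: a place cell is driven by a single BVC tuned to a boundary roughly 8 cm to the south, and an inserted barrier becomes the nearest southward boundary for all locations north of it.

```python
import numpy as np

L = 62.0                                   # box side (cm), as for the standard square
y = np.linspace(0.0, L, 125)               # positions along the south-north axis

def bvc(dist, d_pref=8.0, sigma=6.0):
    """Gaussian response of a BVC tuned to a boundary at distance d_pref."""
    return np.exp(-(dist - d_pref) ** 2 / (2 * sigma**2))

def field(y, barrier=None):
    d_south = y.copy()                     # distance to the nearest boundary to the south
    if barrier is not None:
        north = y > barrier
        d_south[north] = y[north] - barrier    # the barrier occludes the south wall
    return np.maximum(bvc(d_south) - 0.5, 0.0)  # thresholded firing rate

f0 = field(y)                              # one field ~8 cm north of the south wall
f1 = field(y, barrier=31.0)                # adds a second field ~8 cm north of the barrier
print(y[f0 > 0].min(), y[f0 > 0].max())    # extent of the original field
print(y[f1 > 0].min(), y[f1 > 0].max())    # range now spans both fields
```

A cell whose summed input north of the barrier remains below threshold shows no second field (as for Cell 1 in Fig. 3D, below), and a previously silent cell can be pushed above threshold only when the barrier’s input is added.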
If place cells provide the rat’s signal of its allocentric location within an environment, it might plausibly be reasoned that the hippocampal network will not tolerate too many cells firing in two distinct positions. A cell
continuing to fire as in the bottom row of Fig. 3C is highly ambiguous, and the network’s “correction” of such firing may well give us important clues about the system as a whole. The paradigm may open a window into the network, to see if, for instance, it obeys attractor dynamics. The next section describes some preliminary work illustrating this line of approach.
3.2 Stable barrier-related firing in place cells
The basic methodology behind the results described in Sects. 3 and 4 of this paper is the same. Briefly, recording took place within a black-curtained, circular testing arena. Walled enclosures, such as the standard 62-cm-sided, 50-cm-high square box, or the larger 93-cm-sided, 50-cm-high square box, were placed inside the testing arena such that the centre of each box always had the same location in the arena. Between trials, rats were placed on a holding platform outside the curtained testing arena. We actively encouraged directional constancy with the use of stably positioned cue cards and procedures standardising the way the rats were led into the testing arena. To encourage the rats to walk around and provide even coverage of the boxes’ two-dimensional surface, sweetened rice was randomly thrown into the given box at about 30-s intervals. In this section, we describe results where, upon finding place cells firing along the south wall of the standard square, we tested whether a second field was created by the insertion of an east-west barrier within the square.
Cell 1 in Fig. 3D shows the simplest type of result, where the insertion of the barrier in the large and the small square does not induce a second field. We note that cells of this type should be relatively rare according to the geometric model, and finding a high proportion of them would constitute evidence against it. Cell 2 in Fig. 3D shows another type of result, simultaneously recorded from the same animal. Here the barrier reliably induces a second field in the expected region in the large square (trial 2); this effect is also seen when tested two days later in the small square (trial 4) and is reproduced on a later trial the same day (trial 6). The firing in the square without a barrier insertion is unaffected. We point out that the trials labeled 4–7 in Fig. 3D and trials 1–4 in Fig. 3B are the same trials. Accordingly, in the same cell ensemble, we can simultaneously observe rate inhibition, second field induction, and no obvious change effected by the barrier’s insertion. This rules out interpretations based on global changes. In contrast to Cell 1, the behaviour of Cell 2 corresponds well to the geometric model. In fact, the geometric model is particularly well supported by this example, as the barrier was made of a different substance (old wood), with a different texture, visual appearance, and smell from the rest of the environment.
For models in which place fields must have fixed relative locations within a given representation (Samsonovich and McNaughton 1997), cells that develop sub-fields following some environmental manipulation require elaborate explanation. One possibility is that each sub-field exists on a different chart, or that charts are somehow reset\(^1\) between each manifestation of activity within a given sub-field. Thus it is important to ask whether the firing in two locations for Cell 2 represents a genuinely bimodal field. It could be that the firing rate map averaged over the whole trial is misleading and that only one field is really operative for half the trial, then stops firing, while the other starts. One might even imagine that the two fields oscillate every two minutes, and so on. Of course, the fields cannot literally be examined simultaneously, since the rat cannot be in two places at once. Given natural coverage of the environments, however, can it be shown that there is firing in both fields at the shortest measurable intervals?
In Fig. 4, we address these issues and show that the fields can be genuinely bimodal. Figure 4 presents data from Cell 2 of Fig. 3D, showing segments from Trial 2 in parts A, B, C and from Trial 4 in parts D, E. Parts A and D present firing rate maps from the first and second half of the respective trials, showing substantial firing in both locations for each trial half. Parts B, C, and E present firing rate maps for short, equal time slices of 7 or 10 s, and show firing in both locations at the shortest natural intervals. To show genuinely bimodal firing, one needs to show firing first in one field, then in the other, and then again in the first field, as quickly as the rat can move between them. Parts B and C show two examples of three alternating firing episodes, each occurring within 25 s. Part E shows many such alternating firing episodes from Trial 4. For this cell, firing in the two locations occurs as close together in time as one could hope to observe in a freely moving rat.
3.3 Plasticity in barrier-related firing
We now consider plastic changes in the place cell representation following barrier insertion. Obviously, these changes will not be consistent with the simple geometrical model which is restricted to firing patterns that remain static over time given an unchanging environment.
Figure 5 shows a place cell (Cell 3) whose response to the barrier changes rapidly within a single 9-minute trial. Its original field is against the south wall (Fig. 5A, Trial 1). Introduction of the barrier quickly induces a second, duplicate field above the barrier (Trial 2, 0–180 s). The original field is gradually eliminated during Trial 2 (0–180 s, 180–360 s, 360–540 s) and does not return in the subsequent trial (Trial 3, 0–270 s, 270–540 s). Removal of the barrier in the latter part of Trial 3 (540–900 s) reinstates the original field configuration, which continues stably through the next control trial (Trial 4). The time slices from Trial 2 (Fig. 5B) show the kind of bimodal firing pattern seen in Cell 2. The time slices from Trial 3 (Fig. 5C) show that firing is now restricted to the barrier field, despite frequent visits to the original wall field (e.g. in 6/10 time slices from Trial 3).
Cell 4 (Fig. 5D) is another example of a short-lived field; in this case the second, duplicate field is transitory. This cell fired consistently in the southwest of the square (Fig. 5D, Trials 1–4). However, closer examination of the temporal firing over Trial 2 showed that the cell fired during the first 160 seconds in the expected duplicate field location above the barrier (albeit at a reduced rate of 2.1 Hz) but did not fire in this location thereafter. The absence of firing above the barrier was then maintained in a subsequent trial (Trial 3), and normal firing resumed in the standard square (Trial 4).
These examples suggest an interesting line of approach by using barrier-type experiments to examine relatively rapid incidental learning in the hippocampal network. It is unclear whether such rapid plasticity is stable in the long-term. The next section describes an experiment in which we have used square and circular enclosures to demonstrate plasticity in the representation of these environments that is both transferable and stable in the long term.
---
\(^1\)One suggestion (McNaughton 1996) was that path integrative inputs are reset by an event such as the rat bumping into a wall.
4 Long-term memory and incidental learning of environmental geometry
4.1 Rationale and description of a study (Lever et al. 2002) showing incremental remapping
In this section we summarise and comment on a recent study (Lever et al. 2002) that tested place cells in geometrically different environments. We also present further details, where appropriate, which were not given in that study. The rationale for this study is as follows. Previous work in this laboratory had shown similarities in place fields across various rectangular walled environments, such as different sized squares and differently oriented rectangles (O’Keefe and Burgess 1996). Previous studies in the Muller-Kubie laboratory involving the comparison of place cell firing in rectangular and circular environments had shown clear evidence of dissimilar representations between these two types of environments (Muller and Kubie 1987; Quirk et al. 1992). In contrast to these studies, our experience (Lever et al. 2002) showed that place fields were found to be “homotopic” when tested in circles and squares, i.e. in corresponding locations in both shapes. Various environmental manipulations, such as wall translation, wall removal, and reconfiguration of the walls into shapes other than circles and squares, showed that the similarity of the firing patterns was determined by the box walls, not by the identical square and circle room locations. We reasoned that the apparent contradiction with the results of Muller and Kubie (1987) might be due to the fact that their animals had received considerable pre-training whereas ours were naïve. To test if experience was a critical factor, we recorded from a new group of animals, for up to three weeks, on successive days from first exposures. In some cases we followed individual cells for over a week. The entire duration of the animals’ experience in the two shaped environments was recorded (Lever et al. 2002).

Fig. 5A–D. Plasticity seen in firing patterns in two place cells (Cell 3: A, B, C; Cell 4: D) after induction of a new second field by the insertion of a barrier into the square. **A)** Plasticity in place Cell 3. Left-hand column shows firing rate maps for Trials 1 to 4, each map averaged over the whole trial. Firing rate maps for the 1st, 2nd, and last third of Trial 2 are shown to the right of the whole Trial 2 map. Firing rate maps for the 1st and 2nd half of the barrier-in-place portion of Trial 3, and the portion of Trial 3 where the barrier has been removed, are shown to the right of the whole Trial 3 map. Note that the southern, original field gradually declines in strength (Trial 2: 0–180 s, 180–360 s, 360–540 s segments) and then disappears reliably (Trial 3: 0–270 s, 270–540 s segments). When the barrier is removed (Trial 3: 540 s) the original field returns (Trial 3: 540–900 s segment, and Trial 4). **B)** Bimodal firing in Cell 3 occurs over the shortest feasible time intervals. A 50-s segment of bimodal firing from Cell 3 in the early part of Trial 2, segmented further into five 10-second time slices, is shown. **C)** The unimodal firing seen when the barrier is present in Trial 3 is not due to lack of sampling of the location of the original place field. A 200-s segment of firing from Cell 3 in Trial 3 showing the place field in the new, northern region only, segmented further into ten 20-second time slices. The difference in firing between **B** and **C** is attributed to rapid hippocampal plasticity. **D)** Plasticity in place Cell 4. Trials 1 to 4 show firing rate maps averaged over the whole trial. To the right are shown firing rate maps for the 1st third (0–160 s segment) and last two-thirds (160–480 s segment) of Trial 2. Note that initially a new second field is created above the barrier in the predicted region, albeit at a reduced rate. This disappears after about 3 min. The absence of firing above the barrier is maintained in a subsequent trial (Trial 3), and normal firing resumes in the standard square (Trial 4).
Replicating our first experiment, place cell firing on initial exposures to the circles and squares was highly similar. Gradually, however, with increased experience in these environments, the place cell firing patterns became divergent across the two shapes (but not between environments of the same shape). In other words, the evolution of this shape-specific remapping over time could be observed. In later trials, we found that many cells fired in shape-specific patterns. For instance, a cell might fire in one shape only (monotopic) or in different locations in the two shapes (heterotopic: e.g. in the centre of the circle, but in the north-west of the square). Two further aspects of the phenomenon were explored, namely, transfer and long-term stability, in an attempt to relate these findings to spatial learning and memory. First, it was found that the cells’ geometrically-tuned responses showed good generalisation from circles and squares made of one kind of material to those of another. Second, after delays of about a month, the firing patterns across shapes remained highly divergent, suggesting that remapping was permanent. We interpret these results as identifying a potential neural basis for hippocampal long-term memory of environments.
It is important to re-emphasise that such learning occurs independently of explicit reward. There is nothing in the rice-throwing procedure used to encourage active exploration of the boxes which would reinforce the development of different representations of the two differently-shaped environments. The animals are *not* trained to differentiate the square and the circle. Accordingly, we believe the paradigm provides a good example of incidental learning and memory. This may be particularly useful as this unreinforced type of learning is often emphasised in theories of hippocampal function (Cohen and Eichenbaum 1993; Morris and Frey 1997; O’Keefe and Nadel 1978).
4.2 Individual cells – can we identify different hippocampal mechanisms involved in incremental remapping?
This section describes data from two individual place cells in enough detail to suggest that there may be different hippocampal mechanisms involved in incremental remapping. First, however, we need to address the question: is the incremental remapping seen in Lever et al. (2002) effected by specifically hippocampal synaptic plasticity? While we cannot be certain that the learning site(s) is hippocampal, the hippocampus is the most compelling possibility. An interesting study which partly motivated Lever et al. (2002) found that cells in the superficial layers of entorhinal cortex, which provide the dominant projection to the hippocampus, showed inter-shape similarity in exactly those circumstances (circles and squares similar to ours) in which hippocampal cells showed divergence (Quirk et al. 1992). This permits some confidence that hippocampal processes are responsible for incremental remapping. Nevertheless, future work in our laboratory will address the issue of learning sites.
We now turn to remapping in individual cells and discuss two examples of remapping place cells while focusing on the transition from similar to divergent firing patterns. The first cell (Fig. 6) becomes gradually heterotopic by developing a second field in the square, and then losing the original homotopic field (Fig. 6A). By day 21 the cell had established a heterotopic firing pattern with a northern field in the circle and a southeastern field in the square environment. This pattern remained stable during subsequent testing and generalised to shapes of different material (not shown). During the transition period both fields were evident in the square environment (see day 20, Trials D and F). These trials are examined in detail in Fig. 6B, C. At the beginning of each trial the cell fired in the homotopic location, followed by firing in both locations, and ending at the divergent location alone (see Lever et al. 2002 for alternative time slicing). The cell shows a kind of “two-steps forward, one-step back” process. Although varying interpretations are possible, the data appear to suggest that two processes with different time scales are involved. We take up the implications of this after consideration of the second remapping place cell.
The second remapping cell, rather than changing field position, ceases firing in the square while continuing to fire in the circle (Fig. 7A). The top row shows data recorded in the circular environment transformed into the square form (see Lever et al. 2002 for details) to permit a direct comparison with the corresponding square trials (bottom row). From day 16 onwards, the cell fires in the circle only (top row, D16–D20). We have previously emphasised the gradual decline in firing in the square on days 13 to 15. Firing rate peaks in the square are similar to those in the circle on day 13, about half those in the circle on day 14, and less than a quarter of those in the circle on day 15 (Lever et al. 2002, Fig. 3d). Figure 7B presents further details from a crucial phase of this transition period, comprising the last two trials of day 14 and all six trials of day 15. What we wish to draw attention to here is the remarkably clear evidence for a “two-steps forward, one-step back” process during this phase. In Fig. 7B, firing rate maps are shown for the first, middle, and last third of each trial. Thus we can consider the *within-trial* and *across-trial* dynamics. The across-trial data show a clear incremental decrease in firing rates in the square. The striking feature of the within-trial dynamics is that firing is consistently *much higher at the beginning of each square trial* than at its end, while there is no such relationship in the circle, where firing is roughly constant. This pattern is reminiscent of the cell in Fig. 6. For both cells, the firing at the beginning of the square trial is similar to that in the circle, then becomes more dissimilar as the square trial proceeds. The degree of divergence reached at the end of a square trial is not obtained at the beginning of the next. Note that firing in the first 60 seconds in the square, on trials f of day 14 and b of day 15 but not on trials d and f of day 15 (Fig. 7B, insets to right), is clearly comparable to that seen in the circle.
Finally, it perhaps needs clarifying that we could not see evidence of this type of firing pattern change (i.e. both the across- and within-trial rate decrease) in the other cells recorded at the same time. We re-emphasise here that the precise nature and time course of the remapping is individual to each cell.

Fig. 6A–C. Example of a cell with an initially homotopic firing pattern (similar position in both shapes) that gradually develops a heterotopic pattern (different position in each shape). **A)** Firing rate maps from seven consecutive trials beginning and ending with square trials (from Trial b of day 20 to Trial b of day 21 of the main experiment in Lever et al. 2002) showing the evolution of the heterotopic pattern. **B)** and **C)** Firing rate maps of smaller time segments taken from trials d (**B**) and f (**C**) in the square. First and last maps on each row show the first and last 45 s of each trial respectively. The middle four maps show each trial segmented into equal quarters of 2 min each. These temporal sequences reveal the dynamics of the processes underlying remapping in a single cell. Note that the pattern divergence occurs both within trials (rapidly) and between trials (more slowly). Trial times were 10 min in the circle and 8 min in the square (this also applies to Fig. 7).
4.3 Shedding and recruitment in the network
The above consideration of changes in the contribution of single cells to the hippocampal representation should not blind us to the possibility of additional processes of changes in the hippocampal representation across cells. If we consider that the “active subset” of cells in a network that fire in an environment represent that environment (Muller 1996; McNaughton and Nadel 1990) then we must appreciate that this active subset changes. In other words, the hippocampal network may both shed cells from, and recruit cells to, the active subset.
Definitive evidence for these processes is difficult to obtain, but we suspect that both, particularly recruitment, do occur. The occurrence of recruitment is consistent with our finding in Lever et al. (2002) that more cells per day were recorded later, rather than earlier, in the time-series from each of the three animals. Note that if some cells initially firing in both environments later become silent in one of them (for which we have good evidence) in a situation of *no recruitment* at all, then it follows that later on there will be fewer cells in absolute terms representing each environment. This seems unlikely given that we continue to observe plenty of active cells near the tetrode throughout several weeks of continuous recording. Pattern differentiation involving cells becoming silent thus implies recruitment and some sort of normalisation process through which the total number of active cells in an environment remains approximately constant. These issues are vital for understanding the network dynamics in a more than superficial manner. What about the data? Demonstrating either shedding or recruitment faces the same technical problem: it is hard to provide convincing evidence that a particular cell is within the sensitive range of the electrode but is not firing. This problem may be mitigated by recording during sleep or anaesthesia, when normally silent cells may fire, before or after environmental experience (Best and Thompson 1989; Wilson and McNaughton 1993), or by more global imaging methods (Guzowski et al. 1999).
5 Discussion

5.1 Incidental or unsupervised Hebbian learning
Reward-mediated plasticity is clearly required by models of behavioural learning, and experimental evidence for its occurrence is growing (e.g. Kilgard and Merzenich 1998; Schultz et al. 1997). Many models of mammalian learning in the rat and monkey involve using reward to shape the animal’s learning and behaviour. These models may be more tractable in terms of analysing and manipulating learning at the level of the animal’s behaviour. They fail to capture, however, the kind of learning that Hebb often emphasised, and indeed that which is often equated with learning from experience in the popular mind: the automatic acquisition of knowledge. Here we have presented evidence at the level of cells and assemblies for the unsupervised learning described by Hebb. We have described two types of plasticity in place cells associated with environmental changes that are good candidates for such Hebbian learning: first, the rapid changes sometimes induced by insertion of a barrier; second, the slow, experience-dependent, incremental divergence of the representations of environments of different shape over trials, which is sometimes accompanied by more rapid but less permanent changes within a trial. An important issue to be explored is the degree to which the rapid plasticity seen in barrier experiments is stable in the long term, as we suspect it is.

Fig. 7A–C. Example of a cell that initially fires in related locations in both shapes and gradually comes to fire in only one shape (the circle). **A)** Firing rate maps from the middle trials of each day from day 10 to day 20 of the main experiment in Lever et al. (2002). The top row shows transformed-circle trials (i.e. after topological transformation of the circle data into square data), for direct comparison with firing in the square (bottom row). After day 15, the cell fires in the circle only (top row, D16–D20). **B)** Firing rate maps of smaller time segments taken from the eight consecutive trials from Trial e on day 14 to Trial f on day 15. Firing rate maps are shown for the 1st, 2nd, and last third of each trial. Right-hand panels show the first 60 seconds of each trial in the square. Note that the firing is always highest in the first third of each square trial, and then declines quite rapidly within the trial. Indeed, a disproportionate amount of firing takes place in the first 60 s. Note too that this early-phase firing also declines in strength over trials, from the last trial of day 14 to the last trial of day 15. Firing in the circle shows none of these patterns and is basically stable.
There are other candidates for Hebbian learning paradigms involving place cells. Perhaps the most obvious is the experience-dependent development of asymmetry in place fields (Ekstrom et al. 2001; Mehta et al. 1997, 2000). Although clearly demonstrated to be dependent on the NMDA receptor, this plasticity seems to be rather short-lived (resetting daily despite repetitions) compared to the creation of the long-lasting cell assemblies envisaged by Hebb. Also, it is perhaps harder to see the evolutionary selection pressure behind a behavioural use for asymmetric or expanding field shapes than for, say, map divergence (e.g. Bostock et al. 1991; Jeffery 2000; Kentros et al. 1998; Lever et al. 2002; Sharp 1997; but see Blum and Abbott 1996).
5.2 What kinds of plasticity are involved in incremental remapping?
Can our data be related to physiological models of hippocampal plasticity? Figures 6 and 7 suggest a possible role for depression (short term and long term) as well as LTP, and more speculatively perhaps, the potential for both processes at the same synapse as in the BCM learning rule used by Fuhs and Touretzky (2000). Recent plasticity studies have suggested that both depression and enhancement can occur depending upon activity frequency and timing (Martin et al. 2000), as
suggested by the BCM rule, and that this frequency-response curve can be altered in favour of LTD in mouse mutants (Bach et al. 1995). It might be interesting to compare these mutant mice against mutants with impairments restricted to the processes related to LTP alone in a gradual remapping paradigm.
What does the time-course of remapping tell us? Although there can be several interpretations of the intra- and inter-trial processes in Figs. 6 and 7, they are sufficiently striking to pose an important test for any remapping model to reproduce them in some of the cells of a simulated network. Can the “two steps forward, one step back” remapping of both cells be interpreted in the same way?
One possibility is that comparable mechanisms for Hebbian learning exist in both long- and short-term forms. Indeed, a short-term (i.e. rapidly decaying) potentiation seems to invariably accompany the occurrence of the more famous long-term potentiation in experimental studies (e.g. Bliss and Collingridge 1993; McNaughton 1982). These parallel forms of plasticity have been reflected in various computational models of short- and long-term memory processes as ‘fast’ and ‘slow’ connection weights (e.g. Gardner-Medwin 1989; Hinton and Plaut 1987; Burgess and Hitch 1999). Thus, similar changes to a connection weight may occur both as a large amplitude but rapidly decaying change, and as a small amplitude permanent adjustment.
An interesting aspect of the place cell changes shown in Figs. 6 and 7 seems to point strongly towards a parallel ‘fast and slow’ interpretation. That is, whatever process causes the slow divergence in the representation of each environment across trials, *the same process* also appears to occur within each trial. However, the changes within each trial proceed relatively quickly, while only a small and incremental residue of the within-trial changes remain over the longer term. Thus the process of divergence at the beginning of the next trial has advanced only a modest amount from that at the beginning of the previous trial despite the rapid advance made during the trial.
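A toy calculation shows how parallel fast and slow weights reproduce this pattern; the split between components, the decay factor, and the event sizes below are illustrative assumptions rather than fitted values.

```python
# Each learning event adds a large but rapidly decaying fast component and a
# small permanent slow component (in the spirit of Hinton and Plaut 1987).

def learn_event(slow, fast, delta=0.1, slow_frac=0.05):
    return slow + slow_frac * delta, fast + (1 - slow_frac) * delta

def between_trials(fast, retained=0.2):
    return fast * retained                  # most of the fast component decays

slow, fast = 0.0, 0.0
for trial in range(5):
    for _ in range(10):                     # ten learning events within a trial
        slow, fast = learn_event(slow, fast)
    end = slow + fast
    fast = between_trials(fast)
    print(f"trial {trial}: end {end:.2f} -> next start {slow + fast:.2f}")
```

Each trial ends well ahead of where it began, but most of that advance decays before the next one, leaving only the small permanent residue; across trials the divergence therefore accumulates slowly even though the within-trial change is rapid.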
A different level of interpretation could include a hierarchical process of environmental recognition, in which the balance between the processes of pattern completion and pattern separation alters over time. Thus, on entry to an environment, it might be behaviourally useful to first classify the general type of environment and then to become successively more specific. Within an auto-associative memory this might correspond to relaxation of the pattern of activation to the basin of attraction for the representations of all similar environments followed by settling into the representation corresponding to a specific environment. Alternatively, within CA3, there might be some dynamic alteration to the relative influence of the inputs from the dentate gyrus (supporting pattern separation) compared to the recurrent collaterals (supporting pattern completion).
### 5.3 Relationship to hippocampal network models
The geometric model captures much of the place cell data in a single static environment. Using barriers, we can see that bimodal place fields can be created in accordance with the model. We should perhaps stress that our data (not presented here) also clearly show that such bimodality, indicative of local or partial remapping (Muller and Kubie 1987; Muller 1996), occurs within the same representation: the active subset of cells representing the square does not change, i.e. there is no complex remapping (Muller 1996).
However, the dynamic field changes we have described require synaptic plasticity. The rapid remapping shown by Bostock et al. (1991) might be consistent with the addition of a learned colour preference to the BVCs of the geometric model. In addition, the reduction in firing rates following novel cue card manipulations (Fenton and Muller 2000) might be consistent with a strengthening of the initially hard-wired inputs to place cells that are active in a much-visited environment (Burgess and Hartley 2002). More generally, in terms of modelling remapping by incorporating plasticity into feed-forward models, Fuhs and Touretzky’s (2000) model is pertinent to some of the data presented here, such as those in Figs. 5 and 7. However, this model only deals with the evolution of monotopy, i.e. how a cell becomes silent in one of the two environments, and would also need to account for the processes whereby homotopic patterns are replaced by heterotopic patterns.
In terms of auto-associative models, the charts model (Samsonovich and McNaughton 1997) does not deal well with bimodal firing such as that seen in Fig. 4, and predicts only instantaneous, discontinuous remappings between different environments. Such remapping should be seen in the time it takes to minimally sample a standard environment (1–3 minutes). Although Bostock et al. (1991) observed rapid remapping (in many but not all of their animals) after changing the colour of the cue card, even this form of remapping was not fully expressed until the next day. Clearly, obligatorily instantaneous remapping is also inconsistent with our recent data. A revised attractor model (e.g. Kali and Dayan 2000) could possibly capture important aspects of the plasticity we have seen, although such models would still predict place fields that maintain their relative locations rather than the absolute distances to boundaries shown by O’Keefe and Burgess (1996) (see the earlier discussion of these models).
How conservative can we be in adducing mechanisms needed to explain the full spectrum of data on stability and remapping? Can a sliding threshold based on the magnitude of difference between environments explain Bostock et al.’s (1991) remapping in terms of a very fast version of gradual remapping? It will be important to describe the temporal dynamics and other aspects of remapping in more detail in order to appreciate any distinctions that may exist between map separation effected by rapid remappings based on several multimodal environmental differences (as in Kentros et al. 1998) and slow remapping effected through repeated experience (Lever et al. 2002). Both may result in an equally thorough pattern divergence. Clearly, as well as environments being “sufficiently different” to induce remapping
(Muller 1996), experience is also important. Remapping may proceed even with small differences, so long as they are perceived to be stable differences.
6 Conclusions
We have argued that study of the hippocampal representation of environmental geometry in freely moving rats should provide one of the best paradigms within which to observe the effects of Hebbian learning at the level of single cells in vivo. The slow, long-term and incremental plasticity of this representation observed across environments of different shape (Lever et al. 2002) appears to be at least consistent with Hebb’s postulate. These changes also appear to be consistent with the incidental or unsupervised nature of the type of learning stressed by Hebb (1949), and additionally indicate constraints on the nature and time-course of its exact implementation. Other forms of plasticity in the place cell representation, such as some of the rapid changes caused by insertion of a barrier (Sect. 3), or changes to the sensory features of an environment (e.g. Bostock et al. 1991) may indicate the presence of long-term changes occurring over much shorter time-scales. Interestingly, closer analysis of the dynamics of the slow incremental shape-based remapping also reveals the action of a faster but less enduring form of plasticity. Taken together with our quantitative understanding of the basic feed-forward organisation of the place cell system (Hartley et al. 2000), these data provide a powerful test-bed for investigation of the mechanisms of plasticity at work within a cognitive representation.
Acknowledgements. This work was supported by the Medical Research Council of the United Kingdom.
References
Amit DJ (1989) Modeling brain function. The world of attractor neural networks. Cambridge University Press
Bach ME, Hawkins RD, Osman M, Kandel ER, Mayford M (1995) Impairment of spatial but not contextual memory in CaMKII mutant mice with a selective loss of hippocampal LTP in the range of the theta frequency. Cell 81: 905–915
Barnes CA, Suster MS, Shen J, McNaughton BL (1997) Multistability of cognitive maps in the hippocampus of old rats. Nature 388: 272–275
Best PJ, Thompson LT (1989) Persistence, reticence, and opportunism of place-field activity in hippocampal neurons. Psychobiol 17: 230–235
Bienenstock EL, Cooper LN, Munro PW (1982) Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J Neurosci 2: 32–48
Bliss TVP, Collingridge GL (1993) A synaptic model of memory: long-term potentiation in the hippocampus. Nature 361: 31–39
Bliss TV, Lomo T (1973) Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J Physiol Lond 232: 331–356
Blodgett HC (1929) The effect of the introduction of reward upon the maze performance of rats. Univ Calif Publ Psychol 4: 113–134
Blum KI, Abbott LF (1996) A model of spatial map formation in the hippocampus of the rat. Neural Comput 8: 85–93
Bostock E, Muller RU, Kubie JL (1991) Experience-dependent modifications of hippocampal place cell firing. Hippocampus 1: 193–205
Brunel N, Trullier O (1998) Plasticity of directional place fields in a model of rodent CA3. Hippocampus 8: 651–665
Burgess N, Hartley T (2002) Orientational and geometric determinants of place and head-direction. Neural Inf Process Syst 14 (in press)
Burgess N, Hitch GJ (1999) Memory for serial order: a network model of the phonological loop and its timing. Psychol Rev 106: 551–581
Burgess N, Jackson A, Hartley T, O’Keefe J (2000) Predictions derived from modeling the hippocampal role in navigation. Biol Cybern 83: 301–312
Burgess N, O’Keefe J (1996) Neuronal computations underlying the firing of place cells and their role in navigation. Hippocampus 6: 749–762
Cohen NJ, Eichenbaum H (1993) Memory, amnesia, and the hippocampal system. MIT Press, Cambridge, Mass
Davis S, Butcher SP, Morris RG (1992) The NMDA receptor antagonist D-2-amino-5-phosphonopentanoate (D-AP5) impairs spatial learning and LTP in vivo at intracerebral concentrations comparable to those that block LTP in vitro. J Neurosci 12: 21–34
Ekstrom AD, Meltzer J, McNaughton BL, Barnes CA (2001) NMDA receptor antagonism blocks experience-dependent expansion of hippocampal “place fields”. Neuron 31: 631–638
Fenton AA, Muller RU (2000) Conjoint control of hippocampal place cell firing by two visual stimuli. J Gen Physiol 116: 191–209
Fuhs MC, Touretzky DS (2000) Synaptic learning models of map separation in the hippocampus. Neurocomput 32: 379–384
Gardner-Medwin AR (1976) The recall of events through the learning of associations between their parts. Proc Roy Soc Lond B Biol Sci 194: 375–402
Gardner-Medwin AR (1989) Doubly modifiable synapses: a model of short and long term auto-associative memory. Proc Roy Soc London B Biol Sci 238: 137–154
Gothard KM, Skaggs WE, McNaughton BL (1996) Dynamics of mismatch correction in the hippocampal ensemble code for space: interaction between path integration and environmental cues. J Neurosci 16: 8027–8040
Gustafsson B, Wigstrom H (1986) Hippocampal long-lasting potentiation produced by pairing single volleys and brief conditioning tetani evoked in separate afferent. J Neurosci 6: 1575–1582
Guzowski JF, McNaughton BL, Barnes CA, Worley PF (1999) Environment-specific expression of the immediate-early gene Arc in hippocampal neuronal ensembles. Nature Neurosci 2: 1120–1124
Harley CW (1979) Arm choices in a sunburst maze: effects of hippocampectomy in the rat. Physiology and Behavior 23: 283–290
Hartley T, Burgess N, Lever C, Cacucci F, O’Keefe J (2000) Modeling place fields in terms of the cortical inputs to the hippocampus. Hippocampus 10: 369–379
Hasselmo ME, Wyble BP, Wallenstein GV (1996) Encoding and retrieval of episodic memories: role of cholinergic and GABAergic modulation in the hippocampus. Hippocampus 6: 693–708
Hebb DO (1949) The organization of behavior. Wiley, New York
Hill AJ (1978) First occurrence of hippocampal spatial firing in a new environment. Exp Neurol 62: 282–297
Hinton GE, Plaut DC (1987) Using fast weights to deblur old memories. Proceedings of the Ninth Annual Conference of the Cognitive Science Society, Seattle, WA
Hollup SA, Molden S, Donnett JG, Moser MB, Moser EI (2001) Accumulation of hippocampal place fields at the goal location in an annular watermaze task. J Neurosci 21: 1635–1644
Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79: 2554–2558
Jeffery KJ (2000) Plasticity of the hippocampal cellular representation of place. In: Holscher C (ed) Neuronal mechanisms of memory formation. Cambridge University Press, Cambridge
Jensen O, Lisman JE (1996) Hippocampal CA3 region predicts memory sequences: accounting for the phase precession of place cells. Learn Mem 3: 279–287
Kali S, Dayan P (2000) The involvement of recurrent connections in area CA3 in establishing the properties of place fields: a model. J Neurosci 20: 7463–7477
Kentros C, Hargreaves E, Hawkins RD, Kandel ER, Shapiro M, Muller RV (1998) Abolition of long-term stability of new hippocampal place cell maps by NMDA receptor blockade. Science 280: 2121–2126
Keith JR, McVety KM (1988) Latent place learning in a novel environment and the influences of prior training in rats. Psychobiol 16: 146–151
Kilgard MP, Merzenich MM (1998) Cortical map reorganization enabled by nucleus basalis activity. Science 279: 1714–1718
Kohonen T (1972) Correlation matrix memories. IEEE Trans Comp C-21: 353–359
Lever C, Wills T, Cacucci F, Burgess N, O’Keefe J (2002) Long-term plasticity in the hippocampal place-cell representation of environmental geometry. Nature 416: 90–94
Levy WB, Steward O (1983) Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neurosci 8: 791–797
Marr D (1971) Simple memory: a theory for archicortex. Philos Trans R Soc Lond B Biol Sci 262: 23–81
Martin SJ, Grimwood PD, Morris RGM (2000) Synaptic plasticity and memory: an evaluation of the hypothesis. Annu Rev Neurosci 23: 649–711
McClelland JL, McNaughton BL, O’Reilly RC (1995) Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol Rev 102: 419–457
McHugh TJ, Blum KI, Tsien JZ, Tonegawa S, Wilson MA (1996) Impaired hippocampal representation of space in CA1-specific NMDAR1 knockout mice. Cell 87: 1339–1349
McNaughton BL (1982) Long-term synaptic enhancement and short-term post-tetanic potentiation in the rat fascia dentata act through different mechanisms. J Physiology 324: 249–262
McNaughton BL (1996) Cognitive cartography. Nature 381: 368–369
McNaughton BL, Barnes CA, O’Keefe J (1983) The contributions of position, direction, and velocity to single unit activity in the hippocampus of freely-moving rats. Exp Brain Res 52: 41–49
McNaughton BL, Barnes CA, Gerrard JL, Gothard K, Jung MW, Knierim JJ, Kudrimoti H, Qin Y, Skaggs WE, Suster M, Weaver KL (1996) Deciphering the hippocampal polyglot: the hippocampus as a path integration system. J Exp Biol 199(Pt 1): 173–185
McNaughton BL, Morris RG (1987) Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends in Neurosci 10: 408–415
McNaughton BL, Nadel L (1990) Hebb-Marr networks and the neurobiological representation of action in space. In: Gluck MA, Rumelhart DE (eds) Neuroscience and connectionist theory. Lawrence Erlbaum Assoc, Hillsdale, NJ, pp 1–63
Mehta MR, Barnes CA, McNaughton BL (1997) Experience-dependent, asymmetric expansion of hippocampal place fields. Proc Natl Acad Sci USA 94: 8918–8921
Mehta MR, Quirk MC, Wilson MA (2000) Experience-dependent asymmetric shape of hippocampal receptive fields. Neuron 25: 707–715
Morris RG, Garrud P, Rawlins JN, O’Keefe J (1982) Place navigation impaired in rats with hippocampal lesions. Nature 297: 681–683
Morris RG, Frey U (1997) Hippocampal synaptic plasticity: role in spatial learning or the automatic recording of attended experience? Philos Trans R Soc Lond B Biol Sci 352: 1489–1503
Muller RU (1996) A quarter of a century of place cells. Neuron 17: 813–822
Muller RU, Kubie JL (1987) The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells. J Neurosci 7: 1951–1968
Muller RU, Bostock E, Taube JS, Kubie JL (1994) On the directional firing properties of hippocampal place cells. J Neurosci 14: 7235–7251
Muller RU, Stead M, Pach J (1996) The hippocampus as a cognitive graph. J Gen Physiol 107: 663–694
Muller RU, Stead M (1996) Hippocampal place cells connected by Hebbian synapses can solve spatial problems. Hippocampus 6: 709–719
Nakazawa K, Quirk MC, Chitwood RA, Watanabe M, Yeckel MF, Sun LD, Kato A, Carr CA, Johnston D, Wilson MA, Tonegawa S (2002) Requirement for hippocampal CA3 NMDA receptors in associative memory recall. Science 297: 211–218
O’Keefe J (1976) Place units in the hippocampus of the freely moving rat. Exp Neurol 51: 78–109
O’Keefe J, Burgess N (1996) Geometric determinants of the place fields of hippocampal neurons. Nature 381: 425–428
O’Keefe J, Conway DH (1978) Hippocampal place units in the freely moving rat: why they fire where they fire. Exp Brain Res 31: 573–590
O’Keefe J, Dostrovsky J (1971) The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res 34: 171–175
O’Keefe J, Nadel L (1978) The hippocampus as a cognitive map. Clarendon Press, Oxford
O’Keefe J, Speakman A (1987) Single unit activity in the rat hippocampus during a spatial memory task. Exp Brain Res 68: 1–27
O’Keefe J, Recce ML (1993) Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3: 317–330
Quirk GJ, Muller RU, Kubie JL (1990) The firing of hippocampal place cells in the dark depends on the rat’s recent experience. J Neurosci 10: 2008–2017
Quirk GJ, Muller RU, Kubie JL, Ranck JB (1992) The positional firing properties of medial entorhinal neurons: description and comparison with hippocampal place cells. J Neurosci 12: 1945–1963
Redish AD, Touretzky DS (1999) Separating hippocampal maps. In: Burgess N, Jeffery KJ, O’Keefe J (eds) The hippocampal and parietal foundations of spatial cognition. Oxford University Press, pp 203–219
Rotenberg A, Mayford M, Hawkins RD, Kandel ER, Muller RU (1996) Mice expressing activated CaMKII lack low frequency LTP and do not form stable place cells in the CA1 region of the hippocampus. Cell 87: 1351–1361
Rotenberg A, Abel T, Hawkins RD, Kandel ER, Muller RU (2000) Parallel instabilities of long-term potentiation, place cells, and learning caused by decreased Protein Kinase A activity. J Neurosci 20: 8096–8102
Rumelhart DE, Zipser D (1986) Feature discovery by competitive learning. In: Rumelhart DE, McClelland JL (eds) Parallel distributed processing, Vol 1: foundations. MIT Press, pp 151–193
Samsonovich A, McNaughton BL (1997) Path integration and cognitive mapping in a continuous attractor neural network model. J Neurosci 17: 5900–5920
Schultz W, Dayan P, Montague PR (1997) A neural substrate of prediction and reward. Science 275: 1593–1599
Sharp PE (1991) Computer simulation of hippocampal place cells. Psychobiol 19: 103–115
Sharp PE (1997) Subicular cells generate similar spatial firing patterns in two geometrically and visually distinctive environments: comparison with hippocampal place cells. Behav Brain Res 85: 71–92
Steele RJ, Morris RGM (1999) Delay-dependent impairment of a matching-to-place task with chronic and intrahippocampal infusion of the NMDA-antagonist D-AP5. Hippocampus 9: 118–136
Tanila H, Shapiro M, Gallagher M, Eichenbaum H (1997) Brain aging: changes in the nature of information coding by the hippocampus. J Neurosci 17: 5155–5166
Taube JS, Muller RU, Ranck JB Jr (1990) Head-direction cells recorded from the postsubiculum in freely moving rats. I: Description and quantitative analysis. J Neurosci 10: 420–435
Taube JS (1998) Head direction cells and the neuropsychological basis for a sense of direction. Prog Neurobiol 55: 225–256
Tolman EC (1932) Purposive behavior in animals and men. Century, New York
Tolman EC (1948) Cognitive maps in rats and men. Psychol Rev 55: 189–208
Treves A, Rolls ET (1992) Computational constraints suggest the need for two distinct input systems to the hippocampal CA3 network. Hippocampus 2: 189–199
Tsodyks MV, Skaggs WE, Sejnowski TJ, McNaughton BL (1996) Population dynamics and theta rhythm phase precession of hippocampal place cell firing: a spiking neuron model. Hippocampus 6: 271–280
Wilson MA, McNaughton BL (1993) Dynamics of the hippocampal ensemble code for space. Science 261: 1055–1058
Zhang K (1996) Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J Neurosci 16: 2112–2126
Zipser D (1985) A computational model of hippocampal place fields. Behav Neurosci 99: 1006–1018
A Functional Approach to External Graph Algorithms
James Abello, Adam L. Buchsbaum, and Jeffery R. Westbrook
AT&T Labs, 180 Park Ave., Florham Park, NJ 07932, USA,
{abello,alb,jeffw}@research.att.com,
http://www.research.att.com/info/{abello,alb,jeffw}.
Abstract. We present a new approach for designing external graph algorithms and use it to design simple external algorithms for computing connected components, minimum spanning trees, bottleneck minimum spanning trees, and maximal matchings in undirected graphs and multi-graphs. Our I/O bounds compete with those of previous approaches. Unlike previous approaches, ours is purely functional—without side effects—and is thus amenable to standard checkpointing and programming language optimization techniques. This is an important practical consideration for applications that may take hours to run.
1 Introduction
We present a divide-and-conquer approach for designing external graph algorithms, i.e., algorithms on graphs that are too large to fit in main memory. Our approach is simple to describe and implement: it builds a succession of graph transformations that reduce to sorting, selection, and a recursive bucketing technique. No sophisticated data structures are needed. We apply our techniques to devise external algorithms for computing connected components, minimum spanning trees (MSTs), bottleneck minimum spanning trees (BMSTs), and maximal matchings in undirected graphs and multi-graphs.
We focus on producing algorithms that are purely functional. That is, each algorithm is specified as a sequence of functions applied to input data and producing output data, with the property that information, once written, remains unchanged. The function is then said to have no “side effects.” A functional approach has several benefits. External memory algorithms may run for hours or days in practice. The lack of side effects on the external data allows standard checkpointing techniques to be applied [16, 19], increasing the reliability of any real application. A functional approach is also amenable to general purpose programming language transformations that can reduce running time. (See, e.g., Wadler [22].) We formally define the functional I/O model in Section 1.1.
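As a concrete illustration of why the write-once discipline helps, the sketch below (ours, not the paper’s; the file names and the transform are placeholders) implements one functional pipeline stage: it never overwrites its input and publishes its output atomically, so a crashed run can be restarted and will skip every stage whose output already exists.

```python
import os

def stage(transform, src, dst):
    """One functional pipeline stage: read src, write dst, never mutate src."""
    if os.path.exists(dst):                 # checkpoint: stage already complete
        return dst
    tmp = dst + ".tmp"
    with open(src) as fin, open(tmp, "w") as fout:
        for line in fin:                    # a streaming pass: O(scan) I/Os
            fout.write(transform(line))
    os.replace(tmp, dst)                    # atomic publish of immutable output
    return dst
```

Intermediate files whose consumers have all finished can then be reclaimed wholesale, corresponding to the space reclamation discussed in Sect. 1.1.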
The key measure of external memory graph algorithms is the disk I/O complexity. For the problems mentioned above, our algorithms perform $O\left(\frac{E}{B} \log_{M/B} \frac{V}{B} \log_2 \frac{V}{M}\right)$ I/Os, where $E$ is the number of edges, $V$ the number of vertices, $M$ the size of main memory, and $B$ the disk block size. The BMST and maximal matching results are new. The asymptotic I/O complexities of our connected components and MST algorithms match those of Chiang et al. [8]. Kumar and Schwabe [15] give algorithms for breadth-first search (which can compute connected components) and MSTs that perform $O(V + \frac{E}{B} \log_2 \frac{E}{B})$ and $O\left(\frac{E}{B} \log_{M/B} \frac{E}{B} \log_2 B + \frac{E}{B} \log_2 V\right)$ I/Os, respectively. Our connected components algorithm is asymptotically better when $V < M^2/B$, and our
MST algorithm is asymptotically better when $V < M/B$. While the above algorithms of Chiang et al. [8] are functional, those of Kumar and Schwabe [15] are not. Compared to either previous approach, our algorithms are simpler to describe and implement.
We also consider a semi-external model for graph problems, in which the vertices but not the edges fit in memory. This is not uncommon in practice, and when vertices can be kept in memory, significantly more efficient algorithms are possible. We design new algorithms for external grouping and sorting with duplicates and apply them to produce better I/O bounds for the semi-external case of connected components.
We begin below by describing the I/O model. In Section 2, we sketch two previous approaches for designing external graph algorithms. In Section 3, we describe our functional approach and detail a suite of simple graph transformations. In Section 4, we apply our approach to design new, simple algorithms for computing connected components, MSTs, BMSTs, and maximal matchings. In Section 5, we consider semi-external graph problems and give improved I/O bounds for the semi-external case of connected components. We conclude in Section 6.
### 1.1 The Functional I/O Model
We adapt the I/O model of complexity as defined by Aggarwal and Vitter [1]. For some problem instance, we define $N$ to be the number of items in the instance, $M$ to be the number of items that can fit in main memory, $B$ to be the number of items per disk block, and $b = \lceil M/B \rceil$. A typical compute server might have $M \approx 10^9$ and $B \approx 10^3$.
We assume that the input graph is presented as an unordered list of edges, each edge a pair of endpoints plus possibly a weight. We define $V$ to be the number of vertices, $E$ to be the number of edges, and $N = V + E$. (We abuse notation and also use $V$ and $E$ to be the actual vertex and edge sets; the context will clarify any ambiguity.) In general, $1 < B \ll M < N$. Our algorithms work in the single-disk model; we discuss parallel disks in Section 6. For the connected components problem, the output is a delineated list of component edges, $\{C_1, C_2, \ldots, C_k\}$, where $k$ is the number of components: each $C_i$ is the list of edges in component $i$, and the output is the file of $C_i$s catenated together, with a separator record between adjacent components. For the MST and BMST problems the output is a delineated list of edges in each tree in the spanning forest. For matching, the output is the list of edges in the matching.
Following Chiang et al. [8] we define $scan(N) = \lceil N/B \rceil$ to be the number of disk I/Os required to transfer $N$ contiguous items between disk and memory, and we define $sort(N) = \Theta(scan(N)\log_b \frac{N}{B})$ to be the number of I/Os required to sort $N$ items. The I/O model stresses the importance of disk accesses over computation for large problem instances. In particular, time spent computing in main memory is not counted.
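To make these quantities concrete (our arithmetic, not the paper's): with $M \approx 10^9$ and $B \approx 10^3$ as above, $b \approx 10^6$, so a list of $N = 10^{12}$ items occupies $scan(N) = 10^9$ blocks and can be sorted in $scan(N) \cdot \lceil \log_b(N/B) \rceil = 10^9 \cdot \lceil \log_{10^6} 10^9 \rceil = 2 \times 10^9$ I/Os; that is, at realistic memory sizes, sorting costs only a small constant number of scans.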
The *functional I/O* (FIO) model is as above, but operations can make only functional transformations to data, which do not change the input. Once a disk cell, representing some piece of state, is allocated and written, its contents cannot be changed. This imposes a sequential, write-once discipline on disk writes, which allows the use of standard checkpointing techniques [16, 19], increasing the reliability of our algorithms. When results of intermediate computations are no longer needed, space is reclaimed, e.g., through garbage collection. The maximum disk space active at any one time is used to measure the space complexity. All of our algorithms use only linear space.
2 Previous Approaches
2.1 PRAM Simulation
Chiang et al. [8] show how to simulate a CRCW PRAM algorithm using one processor and an external disk, thus giving a general method for constructing external graph algorithms from PRAM graph algorithms. Given a PRAM algorithm, the simulation maintains on disk arrays $A$, which contains the contents of main memory, and $T$, which contains the current state for each processor. Each step of the algorithm is simulated by constructing an array, $D$, which contains for each processor the memory address to be read. Sorting $A$ and $D$ by memory address and scanning them in tandem suffices to update $T$. A similar procedure is used to write updated values back to memory.
Each step thus requires a constant number of scans and sorts of arrays of size $|T|$ and a few scans of $A$. Typically, therefore, a PRAM algorithm using $N$ processors and $N$ space to solve a problem of size $N$ in time $t$ can be simulated in external memory by one processor using $O(t \cdot \text{sort}(N))$ I/Os. Better bounds are possible if the number of active processors and memory cells decreases linearly over the course of the PRAM algorithm. These techniques can, for example, simulate the PRAM maximal matching algorithm of Kelsen [14] in $O(\text{sort}(E) \log_2^3 V)$ I/Os and the PRAM connected components and MST algorithms of Chin, Lam, and Chen [9] in $O(\text{sort}(E) \log_2 \frac{V}{M})$ I/Os.
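To illustrate the simulation's core step, here is a minimal in-memory sketch of ours (not code from [8]); Python lists and `sorted` calls stand in for the external arrays, scans, and sorts. One simulated PRAM read step reduces to two sorts and a tandem scan:

```python
def simulate_read_step(A, reads):
    """A: list of (address, value) pairs, the PRAM memory contents.
    reads: list of (processor_id, address) read requests; every requested
    address is assumed to exist in A. Returns (processor_id, value) pairs."""
    A_sorted = sorted(A)                       # sort memory by address
    D = sorted(reads, key=lambda r: r[1])      # sort requests by address
    out, i = [], 0
    for pid, addr in D:                        # tandem scan of A_sorted and D
        while A_sorted[i][0] < addr:
            i += 1
        out.append((pid, A_sorted[i][1]))      # deliver A[addr] to processor pid
    return sorted(out)                         # re-sort by processor id

# Example: three processors reading addresses 2, 0, 2.
A = [(0, 'x'), (1, 'y'), (2, 'z')]
print(simulate_read_step(A, [(0, 2), (1, 0), (2, 2)]))
# [(0, 'z'), (1, 'x'), (2, 'z')]
```

A symmetric sort-and-scan pass writes updated values back to memory, producing new copies of $T$ and $A$ as the FIO model requires.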
The PRAM simulation works in the FIO model, if each step writes new copies of $T$ and $A$. To the best of our knowledge, however, no algorithm based on the simulation has been implemented. Such an implementation would require not only a practical PRAM algorithm but also either a meticulous direct implementation of the corresponding external memory simulation or a suitable low-level machine description of the PRAM algorithm together with a general simulation tool. In contrast, a major goal of our work is to provide simple, direct, and implementable algorithms.
2.2 Buffering Data Structures
Another recent approach is based on external variants of classical internal data structures. Arge [3] introduces buffer trees, which support sequences of insert, delete, and deletemin operations on $N$ elements in $O\left(\frac{1}{B} \log_b \frac{N}{B}\right)$ amortized I/Os each. Kumar and Schwabe [15] introduce a variant of the buffer tree, achieving the same heap bounds as Arge. These bounds are optimal, since the heaps can be used to sort externally. Kumar and Schwabe [15] also introduce external tournament trees. The tournament tree maintains the elements 1 to $N$, each with a key, subject to the operations delete, deletemin, and update: deletemin returns the element of minimum key, and update reduces the key of a given element. Each tournament tree operation takes $O\left(\frac{1}{B} \log_2 \frac{N}{B}\right)$ amortized I/Os.
These data structures work by buffering operations in nodes and performing updates in batches. The maintenance procedures on the data structures are intuitively simple but involve many implementation details. The data structures also are not functional. The node-copying techniques of Driscoll et al. [10] could be used to make them functional, but at the cost of significant extra I/O overhead.
Finally, while such data structures excel in computational geometry applications, they are hard to apply to external graph algorithms. Consider computing an MST. The
classical greedy algorithm repeatedly performs deletemins on a heap to find the next vertex $v$ to attach to the tree, $T$, and then performs updates for each neighbor $w$ of $v$ that becomes closer to $T$ by way of $v$. In the external version of the algorithm, however, finding the neighbors of $v$ is non-trivial, requiring a separate adjacency list. Furthermore, determining the current key of a given neighbor $w$ (distance of $w$ to $T$) is problematic, yet the key is required to decide whether to perform the update operation.
Intuitively, while the update operations on the external data structures can be buffered, yielding efficient amortized I/O complexity, standard applications of these data structures in graph algorithms require certain queries (key finding, e.g.) to be performed on-line. Current applications of these data structures thus require ancillary data structures to obviate this problem, increasing the I/O and implementation complexity.
3 Functional Graph Transformations
In this paper, we utilize a divide-and-conquer paradigm based on a few graph transformations that, for many problems, preserve certain critical properties. We implement each stage in our approach using simple and efficient techniques: sorting, selection, and bucketing. We illustrate our approach with connected components. Let $G = (V, E)$ be a graph. Let $f(G) \subseteq V \times V$ be a forest of rooted stars (trees of height one) representing the connected components of $G$. That is, if $r_G(v)$ is the root of the star containing $v$ in $f(G)$, then $r_G(v) = r_G(u)$ if and only if $v$ and $u$ are in the same connected component in $G$; $f(G)$ is presented as a delineated edge list.
Consider $E' \subseteq V \times V$, and let $G' = G/E'$ denote the result of contracting all vertex pairs in $E'$. (This generalizes the usual notion of contraction, by allowing the contraction of vertices that are not adjacent in $G$.) For any $x \in V$, let $s(x)$ be the supervertex in $G'$ into which $x$ is contracted. It is easy to prove that if each pair in $E'$ contains two vertices in the same connected component, then $r_{G'}(s(v)) = r_{G'}(s(u))$ if and only if $r_G(v) = r_G(u)$; i.e., contraction preserves connected components. Thus, given procedures to contract a list of components of a graph and to re-expand the result, we derive the following simple algorithm to compute $f(G)$.
1. Let $E_1$ be any half of the edges of $G$; let $G_1 = (V, E_1)$.
2. Compute $f(G_1)$ recursively.
3. Let $G' = G/E(G_1)$.
4. Compute $f(G')$ recursively.
5. $f(G) = f(G') \cup R(f(G'), f(G_1))$, where $R(X, Y)$ relabels edge list $Y$ by forest $X$: each vertex $u$ occurring in edges in $Y$ is replaced by its parent in $X$ if it exists.
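The following minimal in-memory sketch (ours, not the paper's code) mirrors the five steps above. A dictionary mapping each vertex to its root stands in for the delineated star-forest edge list, a union-find pass stands in for the base case "fits in main memory" (the `THRESHOLD` value is an illustrative stand-in for $M$), and the contraction of step 3 is realized by relabeling every edge by $f(G_1)$ and dropping self-loops:

```python
THRESHOLD = 4  # pretend "main memory" holds this many edges

def star_forest(edges):
    """Components of `edges` as a rooted-star forest: r(u) == r(v) iff
    u and v are connected."""
    if len(edges) <= THRESHOLD:                     # base case: "in memory"
        return _star_forest_in_memory(edges)
    E1 = edges[: len(edges) // 2]                   # step 1: any half of E
    f1 = star_forest(E1)                            # step 2: recurse on G1
    Gc = [(f1.get(u, u), f1.get(v, v)) for u, v in edges]  # step 3: contract
    Gc = [(u, v) for u, v in Gc if u != v]          # E1's edges become loops
    f2 = star_forest(Gc)                            # step 4: recurse on G'
    out = dict(f2)                                  # step 5: f(G') union R(f(G'), f(G1))
    for v, r in f1.items():
        out[v] = f2.get(r, r)                       # relabel f(G1) by f(G')
    return out

def _star_forest_in_memory(edges):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]           # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return {v: find(v) for v in list(parent)}

f = star_forest([(1, 2), (2, 3), (4, 5), (6, 7), (5, 6), (8, 9)])
assert f[1] == f[3] and f[4] == f[7] and f[1] != f[8]
```

Note that every edge of $E_1$ becomes a self-loop after step 3, so the second recursive call sees at most half the edges; this is what yields the $2T(E/2)$ recursion analyzed in Theorem 1.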
Our approach is functional if selection, relabeling, and contraction can be implemented without side effects on their arguments. We show how to do this below. In the following section, we use these tools to design functional external algorithms for computing connected components, MSTs, BMSTs, and maximal matchings.
3.1 Transformations
**Selection.** Let $I$ be a list of items with totally ordered keys. $\text{Select}(I, k)$ returns the $k$th smallest element of $I$; i.e., $|\{x \in I | x < \text{Select}(I, k)\}| = k - 1$. We adapt the classical algorithm for $\text{Select}(I, k)$ [2].
1. Partition $I$ into $j$-element subsets, for some $j \approx M$.
2. Sort each subset in main memory. Let $S$ be the set of medians of the subsets.
3. $m \leftarrow \text{Select}(S, \lceil |S|/2 \rceil)$.
4. Let $I_1, I_2, I_3$ be those elements less than, equal to, and greater than $m$, respectively.
5. If $|I_1| \geq k$, then return $\text{Select}(I_1, k)$.
6. Else if $|I_1| + |I_2| \geq k$, then return $m$.
7. Else return $\text{Select}(I_3, k - |I_1| - |I_2|)$.
**Lemma 1.** $\text{Select}(I, k)$ can be performed in $O(\text{scan}(|I|))$ I/Os in the FIO model.
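A direct in-memory transcription of the seven steps (ours; the parameter `j` stands in for the $\approx M$ items that fit in one memory load):

```python
def select(items, k, j=5):
    """Return the kth smallest element of items (1-indexed)."""
    if len(items) <= j:
        return sorted(items)[k - 1]
    chunks = [items[i:i + j] for i in range(0, len(items), j)]  # step 1
    S = [sorted(c)[len(c) // 2] for c in chunks]                # step 2: medians
    m = select(S, (len(S) + 1) // 2, j)                         # step 3
    I1 = [x for x in items if x < m]                            # step 4
    I2 = [x for x in items if x == m]
    I3 = [x for x in items if x > m]
    if len(I1) >= k:                                            # step 5
        return select(I1, k, j)
    if len(I1) + len(I2) >= k:                                  # step 6
        return m
    return select(I3, k - len(I1) - len(I2), j)                 # step 7

assert select(list(range(100, 0, -1)), 17) == 17
```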
**Relabeling.** Given a forest $F$ as an unordered sequence of tree edges $\{(p(v), v), \ldots\}$, and an edge set $I$, relabeling produces a new edge set $I' = \{\{r(u), r(v)\} \mid \{u, v\} \in I\}$, where $r(x) = p(x)$ if $(p(x), x) \in F$, and $r(x) = x$ otherwise. That is, for each edge $\{u, v\} \in I$, each of $u$ and $v$ is replaced by its respective parent, if it exists, in $F$. We implement relabeling as follows.
1. Sort $F$ by source vertex, $v$.
2. Sort $I$ by second component.
3. Process $F$ and $I$ in tandem.
(a) Let $\{s, h\} \in I$ be the current edge to be relabeled.
(b) Scan $F$ starting from the current edge until finding $(p(v), v)$ such that $v \geq h$.
(c) If $v = h$, then add $\{s, p(v)\}$ to $I'$; otherwise, add $\{s, h\}$ to $I'$.
4. Repeat Steps 2 and 3 on $I'$, this time relabeling first components, to construct the final output.
Relabeling is related to pointer jumping, a technique widely applied in parallel graph algorithms [11]. Given a forest $F = \{(p(v), v), \ldots\}$, pointer jumping produces a new forest $F' = \{(p(p(v)), v) \mid (p(v), v) \in F\}$; i.e., each $v$ of depth two or greater in $F$ points in $F'$ to its grandparent in $F$. (Define $p(v) = v$ if $v$ is a root in $F$.) Our implementation of relabeling is similar to Chiang’s [7] implementation of pointer jumping.
**Lemma 2.** Relabeling an edge list $I$ by a forest $F$ can be performed in $O(\text{sort}(|I|) + \text{sort}(|F|))$ I/Os in the FIO model.
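A sketch of relabeling (ours): in memory, the two sort-and-tandem-scan passes of the procedure above collapse to a dictionary lookup applied once per endpoint, but the two-pass structure is kept for fidelity:

```python
def relabel(F, I):
    """F: forest as (parent, child) pairs; I: list of (u, v) edges.
    Each endpoint is replaced by its parent in F, if it has one."""
    parent = {v: p for p, v in F}                # stands in for sorting F by child
    step1 = [(u, parent.get(v, v)) for u, v in sorted(I, key=lambda e: e[1])]
    return [(parent.get(u, u), v) for u, v in sorted(step1)]

F = [('r', 'a'), ('r', 'b'), ('s', 'c')]
print(relabel(F, [('a', 'b'), ('b', 'c'), ('c', 'd')]))
# [('r', 'r'), ('r', 's'), ('s', 'd')]
```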
**Contraction.** Given an edge list $I$ and a list $C = \{C_1, C_2, \ldots\}$ of delineated components, we can contract each component $C_i$ in $I$ into a supervertex by constructing and applying an appropriate relabeling to $I$.
1. For each $C_i = \{\{u_1, v_1\}, \ldots\}$:
(a) $R_i \leftarrow \emptyset$.
(b) Pick $u_1$ to be the canonical vertex.
(c) For each $\{x, y\} \in C_i$, add $(u_1, x)$ and $(u_1, y)$ to relabeling $R_i$.
2. Apply relabeling $\bigcup_i R_i$ to $I$, yielding the contracted edge list $I'$.
**Lemma 3.** Contracting an edge list $I$ by a list of delineated components $C = \{C_1, C_2, \ldots\}$ can be performed in $O(\text{sort}(|I|) + \text{sort}(\sum_i |C_i|))$ I/Os in the FIO model.
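A sketch of contraction (ours), building the relabeling from the delineated components and then applying it; in the external setting the final step would invoke the relabeling procedure of Lemma 2 rather than a dictionary:

```python
def contract(I, components):
    """components: list of delineated component edge lists C_i; contract
    each C_i in edge list I into its canonical vertex u_1."""
    parent = {}
    for Ci in components:
        canon = Ci[0][0]                     # step 1b: pick u1 as canonical
        for x, y in Ci:                      # step 1c: record (u1, x), (u1, y)
            parent[x] = canon
            parent[y] = canon
    # Step 2: apply the union of the relabelings to I.
    return [(parent.get(u, u), parent.get(v, v)) for u, v in I]

print(contract([(3, 5), (4, 6), (1, 4)], [[(1, 2), (2, 3)], [(5, 6)]]))
# [(1, 5), (4, 5), (1, 4)]
```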
**Deletion.** Given edge lists $I$ and $D$, it is straightforward to construct $I' = I \setminus D$: simply sort $I$ and $D$ lexicographically, and process them in tandem to construct $I'$ from the edges in $I$ but not $D$. If $D$ is a vertex list, we can similarly construct $I'' = \{ \{u, v\} \in I \mid u, v \not\in D \}$, which we also denote by $I \setminus D$.
**Lemma 4.** Deleting a vertex or edge set $D$ from an edge set $I$ can be performed in $O(sort(|I|) + sort(|D|))$ I/Os in the FIO model.
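The tandem-scan deletion, sketched in memory (ours):

```python
def delete_edges(I, D):
    """Return the edges of I not in D, via lexicographic sorting and a
    tandem scan of the two sorted lists."""
    I, D = sorted(I), sorted(D)
    out, j = [], 0
    for e in I:
        while j < len(D) and D[j] < e:       # advance D to the candidate match
            j += 1
        if j == len(D) or D[j] != e:         # keep e only if absent from D
            out.append(e)
    return out

print(delete_edges([(1, 2), (2, 3), (3, 4)], [(2, 3)]))  # [(1, 2), (3, 4)]
```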
## 4 Applying the Techniques
In this section, we devise efficient, functional external algorithms for computing connected components, MSTs, BMSTs, and maximal matchings of undirected graphs. Each of our algorithms needs $O(sort(E) \log_2 \frac{V}{M})$ I/Os. Our algorithms extend to multigraphs, by sorting the edges and removing duplicates in a preprocessing pass.
### 4.1 Deterministic Algorithms
**Connected Components.**
**Theorem 1.** The delineated edge list of components of a graph can be computed in the FIO model in $O(sort(E) \log_2 \frac{V}{M})$ I/Os.
**Proof (Sketch).** Let $T(E)$ be the number of I/Os required to compute $f(G)$, the forest of rooted stars corresponding to the connected components of $G$, presented as a delineated edge list. Recall from Section 3 the algorithm for computing $f(G)$. We can easily select half the edges in $E$ in $scan(E)$ I/Os. Contraction takes $O(sort(E))$ I/Os, by Lemma 3. We use the forest $f(G')$ to relabel the forest $f(G_1)$. Combining the two forests and sorting the result by target vertex then yields the desired result. Thus, $T(E) \leq O(sort(E)) + 2T(E/2)$. We stop the recursion when a subproblem fits in internal memory, so $T(E) = O(sort(E) \log_2 \frac{V}{M})$.
Given $f(G)$, we can label each edge in $E$ with its component in $O(sort(E))$ I/Os, by sorting $E$ (by, say, first vertex) and $f(G)$ (by source vertex) and processing them in tandem to assign component labels to the edges. This creates a new, labeled edge list, $E''$. We then sort $E''$ by label, creating the desired output $E'$.
**Minimum Spanning Trees.** We use our approach to design a top-down variant of Borůvka’s MST algorithm [4]. Let $G = (V, E)$ be a weighted, undirected graph, and let $f(G)$ be the delineated list of edges in a minimum spanning forest (MSF) of $G$.
1. Let $m = \text{Select}(E, |E|/2)$ be the edge of median weight.
2. Let $S(G) \subset E$ be the edges of weight less than that of $m$, plus enough edges of weight equal to that of $m$ so that $|S(G)| = |E|/2$.
3. Let $G_2$ be the contraction $G/f(S(G))$.
4. Then $f(G) = f(S(G)) \cup R_2(f(G_2))$, presented as a delineated edge list, where $R_2(\cdot)$ re-expands edges in $G_2$ that are incident on supervertices created by the contraction $G/f(S(G))$ to be incident on their original endpoints in $S(G)$.
**Theorem 2.** A minimum spanning forest of a graph can be computed in the FIO model in $O(sort(E) \log_2 \frac{V}{M})$ I/Os.
**Proof (Sketch).** Correctness follows from the analysis of the standard greedy approach. The I/O analysis uses the same recursion as in the proof of Theorem 1. Selection incurs $O(scan(E))$ I/Os, by Lemma 1. The input to $f(\cdot)$ is an annotated edge list: each edge has a weight and a label. Contraction incurs $O(sort(E))$ I/Os, by Lemma 3, and installs the corresponding original edge as the label of the contracted edge. The labels allow us to “re-expand” contracted edges by a reverse relabeling. To produce the final edge list, we apply a procedure similar to that used to delineate connected components.
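A minimal in-memory sketch of the recursion (ours, assuming distinct edge weights so that a contracted edge can be identified by its weight-and-endpoints triple; a full sort stands in for Select plus partitioning, and union-find passes stand in for the base case and the component computation):

```python
THRESHOLD = 4  # stands in for "subproblem fits in main memory"

def msf(edges):
    """edges: list of (weight, u, v) with distinct weights.
    Returns the edge list of a minimum spanning forest."""
    if len(edges) <= THRESHOLD:
        return _kruskal(edges)
    edges = sorted(edges)                    # stands in for Select + partition
    S, rest = edges[: len(edges) // 2], edges[len(edges) // 2 :]
    F1 = msf(S)                              # MSF of the cheap half: f(S(G))
    root = _components(F1)                   # v -> supervertex s(v)
    G2, label = [], {}
    for w, u, v in rest:                     # contract, carrying edge labels
        ru, rv = root.get(u, u), root.get(v, v)
        if ru != rv:
            G2.append((w, ru, rv))
            label[(w, ru, rv)] = (u, v)      # remember original endpoints
    F2 = msf(G2)
    return F1 + [(w, *label[(w, a, b)]) for w, a, b in F2]  # re-expand: R2

def _find(parent, x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]        # path halving
        x = parent[x]
    return x

def _kruskal(edges):
    parent, out = {}, []
    for w, u, v in sorted(edges):
        if _find(parent, u) != _find(parent, v):
            parent[_find(parent, u)] = _find(parent, v)
            out.append((w, u, v))
    return out

def _components(forest):
    parent = {}
    for _, u, v in forest:
        parent[_find(parent, u)] = _find(parent, v)
    return {v: _find(parent, v) for v in list(parent)}

E = [(1, 1, 2), (2, 2, 3), (3, 3, 4), (4, 4, 1), (5, 1, 3), (6, 5, 6)]
assert sorted(w for w, _, _ in msf(E)) == [1, 2, 3, 6]
```

Since every edge of $E\setminus S(G)$ is at least as heavy as every edge of $S(G)$, each edge of $f(S(G))$ is the cheapest edge across some cut of the whole graph, which is why the cheap half's forest can be kept and contracted.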
Because our contraction procedure requires a list of delineated components, we cannot implement the classical, bottom-up Borůvka MST algorithm [4], in which each vertex selects the incident edge of minimum weight, the selected edges are contracted, and the process repeats until one supervertex remains. An efficient procedure to contract an arbitrary edge list could thus be used to construct a faster external MST algorithm.
**Bottleneck Minimum Spanning Trees.** Given a graph $G = (V, E)$, a bottleneck minimum spanning tree (BMST, or forest, BMSF) is a spanning tree (or forest) of $G$ that minimizes the maximum weight of an edge. Camerini [5] shows how to compute a BMST of an undirected graph in $O(E)$ time, using a recursive procedure similar to that for MSTs.
1. Let $S(G)$ be the lower-weighted half of the edges of $E$.
2. If $S(G)$ spans $G$, then compute a BMST of $S(G)$.
3. Otherwise, contract $S(G)$, and compute a BMST of the remaining graph.
We design a functional external variant of Camerini’s algorithm analogously to the MST algorithm above. In the BMST algorithm, $f(G)$ returns a BMSF of $G$ and a bit indicating whether or not it is connected (a BMST). If $f(S(G))$ is a tree, then $f(G) = f(S(G))$; otherwise, we contract $f(S(G))$ and recurse on the upper-weighted half.
**Theorem 3.** A bottleneck minimum spanning tree can be computed in the FIO model in $O(sort(E) \log_2 \frac{V}{M})$ I/Os.
Whether BMSTs can be computed externally more efficiently than MSTs is an open problem. If we could determine whether or not a subset $E' \subseteq E$ spans a graph in $g(E')$ I/Os, then we can use that procedure to limit the recursion to one half of the edges of $E$, as in the classical BMST algorithm. This would reduce the I/O complexity of finding a BMST to $O(g(E) + sort(E))$ ($sort(E)$ to perform the contraction).
**Maximal Matching.** Let $f(G)$ be a maximal matching of a graph $G = (V, E)$, and let $V(f(G))$ be the vertices matched by $f(G)$. We find $f(G)$ functionally as follows.
1. Let $S(G)$ be any half of the edges of $E$.
2. Let $G_2 = E \setminus V(f(S(G)))$.
3. Then $f(G) = f(S(G)) \cup f(G_2)$.
**Theorem 4.** A maximal matching of a graph can be computed in the FIO model in $O(sort(E) \log_2 \frac{V}{M})$ I/Os.
**Proof (Sketch).** Selecting half the edges takes $scan(E)$ I/Os. Deletion takes $O(sort(E))$ I/Os, by Lemma 4. Deleting all edges incident on matched vertices must remove all edges in $S(G)$ from $G$, by the assumption of maximality. Hence $|E(G_2)| \leq |E|/2$.
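A minimal in-memory sketch of the recursion (ours); a greedy pass stands in for the base case "fits in main memory":

```python
THRESHOLD = 4  # stands in for "fits in main memory"

def maximal_matching(edges):
    if len(edges) <= THRESHOLD:
        return _greedy_matching(edges)
    M1 = maximal_matching(edges[: len(edges) // 2])  # match half of E
    matched = {x for e in M1 for x in e}             # V(f(S(G)))
    G2 = [(u, v) for u, v in edges[len(edges) // 2 :]
          if u not in matched and v not in matched]  # E minus V(f(S(G)))
    return M1 + maximal_matching(G2)

def _greedy_matching(edges):
    matched, M = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            M.append((u, v))
    return M

print(maximal_matching([(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]))
# [(1, 2), (3, 4), (5, 6)]
```

As the proof observes, maximality of the half-matching guarantees every edge of the first half has a matched endpoint, so only second-half edges can survive the deletion.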
Finding a maximum matching externally (efficiently) remains an open problem.
### 4.2 Randomized Algorithms
Using the random vertex selection technique of Karger, Klein, and Tarjan [12], we can reduce by a constant fraction the number of vertices as well as edges at each step. This leads to randomized functional algorithms for connected components, MSTs, and BMSTs that incur $O(sort(E))$ I/Os with high probability. We can also implement a functional external version of Luby’s randomized maximal independent set algorithm [17] that uses $O(sort(E))$ I/Os with high probability, yielding the same bounds for maximal matching.
Similar approaches were suggested by Chiang et al. [8] and Mehlhorn [18]. We leave details of these randomized algorithms to the full paper.
### 5 Semi-External Problems
We now consider *semi-external* graph problems, when $V \leq M$ but $E > M$. These cases often have practical applications, e.g., in graphs induced by monitoring long-term traffic patterns among relatively few nodes in a network. The ability to maintain information about the vertices in memory often simplifies the problems.
For example, the semi-external MST problem can be solved with $O(sort(E))$ I/Os: we simply scan the sorted edge list, using a disjoint set union (DSU) data structure [21] in memory to maintain the forest. We can even solve the problem in $scan(E)$ I/Os, using dynamic trees [20] to maintain the forest. For each edge, we delete the maximum weight edge on any cycle created. The total internal computation time becomes $O(E \log V)$.
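A sketch of the DSU-based semi-external MSF (ours, assuming the edge list has already been sorted externally by weight; only the union-find state, of size $O(V)$, is held in memory):

```python
def semi_external_msf(sorted_edges):
    """sorted_edges yields (weight, u, v) in nondecreasing weight order,
    e.g. streamed from an external sort."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    for w, u, v in sorted_edges:             # one scan(E) pass
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            yield (w, u, v)                  # stream MSF edges back to disk

edges = sorted([(4, 1, 3), (1, 1, 2), (2, 2, 3), (3, 4, 5)])
print(list(semi_external_msf(iter(edges))))
# [(1, 1, 2), (2, 2, 3), (3, 4, 5)]
```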
Semi-external BMSTs are similarly simplified, because we can check internally if an edge subset spans a graph. Semi-external maximal matching is possible in one edge-list scan, simply by maintaining a matching internally.
If $V \leq M$, we can compute the forest of rooted stars corresponding to the connected components of a graph in one scan, using DSU to maintain a forest internally. We can label the edges by their components in another scan, and we can then sort the edge list to arrange edges contiguously by component. The sorting bound is pessimistic for computing connected components, however, if the number of components is small. Below we give an algorithm to group $N$ records with $K$ distinct keys, so that records with distinct keys appear contiguously, in $O(scan(N) \log_b K)$ I/Os. Therefore, if $V \leq M$ we can compute connected components on a graph $G = (V, E)$ in $O(scan(E) \log_b C(G))$ I/Os, where $C(G) \leq V$ is the number of components of $G$. In many applications, $C(G) \ll V$, so this approach significantly improves upon sorting.
5.1 Grouping
Let \( \mathcal{I} = (x_1, \ldots, x_N) \) be a list of \( N \) records. We use \( x_i \) to mean the key of record \( x_i \) as well as the record itself. The grouping problem is to permute \( \mathcal{I} \) into a new list \( \mathcal{I}' = (x_{\pi_1}, \ldots, x_{\pi_N}) \), such that \( x_{\pi_i} = x_{\pi_j}, i < j \implies x_{\pi_i} = x_{\pi_{i+1}} \); i.e., all records with equal keys appear contiguously in \( \mathcal{I}' \). Grouping differs from sorting in that the order among elements of different keys is arbitrary.
We first assume that the keys of the elements of \( \mathcal{I} \) are integers in the range \([1, G]\); we can scan \( \mathcal{I} \) once to find \( G \), if necessary. Our grouping algorithm, which we call quickscan, recursively re-partitions the elements in \( \mathcal{I} \) as follows.
The first pass permutes \( \mathcal{I} \) so that elements in the first \( G/b \) groups appear contiguously, elements in the second \( G/b \) groups appear contiguously, etc. The second pass refines the permutation so that elements in the first \( G/b^2 \) groups appear contiguously, elements in the second \( G/b^2 \) groups appear contiguously, etc. In general, the output of the \( k \)th pass is a permutation \( \mathcal{I}_k = (x_{\pi_1^k}, \ldots, x_{\pi_N^k}) \), such that
\[
\left\lfloor \frac{x_{\pi_i^k}}{\lceil G/b^k \rceil} \right\rfloor = \left\lfloor \frac{x_{\pi_j^k}}{\lceil G/b^k \rceil} \right\rfloor, \quad i < j \implies \left\lfloor \frac{x_{\pi_i^k}}{\lceil G/b^k \rceil} \right\rfloor = \left\lfloor \frac{x_{\pi_{i+1}^k}}{\lceil G/b^k \rceil} \right\rfloor.
\]
Let \( \Delta_k = \lceil G/b^k \rceil \). Then keys in the range \([1 + j \Delta_k, 1 + (j + 1) \Delta_k - 1]\) appear contiguously in \( \mathcal{I}_k \), for all \( k \) and \( 0 \leq j < b^k \). \( \mathcal{I}_0 = \mathcal{I} \) is the initial input, and \( \mathcal{I}' = \mathcal{I}_k \) is the desired final output when \( k = \lceil \log_b G \rceil \), bounding the number of passes.
Refining a Partition. We first describe a procedure that permutes a list \( \mathcal{L} \) of records with keys in a given integral range \([K, K']\). Let \( \delta = \left\lfloor \frac{K' - K + 1}{b} \right\rfloor \). The procedure produces a permutation \( \mathcal{L}' \) such that records with keys in the range \([K + j \delta, K + (j + 1) \delta - 1]\) occur contiguously, for \( 0 \leq j < b \). Initially, each memory block is empty, and a pointer \( P \) points to the first disk block available for the output. As blocks of \( \mathcal{L} \) are scanned, each record \( x \) is assigned to memory block \( m = \left\lfloor \frac{x - K}{\delta} \right\rfloor \). Memory block \( m \) will thus be assigned keys in the range \([K + m \delta, K + (m + 1) \delta - 1]\). When block \( m \) becomes full, it is output to disk block \( P \), and \( P \) is updated to point to the next empty disk block.
Since we do not know where the boundaries will be between groups of contiguous records, we must construct a singly linked list of disk blocks to contain the output. We assume that we can reference a disk block with \( O(1) \) memory cells; each disk block can thus store a pointer to its successor. Additionally, each memory block \( m \) will store pointers to the first and last disk blocks to which it has been written. An output to disk block \( P \) thus requires one disk read and two disk writes: to find and update the pointers in the disk block preceding \( P \) in \( \mathcal{L}' \) and to write \( P \) itself.
After processing the last record from \( \mathcal{L} \), each of the \( b \) memory blocks is empty or partially full. Let \( M_1 \) be the subset of memory blocks, each of which was never filled and written to disk; let \( M_2 \) be the remaining memory blocks. We compact all the records in blocks in \( M_1 \) into full memory blocks, leaving at most one memory block, \( m_0 \), partially filled. We then write these compacted blocks (including \( m_0 \)) to \( \mathcal{L}' \).
Let \( \{m_1, \ldots, m_\ell\} = M_2 \). We wish to write the remaining records in \( M_2 \) to disk so that at completion, there will be at most one partial block in \( \mathcal{L}' \). Let \( L_i \) be the last disk block written from \( m_i \); among all the \( L_i \)'s we will maintain the invariant that there
is at most one partial block. At the beginning, $L_0$ can be the only partial block. When considering each $m_i$ in turn, for $1 \leq i \leq \ell$, only $L_{i-1}$ may be partial.
We combine in turn the records in $L_{i-1}$ with those in $m_i$, and possibly some from $L_i$ to *percolate* the partial block from $L_{i-1}$ to $L_i$. Let $|X|$ be the number of records stored in a block $X$. If $n_i = |L_{i-1}| + |m_i| \geq B$, we add $B - |L_{i-1}|$ records from $m_i$ to $L_{i-1}$, and we store the remaining $|m_i| - B + |L_{i-1}|$ records from $m_i$ in the new partial block, $L'_i$, which we insert into $L'$ after $L_i$; we then set $L_i \leftarrow L'_i$. Otherwise, $n_i < B$, and we add all the records in $m_i$ as well as move $B - n_i$ records from $L_i$ to $L_{i-1}$; this again causes $L_i$ to become the partial block.
**Quickscan.** We now describe phase $i$ of quickscan, given input $\mathcal{I}_{i-1}$, $\Delta_{i-1} = \lceil G/b^{i-1} \rceil$, and keys in the range $[1 + j\Delta_{i-1}, 1 + (j+1)\Delta_{i-1} - 1]$ appear contiguously in $\mathcal{I}_{i-1}$, for $0 \leq j < b^{i-1}$. While scanning $\mathcal{I}_{i-1}$, therefore, we always know the range of keys in the current region, so we can iteratively apply the refinement procedure.
We maintain a value $S$, which denotes the lowest key in the current range. Initially, $S = \infty$; the first record scanned will assign a proper value to $S$. We scan $\mathcal{I}_{i-1}$, considering each record $x$ in each block in turn. If $x \not\in [S, S + \Delta_{i-1} - 1]$, then $x$ is the first record in a new group of keys, each of which is in the integral range $[S', S' + \Delta_{i-1} - 1]$, for $S' = 1 + \left\lfloor \frac{S-1}{\Delta_{i-1}} \right\rfloor \cdot \Delta_{i-1}$. Furthermore, all keys within this range appear contiguously in $\mathcal{I}_{i-1}$. The record read previously to $x$ was therefore the last record in the old range, and so we finish the refinement procedure underway and start a new one to refine the records in the new range, $[S', S' + \Delta_{i-1} - 1]$, starting with $x$.
We use percolation to combine partial blocks remaining from successive refinements. The space usage of the algorithm is thus optimal: quickscan can be implemented using $2 \cdot \text{scan}(N)$ extra blocks, to store the input and output of each phase in an alternating fashion. A non-functional quickscan can be implemented in place, because the $i$th block output is not written to disk until after the $i$th block in the input is scanned.
In either case, grouping $N$ items with keys in the integral range $[1, G]$ takes $O(\text{scan}(N) \log_b G)$ I/Os. Note that grouping solves the *proximate neighbors* problem [8]: given $N$ records on $N/2$ distinct keys, such that each key is assigned to exactly two records, permute the records so that records with identical keys reside in the same disk block. Chiang et al. [8] show a lower bound of $\Omega(\min\{N, \text{sort}(N)\})$ I/Os to solve proximate neighbors in the single-disk I/O model. Our grouping result does not violate this bound, since in the proximate neighbors problem, $G = N/2$.
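A simplified in-memory model of quickscan grouping (ours): block percolation and the linked-list disk layout are omitted, and each bucket is an ordinary list, but the pass structure, the $b$-way refinement of key ranges, and the $\lceil \log_b G \rceil$ pass bound are as described above:

```python
from math import ceil

def quickscan_group(items, G, b):
    """Group items with integer keys in [1, G]: equal keys end up
    contiguous, but the order between groups is arbitrary."""
    runs = [(items, 1, G)]                   # (records, low key, high key)
    while any(hi > lo for _, lo, hi in runs):
        next_runs = []
        for records, lo, hi in runs:
            if lo == hi:                     # single key: already grouped
                next_runs.append((records, lo, hi))
                continue
            delta = ceil((hi - lo + 1) / b)  # refine range into b subranges
            buckets = [[] for _ in range(b)]
            for x in records:                # one sequential sweep
                buckets[(x - lo) // delta].append(x)
            for j, bkt in enumerate(buckets):
                if bkt:
                    next_runs.append((bkt, lo + j * delta,
                                      min(hi, lo + (j + 1) * delta - 1)))
        runs = next_runs
    return [x for records, _, _ in runs for x in records]

print(quickscan_group([7, 1, 7, 3, 1, 9, 3, 9, 1], G=9, b=2))
# [1, 1, 1, 3, 3, 7, 7, 9, 9]
```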
### 5.2 Sorting with Duplicates
Because of the compaction of partially filled blocks at the end of a refinement, quickscan cannot sort the output. By using a constant factor extra space, however, quickscan can produce sorted output. This is *sorting with duplicates*.
Consider the refinement procedure applied to a list $\mathcal{L}$. When the last block of $\mathcal{L}$ has been processed, the $b$ memory blocks are partitioned into sets $M_1$ and $M_2$. The procedure optimizes space by compacting and outputting the records in blocks in $M_1$ and then writing out blocks in $M_2$ in turn, percolating the (at most) one partially filled block on disk. Call the blocks in $M_1$ *short blocks* and the blocks in $M_2$ *long blocks*. The global compaction of the short blocks results in the output being grouped but not sorted.
The short blocks can be partitioned into subsets of $M_1$ that occur contiguously in memory, corresponding to contiguous ranges of keys in the input. We restrict compaction to contiguous subsets of short blocks, which we sort internally. Then we write these subsets into place in the output $\mathcal{L}'$. $\mathcal{L}'$ is thus partitioned into ranges that alternately correspond to long blocks and contiguous subsets of short blocks. Since a long block and the following contiguous subset of short blocks together produce at most two partially filled blocks in $\mathcal{L}'$, the number of partially filled blocks in $\mathcal{L}'$ is bounded by twice the number of ranges produced by long blocks. Each range produced by long blocks contains at least one full block, so the number of blocks required to store $\mathcal{L}'$ is at most $3 \cdot \text{scan}(|\mathcal{L}|)$. The output $\mathcal{L}'$, however, is sorted, not just grouped.
We apply this modified refinement procedure iteratively, as above, to sort the input list $\mathcal{I}$. Each sorted, contiguous region of short blocks produced in phase $i$, however, is skipped in succeeding phases; we need not refine it further. Linking regions of short blocks produced at the boundaries of the outputs of various phases is straightforward, so in general the blocks in any phase alternate between long and short. Therefore, skipping the regions of short blocks does not affect the I/O complexity.
Sorting $N$ records with $G$ distinct keys in the integral range $[1, G]$ thus takes $\Theta(\text{scan}(N) \log_b G)$ I/Os. The algorithm is stable, assuming a stable internal sort.
### 5.3 Grouping with Arbitrary Keys
If the $G$ distinct keys in $\mathcal{I}$ span an arbitrary range, we can implement a randomized quickscan to group them. Let $H$ be a family of universal hash functions [6] that map $\mathcal{I}$ to the integral range $[1, b]$. During each phase $i$ in randomized quickscan, we pick an $h_i$ uniformly at random from $H$ and consider $h_i(x)$ to be the key of $x \in \mathcal{I}$.
Each bucket of records output in phase $i$ is thus refined in phase $i + 1$ by hashing its records into one of $b$ new buckets, using function $h_{i+1}$. Let $\eta$ be the number of distinct keys in some bucket. If $\eta \leq b$, we can group the records in one additional scan. Linking the records in the buckets output in the first phase $T$ in which each bucket has no more than $b$ distinct keys therefore produces the desired grouped output.
Let $\eta'$ be the number of distinct keys hashed into some new bucket from a bucket with $\eta$ distinct keys. The properties of universal hashing [6] show that $E[\eta'] = \eta/b$. Theorem 1.1 of Karp [13] thus shows that $\Pr[T \geq \lceil \log_b G \rceil + c + 1] \leq G/b^{\lceil \log_b G \rceil + c}$ for any positive integer $c$. Therefore, with high probability, $O(\text{scan}(N) \log_b G)$ I/Os suffice to group $\mathcal{I}$. We use global compaction and percolation to optimize space usage.
### 6 Conclusion
Our functional approach produces external graph algorithms that compete with the I/O performance of the best previous algorithms but that are simpler to describe and implement. Our algorithms are conducive to standard checkpointing and programming language optimization tools. An interesting open question is to devise incremental and dynamic algorithms for external graph problems. The data-structural approach of Arge [3] and Kumar and Schwabe [15] holds promise for this area. Designing external graph algorithms that exploit parallel disks also remains open. Treating $P$ disks as one with
a block size of $PB$ extends standard algorithms only when $P = O((M/B)^\alpha)$ for $0 \leq \alpha < 1$. In this case, the $\log_{M/B}$ terms degrade by only a constant factor.
**Acknowledgements.** We thank Ken Church, Kathleen Fisher, David Johnson, Haim Kaplan, David Karger, Kurt Mehlhorn, and Anne Rogers for useful discussions.
**References**
1. A. Aggarwal and J. S. Vitter. The input/output complexity of sorting and related problems. *C. ACM*, 31(8):1116–27, 1988.
2. A. V. Aho, J. E. Hopcroft, and J. D. Ullman. *The Design and Analysis of Computer Algorithms*. Addison-Wesley, 1974.
3. L. Arge. The buffer tree: A new technique for optimal I/O algorithms. In *Proc. 4th WADS*, volume 955 of *LNCS*, pages 334–45. Springer-Verlag, 1995.
4. O. Borůvka. O jistém problému minimálním. *Práce Mor. Přírodověd. Spol. v Brně*, 3:37–58, 1926.
5. P. M. Camerini. The min-max spanning tree problem and some extensions. *IPL*, 7:10–4, 1978.
6. J. L. Carter and M. N. Wegman. Universal classes of hash functions. *JCSS*, 18:143–54, 1979.
7. Y.-J. Chiang. *Dynamic and I/O-Efficient Algorithms for Computational Geometry and Graph Problems: Theoretical and Experimental Results*. PhD thesis, Dept. of Comp. Sci., Brown Univ., 1995.
8. Y.-J. Chiang, M. T. Goodrich, E. F. Grove, R. Tamassia, D. E. Vengroff, and J. S. Vitter. External-memory graph algorithms. In *Proc. 6th ACM-SIAM SODA*, pages 139–49, 1995.
9. F. Y. Chin, J. Lam, and I.-N. Chen. Efficient parallel algorithms for some graph problems. *C. ACM*, 25(9):659–65, 1982.
10. J. R. Driscoll, N. Sarnak, D. D. Sleator, and R. E. Tarjan. Making data structures persistent. *JCSS*, 38(1):86–124, February 1989.
11. J. JaJa. *An Introduction to Parallel Algorithms*. Addison-Wesley, 1992.
12. D. R. Karger, P. N. Klein, and R. E. Tarjan. A randomized linear-time algorithm to find minimum spanning trees. *J. ACM*, 42(2):321–28, 1995.
13. R. M. Karp. Probabilistic recurrence relations. *J. ACM*, 41(6):1136–50, 1994.
14. P. Kelsen. An optimal parallel algorithm for maximal matching. *IPL*, 52(4):223–8, 1994.
15. V. Kumar and E. J. Schwabe. Improved algorithms and data structures for solving graph problems in external memory. In *Proc. 8th IEEE SPDP*, pages 169–76, 1996.
16. M. J. Litzkow and M. Livny. Making workstations a friendly environment for batch jobs. In *Proc. 3rd Wks. on Work. Oper. Sys.*, pages 62–7, April 1992.
17. M. Luby. A simple parallel algorithm for the maximal independent set problem. *SIAM J. Comp.*, 15:1036–53, 1986.
18. K. Mehlhorn. Personal communication. http://www.mpi-sb.mpg.de/~crauser/courses.html, 1998.
19. J. S. Plank, M. Beck, G. Kingsley, and K. Li. **Libckpt**: Transparent checkpointing under UNIX. In *Proc. USENIX Winter 1995 Tech. Conf.*, pages 213–23, 1995.
20. D. D. Sleator and R. E. Tarjan. A data structure for dynamic trees. *JCSS*, 26(3):362–91, 1983.
21. R. E. Tarjan and J. van Leeuwen. Worst-case analysis of set union algorithms. *J. ACM*, 31(2):245–81, 1984.
22. P. Wadler. Deforestation: Transforming programs to eliminate trees. *Theor. Comp. Sci.*, 73:231–48, 1990.
Impact of Phlebotomine Sand Flies on U.S. Military Operations at Tallil Air Base, Iraq: 3. Evaluation of Surveillance Devices for the Collection of Adult Sand Flies
DOUGLAS A. BURKETT,1 RONALD KNIGHT,2 JAMES A. DENNETT,3 VAN SHERWOOD,4 EDGAR ROWTON,5 AND RUSSELL E. COLEMAN5,6
J. Med. Entomol. 44(2): 381–384 (2007)
ABSTRACT We evaluated the effectiveness of commercially available light traps and sticky traps baited with chemical light sticks for the collection of phlebotomine sand flies. Evaluations were conducted at Tallil Air Base, Iraq, in 2003. In an initial study, a Centers for Disease Control and Prevention (CDC)-style trap with a UV bulb collected significantly more sand flies than did an up-draft CDC trap, a standard down-draft CDC trap (STD-CDC), or a sticky trap with a green chemical light stick. In a subsequent study, we found that the addition of chemical light sticks to sticky traps resulted in a significant increase in the number of sand flies collected compared with sticky traps without light sticks. These data indicate 1) that the CDC light trap with a UV bulb is an effective alternative to the standard CDC light trap for collecting phlebotomine sand flies in Iraq, and 2) that the addition of a chemical light stick to a sticky trap yields a field-expedient tool for the collection of sand flies.
At the start of operation Iraqi Freedom in March 2003, U.S. forces rapidly established operations at Tallil Air Base (TAB), located ≈10 km west of An Nasiriyah in southern Iraq. As part of a surveillance program designed to assess and mitigate the risk of leishmaniasis, Centers for Disease Control and Prevention (CDC)-style light traps were used from April 2003 until October 2004 to monitor sand fly abundance at TAB (Coleman et al. 2006).
Although both light traps and sticky traps have been commonly used for the collection of sand flies (Lane et al. 1988, Mutero et al. 1991, Alexander 2000, Orndorff et al. 2002), few studies have systematically evaluated the efficacy of different types of light traps or the use of light in combination with a sticky trap. Alexander (2000) reported that a potential disadvantage of light traps was that they might preferentially sample females of certain species that are highly phototropic and suggested that light traps had limited value in ecological studies of sand flies. However, Fryauff and Modi (1991) and Davies et al. (1995) determined that light trap collections were comparable to biting collections for *Phlebotomus papatasi* Scopoli in Egypt and sand flies in the Peruvian Andes, respectively, whereas Rioux et al. (1982) reported that adhesive traps provided results similar to human bait.
Because we had already established an ongoing sand fly surveillance program using unbaited CDC light traps, we decided to compare the efficacy of this trap with two commercially available light traps (a CDC trap that used a UV light source and an updraft CDC light trap) and a sticky trap that is routinely available during military deployments. We also decided to determine whether the addition of a light source to a sticky trap would increase the number of sand flies collected. Because chemical light sticks of a variety of different colors are readily available during military deployments, we chose to evaluate six of the most commonly found colors.
Materials and Methods
**Light Trap Evaluation.** Light trap evaluations were conducted from 15 to 25 June 2003. Each trapping period ran from 2100 to 0700 hours (local time) the next day. The four trap types evaluated included 1) a standard CDC-style downdraft light trap (model 1012, John W. Hock, Gainesville, FL), 2) a CDC-style light trap using a UV bulb (model 1312, John W. Hock), 3) an updraft CDC-style light trap (Trapkit® with updraft lid adapter, American Biophysics Corp., East Greenwich, RI), and 4) a green chemical light stick (Cyalume, Omniglow Corp., West Springfield, MA)
attached to a 20- by 7-cm “cockroach” sticky trap (Fig. 1). Carbon dioxide or other supplemental attractants were not used. Evaluations were conducted in a 2,500-m$^2$ area of desert scrub habitat intermixed with piles of brick and stone building rubble, in an area with no prior vector control or insecticidal activities. Traps were placed ≈0.5 m above the ground and 30 m or more apart. After each trap night, the number of sand flies collected in each trap was determined. Sand flies were placed in vials with 75% ethanol and shipped to the Walter Reed Army Institute of Research (WRAIR) for identification to species. The trap evaluation consisted of two replicates of a 4 by 4 Latin square design. Trap, day, and location effects were evaluated using a three-way analysis of variance (ANOVA) (SAS Institute 1995). Trap data were transformed to $\log_{10}(x + 1)$ before analysis. Multiple comparisons were made using Duncan multiple range test ($\alpha = 0.05$).
**Chemical Light Color Evaluation.** Trials were conducted nightly from 2000 to 0600 hours the next day during the period 18–26 July 2003. Evaluations were conducted in assorted desert scrub and rubble habitats associated with rodent burrows. Six-inch blue, green, yellow, orange, red, and infrared (IR) chemical light sticks that were rated for 12 h were used. The sticky trap/light stick combinations were placed on the ground with the sticky surface and light stick facing up (Fig. 1). Traps were at least 30 m apart. After each trap night, sand flies stuck on each trap were counted. Because of the difficulty of removing the sand flies from the sticky board, the specimens were not sexed or saved for further identification. Controls included a plain sticky trap with no chemical light and a 20 by 10 cm index card coated with a thin layer of castor oil. The trap evaluation consisted of an 8 by 8 Latin square design where trap, day, and location effects were evaluated using a three-way ANOVA (SAS Institute 1995). Trap data were transformed and analyzed as described for the light trap evaluation. Multiple comparisons were made using Duncan multiple range test ($\alpha = 0.05$).
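For readers reproducing this analysis without SAS, the following is a sketch in Python (ours, not the authors' code; the file and column names are illustrative assumptions). Duncan's multiple range test has no statsmodels equivalent, so Tukey's HSD stands in for the pairwise comparisons:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per trap-night; file and column names are hypothetical.
df = pd.read_csv("sandfly_trap_nights.csv")     # columns: trap, day, location, count
df["log_count"] = np.log10(df["count"] + 1)     # log10(x + 1) transform

# Three-way ANOVA with trap, day, and location effects.
model = smf.ols("log_count ~ C(trap) + C(day) + C(location)", data=df).fit()
print(anova_lm(model))

# Pairwise trap comparisons (Tukey HSD in place of Duncan's test).
print(pairwise_tukeyhsd(df["log_count"], df["trap"]))
```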
**Results**
**Light Trap Evaluation.** In total, 966 sand flies (578 females and 388 males) were collected during the eight trap nights. The UV CDC-style (UV-CDC) trap collected significantly more female sand flies $(61.5 \pm 15.3, \text{mean} \pm \text{SEM})$ than all other traps combined $(F = 49.8, P < 0.0001)$ (Table 1). The updraft CDC-style (UD-CDC) trap was the second most effective trap for collecting female sand flies $(7.3 \pm 2.6)$; the standard CDC-style (STD-CDC) trap $(3.3 \pm 1.3)$ and the sticky trap baited with a chemical light stick (ST-CHEM) $(0.3 \pm 0.2)$ captured the fewest female sand flies. There were no significant location $(F = 2.8, P = 0.071)$ or day $(F = 0.91, P = 0.52)$ effects. In total, 184 sand flies collected in the light trap study were identified at least to genus, including 41 *P. papatasi* (22.3%), 30 *Phlebotomus alexandri* Sinton (16.3%), two *Phlebotomus sergenti* Parrot (1.1%), and 111 *Sergentomyia* spp. (60.3%) (Table 2). Because only a few randomly selected sand fly specimens were identified from each trap, no statistical analyses were conducted on the individual sand fly species captured with the specific traps. However, the UV-CDC and UD-CDC traps seemed to be the most effective for collecting *P. alexandri* and *P. papatasi*, respectively (Table 2). The UD-CDC trap caught a smaller proportion of *Sergentomyia* spp. than did either the UV-CDC or the STD-CDC traps.
**Colored Chemical Light Sticks.** In total, 434 sand flies were collected on the sticky traps during the 8-d collection period. Males and females were not differentiated or identified. Arithmetic means, standard errors, and significant differences for combined male and female sand flies are shown in Fig. 2. All colors of chemical light sticks in the visible spectrum captured significantly more sand flies than did the infrared-baited sticky traps, the sticky traps without light, and the index cards coated with castor oil $(F = 3.36, P < 0.006)$. No color of light stick captured significantly greater numbers of sand flies than did any other color in the visible range. There were significant day $(F = 3.17, P < 0.009)$ and location $(F = 8.46, P < 0.006)$ effects.
**Table 1. Collection of sand flies by using various commercially available CDC-style light traps and a sticky trap baited with a chemical light stick**

Entries are the mean (SEM) no. of sand flies collected per trap-night.$^a$

| Species | Down-draft UV | Up-draft CDC | Down-draft CDC | Chemlight sticky trap | $P$ value |
|---------|---------------|--------------|----------------|-----------------------|-----------|
| Female | 61.5 (15.3)a | 7.3 (2.6)b | 3.3 (1.3)b | 0.3 (0.2)c | 0.0001 |
| Male | 41.0 (11.0)a | 4.4 (1.5)b | 2.5 (0.6)b | 0.3 (0.2)c | 0.0001 |
| Total | 102.9 (26.9)a | 11.6 (3.6)b | 5.8 (1.9)b | 0.5 (0.3)c | 0.0001 |

$^a$ Arithmetic means within each row having the same letter are not significantly different ($n = 8$ nights; $\alpha = 0.05$).
**Table 2. Species of phlebotomine sand flies collected in various types of trap**

Entries are the no. of sand flies collected (% of trap total).

| Trap | *P. papatasi* | *P. alexandri* | *P. sergenti* | *Sergentomyia* spp. | Total |
|------|---------------|----------------|---------------|---------------------|-------|
| UV-CDC | 13 (13.4) | 24 (24.7) | 0 (0.0) | 60 (61.9) | 97 |
| STD-CDC | 5 (13.2) | 4 (10.5) | 1 (2.6) | 28 (73.7) | 38 |
| UD-CDC | 23 (46.9) | 2 (4.1) | 1 (2.0) | 23 (46.9) | 49 |
| Total | 41 (22.2) | 30 (16.3) | 2 (1.1) | 111 (60.3) | 184 |
Discussion
As of November 2004, >1,100 cases of cutaneous leishmaniasis had been confirmed in U.S. military personnel deployed to Iraq and Afghanistan (Lay 2004). Because of the ongoing threat of leishmaniasis to military, humanitarian, and other operations being conducted in the Middle East, it is critical that preventive medicine and vector control personnel maximize their ability to monitor sand fly populations, evaluate control efforts, identify disease/vector foci, and suppress vector populations to mitigate the threat of leishmaniasis to deployed personnel. The identification of the most effective methods of collecting phlebotomine sand flies is key to this effort.
In this study, the overall number of medically important phlebotomine sand flies and the species composition of flies collected differed significantly among trap designs, ranging from a nightly mean of 102.9 ± 26.9 for the UV-CDC trap to 0.5 ± 0.3 for the chemical light-stick baited trap. The UV-CDC trap was clearly the most effective trap for collecting all species of phlebotomines, especially \( P.\ alexandri \), the primary suspected vector of visceral leishmaniasis in Iraq. However, large numbers of nontarget insects greatly extended the processing time of nightly collections. Surprisingly, the literature does not reference studies that have used UV light traps for the collection of sand flies in the Middle East or North Africa.
Because of the difficulty of accurate identification, not all papers evaluating the use of surveillance techniques have assessed differences in the collection of individual species of sand flies. The overall species composition collected during our investigation agrees with what others have found in central Iraq, with the most abundant species including three known vectors (\( P.\ papatasi \), \( P.\ alexandri \), and \( P.\ sergenti \)) of parasites causing human leishmaniasis (Al-Azawi and Abul-Hab 1977, Abul-Hab and Al-Hashimi 1988). The various light traps assessed in our study were excellent tools for the collection of \( P.\ papatasi \). Said et al. (1986) and Beavers et al. (2004) also used light traps to collect large numbers of \( P.\ papatasi \) in Egypt. However, Lewis (1971) found that \( P.\ papatasi \) was negatively phototropic in Yemen, and Lane et al. (1988) reported that CDC-style light traps captured significantly fewer sand flies (including \( P.\ papatasi \)) in Jordan than did sticky traps with and without chemical light sticks. Lane et al. (1988) found no significant difference in

**Fig. 2.** Total number of sand flies (mean ± SE) captured using sticky traps baited with various colored chemical light sticks. Three-way ANOVA and a multiple comparison (Duncan multiple range test) were performed after log \((x + 1)\) transformation. Means having the same letter are not significantly different \((n = 8\) nights; \(\alpha = 0.05\)).
the number of sand flies captured on oiled cards with and without light sticks, although the species diversity was greater on the lighted sticky traps. In Kenya, Mutero et al. (1991) found updraft traps were more effective at collecting several species of sand flies near rodent burrows than downdraft traps. Although not statistically significant, the updraft CDC trap evaluated in our study was also more effective at collecting sand flies than the standard downdraft CDC trap.
We found that the UV-CDC trap was the most effective trap evaluated for the collection of large numbers of sand flies. Use of the UV-CDC trap is warranted if the primary objective of a surveillance program is to collect maximum numbers of sand flies to provide the best estimate of field infection rates of a particular pathogen (e.g., *Leishmania* parasites or sand fly fever virus) or to determine whether a particular species of sand fly is present in an area. However, because of the large number of nontarget insects in the UV-CDC trap and the corresponding increase in sample processing time, we feel that the standard CDC style downdraft traps are adequate in the majority of surveillance efforts. Although the updraft CDC trap collected more sand flies than did the downdraft CDC trap, the samples in the updraft trap tended to be in worse condition than those in the downdraft trap (presumably because of repeated contact of dead and moribund specimens with the fan blades at the bottom of the trap). Many other studies have reported that sticky traps baited with chemical light sticks or paper cards coated with oil are effective at collecting sand flies; however, we found that standard CDC light traps were much more effective than the sticky traps (with or without chemical lights) that we evaluated. Future work should focus on evaluating newer trap technologies that incorporate CO$_2$ or other attractants for the collection of phlebotomine sand flies.
**Acknowledgments**
We thank the military personnel who helped make our studies at TAB possible: COL Mark Wiener, LTC JoLynne Raymond, LTC Peter Weina, MAJ Jennifer Caci, CPT Barton Jennings, CPT Christily Silvernale, MSGT Rusty Cushing, MSGT Paul Weller, TSGT Dennis White, TSGT Rusty Pichardo, TSGT Michael Johnson, TSGT Christopher Lang, SSGT Joanne Johnston, SSGT Josh Arthur, SCT Kevin Fisher, SGT Rondell Hadley and A/C Jae Kim. In addition, we thank Wayne and Lara Gilmore at the WRAIR for the identification of sand flies. Funding for this study was provided by the Military Infectious Diseases Research Program and the Deployed War-Fighter Protection Program.
**References Cited**
Abul-Hab, J., and W. Al-Hashimi. 1988. Night man-biting activities of *Phlebotomus papatasi* Scopoli (Diptera, Phlebotomidae) in Suwara, Iraq. Bull. Endem. Dis. (Baghdad) 29: 5–15.
Al-Azawi, B. M., and J. Abul-Hab. 1977. Vector potential of *Phlebotomus papatasi* Scopoli (Diptera, Phlebotomidae) to kala azar in Baghdadh area. Bull. Endem. Dis. (Baghdad) 18: 35–44.
Alexander, B. 2000. Sampling methods for *Phlebotomine* sand flies. Med. Vet. Entomol. 14: 109–122.
Beavers, G. M., H. A. Hamafi, and E. A. Dykstra. 2004. Evaluation of 1-octen-3-ol and carbon dioxide as attractants for *Phlebotomus papatasi* (Diptera: Psychodidae) in southern Egypt. J. Am. Mosq. Control Assoc. 20: 130–133.
Coleman, R. E., D. A. Burkett, J. L. Putnam, V. Sherwood, J. B. Caci, B. T. Jennings, L. P. Hochberg, S. L. Spradling, E. D. Rowton, K. Blount, et al. 2006. Impact of phlebotomine sand flies on U.S. military operations at Tallil Air Base, Iraq. I. Introduction, military situation, and development of a ‘Leishmaniasis Control Program.’ J. Med. Entomol. 43: 601–602.
Davies, C. R., R. L. Page, P. Villaseca, S. Pyke, P. Campos, and A. Llanos-Cuentas. 1995. The relationship between CDC light-trap and human-bait catches of endophagic sandflies (Diptera: Psychodidae) in the Peruvian Andes. Med. Vet. Entomol. 9: 241–248.
Fryauff, D., and G. Modi. 1991. Predictive estimation of sandfly biting density at a focus of cutaneous leishmaniasis in the North Sinai Desert, Egypt. Parassitologia 33 (Suppl.): 245–252.
Lane, R. P., S. Abdel-Hafez, and S. Kanhawi. 1988. The distribution of Phlebotomine sandflies in the principal ecological zones of Jordan. Med. Vet. Entomol. 2: 237–246.
Lay, J. C. 2004. Leishmaniasis among U.S. Armed Forces, January 2003-November 2004. Medical Surveillance Monthly Report. U.S. Army Center for Health Promotion and Preventive Medicine 10: 2–5.
Lewis, D. J. 1971. Phlebotomid sand flies. Bull. W.H.O. 44: 535–551.
Mutero, C. M., J. Mutinga, M. H. Birley, F. A. Amimo, and D. M. Munyini. 1991. Description and performance of an updraft trap for sand flies. Trop. Med. Parasitol. 42: 407–412.
Orndorff, R., M. Maroli, B. Cooper, and S. E. Rankin. 2002. Leishmaniasis in Sicily (Italy): an investigation of the distribution and prevalence of phlebotomine sandflies in Catania Province. Mil. Med. 167: 715–718.
Rioux, J. A., J. Perieres, R. Killick-Kendrick, G. Lanotte, and M. Bailly. 1982. Ecology of leishmaniasis in south France. 17. Sampling of *Phlebotomus* by the method of adhesive traps. Comparison with the technique of capture on human bait (French). Ann. Parasitol. Hum. Comp. 57: 631–635.
Said, E. S., J. C. Beier, B. M. El Sawaf, S. Doha, and E. E. Kordy. 1986. Sand flies (Diptera: Psychodidae) associated with visceral leishmaniasis in El Agamy, Alexandria Governorate, Egypt. II. Field behavior. J. Med. Entomol. 23: 610–615.
SAS Institute. 1995. SAS/STAT user’s manual, version 6.03. SAS Institute, Cary, NC.
Received 12 September 2005; accepted 19 December 2006.
WHEN GOD & GRIEF MEET
TRUE STORIES OF COMFORT & COURAGE
LYNN EIB
Author of When God & Cancer Meet
TYNDALE HOUSE PUBLISHERS, INC.
CAROL STREAM, ILLINOIS
Visit Tyndale’s exciting Web site at www.tyndale.com
*TYNDALE* and Tyndale’s quill logo are registered trademarks of Tyndale House Publishers, Inc.
*When God & Grief Meet: True Stories of Comfort and Courage*
Copyright © 2009 by Lynn Eib. All rights reserved.
Cover photo copyright © by Veer. All rights reserved.
Author photo copyright © 2005 by Steve Lock. All rights reserved.
Designed by Beth Sparkman
Unless otherwise indicated, all Scripture quotations are taken from the *Holy Bible*, New Living Translation, copyright © 1996, 2004, 2007 by Tyndale House Foundation. Used by permission of Tyndale House Publishers, Inc., Carol Stream, Illinois 60188. All rights reserved.
Scripture quotations marked NIV are taken from the HOLY BIBLE, NEW INTERNATIONAL VERSION®. NIV®. Copyright © 1973, 1978, 1984 by International Bible Society. Used by permission of Zondervan. All rights reserved.
Scripture quotations marked *The Message* are taken from *The Message* by Eugene H. Peterson, copyright © 1993, 1994, 1995, 1996, 2000, 2001, 2002. Used by permission of NavPress Publishing Group. All rights reserved.
Scripture quotations marked NKJV are taken from the New King James Version®. Copyright © 1982 by Thomas Nelson, Inc. Used by permission. All rights reserved. *NKJV* is a trademark of Thomas Nelson, Inc.
Scripture quotations marked NASB are taken from the *New American Standard Bible*®, copyright © 1960, 1962, 1963, 1968, 1971, 1972, 1973, 1975, 1977, 1995 by The Lockman Foundation. Used by permission.
Scripture quotations marked AMP are taken from the *Amplified Bible*®, copyright © 1954, 1958, 1962, 1964, 1965, 1987 by The Lockman Foundation. Used by permission.
Scriptures marked CEV are taken from the Contemporary English Version. Copyright © 1995 by American Bible Society. Used by permission.
Scripture quotations marked YLT are taken from Young’s Literal Translation.
---
**Library of Congress Cataloging-in-Publication Data**
Eib, Lynn.
When God & grief meet : true stories of comfort and courage / Lynn Eib.
p. cm.
ISBN-13: 978-1-4143-2174-5 (sc)
ISBN-10: 1-4143-2174-0 (sc)
1. Consolation. 2. Bereavement—Religious aspects—Christianity. 3. Grief—Religious aspects—Christianity. I. Title.
BV4905.3.E35 2009
248.8’66—dc22 2008031034
Printed in the United States of America
“Great writing about grief is all too rare. In this book you will journey with people who have faced incredible losses—the kinds most of us don’t even want to think about: people who have lost loved ones to cancer, heart attacks, car accidents, plane crashes, suicide, and even murder. Yet despite unbelievable grief and distress, each found a God who cared and was present. This is an essential book for all who have suffered a loss—I guess that means everyone.”
**James A. Avery, M.D.**
Medical director, Visiting Nurse Service of New York Hospice Care; assistant clinical professor, Mount Sinai School of Medicine
“Lynn Eib weaves together a magnificent patchwork of grievers’ stories with the golden thread of the powerful testimony of God’s words. If you’re so brokenhearted you don’t know if you’ll survive—if your soul is so parched it feels as if it’s cracking and turning to dust—if you’ve cried to the point that your tear glands feel as if they’ve dried up, don’t miss the soothing and healing balm in this wonderful book. When God and grief meet, true healing can begin.”
**Walt Larimore, M.D.**
Award-winning medical journalist; coauthor of *His Brain, Her Brain*
“When you’re grieving, it helps to spend time with other people who’ve been there—people who understand the very real fears, disappointments, and sorrow that you are going through. In *When God & Grief Meet*, Lynn Eib introduces us to a series of people who not only share their stories of grief but who offer us wise insights and practical ideas for getting through it.”
**Nancy Guthrie**
Author of *Holding On to Hope* and coauthor of *When Your Family’s Lost a Loved One*
“When God & Grief Meet is a poignant collection of heartwarming real-life stories of those who have found the significance of spirituality in searching for guidance through their grief journeys. This book is a valuable resource to address the most challenging questions one experiences following the loss of a loved one. Lynn has, once again, through the stories of others found comforting words to create peace at this most difficult time in life. I wholly endorse this very valuable grieving tool.”
**Judy Lentz, R.N., M.S.N.**
Chief executive officer, Hospice and Palliative Nurses Association
“Grief is hard and lonely work, but in these compelling stories and honest reflections from Lynn Eib, you will find a community of people who understand your sorrow and a God who restores your hope.”
**Harold G. Koenig, M.D.**
Professor of psychiatry and behavioral sciences; associate professor of medicine; codirector, Center for Spirituality, Theology and Health, Duke University Medical Center
“The price of human caring is grief and loss. Whatever one’s present state of loss, Lynn provides the reader with an understanding of the grief process and how the application of an active faith can help one live through troubled times. The gift of this book to someone in need will make all the difference.”
**Roy Smith, M.Div., Ph.D.**
Licensed psychologist; president, Pennsylvania Counseling Services
“Once again Lynn Eib takes readers on a peace-seeking journey with God as the guide. Lynn masterfully takes on the difficult but universally prevalent subject of grief. Her words will resonate with you and comfort you long after you’ve finished the book.”
**Julie K. Silver, M.D.**
Assistant professor, Harvard Medical School; author of *What Helped Get Me Through*
To my husband—
In the words of Joe Wise:
*I'm in love with my God*
*My God's in love with me.*
*And the more I love you*
*the more I know,*
*I'm in love with my God.*
*I'm forever grateful*
*I was part of God's plan*
*to turn your*
*mourning into joy.*
CONTENTS
Acknowledgments
1: Trusting the Magnetic Poles of the Earth
2: Feeling Your World Fall Apart
3: Finding a Friend Who Understands
4: Preserving a Memory No One Can Steal
5: Being Held Up
6: Comprehending the Incomprehensible
7: Surviving the Imperfect Storm
8: Throwing Rocks at God’s Windows
9: Comforting Like No Others
10: Wondering What’s Next
11: Hoping for Heaven
12: Going On before Us
13: Continuing On When It Doesn’t Seem Possible
14: Knowing When to Relax
Grief Books for Adults
Grief Books for Kids
Grief Care Organizations and Resources
ACKNOWLEDGMENTS
Every time I finish writing another book, I am sure it will be my last. And then God puts another book inside my head and I have to write again. I can tell you that without His supernatural touch, I would have had nothing of eternal value to say. If anything in this book comforts your heart, please give Him all the credit.
I also would like to say thank you to:
My prayer partner and dear friend, Elizabeth Hirsh, for praying me through all my writing and expertly editing my manuscript so I look much better to my publisher.
My husband’s prayer partner and my dear friend, Dr. Marc Hirsh, for giving me a job that allows me to share the Lord with so many suffering and grieving people (and for not retiring yet!).
All the members of my Grief Prayer Support Group for allowing me to test some of my book ideas on them and for entrusting me with their grief-storms.
All the grievers in this book for unselfishly sharing their stories with the hope they might encourage others.
My Haitian friends in Christ, Johanne Phanord and Danny Perez, for assisting me with interviewing Elza Phanord and translating her comments.
My cousin retired USAF Major James Perkins for explaining how to fly into the eye of a hurricane (and come out alive).
Therapist Rebecca Rice for reviewing some of my psychology comments and making sure I knew what I was talking about even though I’m not a psychologist.
All the wonderful folks at the Knox Group of Tyndale House, especially: associate publisher Jan Long Harris for first suggesting I consider writing this book, author relations manager Sharon Leavitt for supporting my ministry with her fervent prayers, and editor Kim Miller for expertly improving my manuscript.
And most important, my family for loving me and cheering me on: my husband, Ralph; my daughter Bethany and her husband, Josh; my daughter Danielle Joy; and my daughter Lindsey and her new,
wonderful husband, Frank. (Please note: Now that I have three books, each daughter has had a turn to be mentioned first—whew! Hopefully, my sons-in-law are not as competitive and won’t demand equal time.)
And, of course, my parents, Robert and Gaynor Yoxtheimer, for giving me such a great start in life and for living long enough to see my writing success!
Let’s be honest: I never wanted to write a grief book and you never wanted to need one.
Frankly, I like movies with happy endings, fairy tales where everyone lives happily ever after, and answered prayers for miracle healings. But right now you and I are past all those hopes and dreams. Instead we are faced with harsh reality.
I don’t know your exact circumstances. Perhaps this enemy called Death snuck up and unexpectedly stole away your loved one. Or perhaps you had been expecting its arrival for some time. Either way it was an unwelcome intruder which brought the ending you never wanted to see.
So I do understand that you’d rather not be in the position to need this book. But if you picked it up for yourself, I’m honored you have chosen to take my words along with you on your grief journey. If someone gave you this book, I’m praying you’ll be just curious enough about
what will happen when God meets your grief that you’ll keep reading. And if you’re not quite ready to read yet, that’s okay with me. Just put the book aside (hopefully on the top of your pile!). I believe that sometime in the coming weeks you’ll know you’re ready. I’ll still be here for you then.
It might seem strange for me to say I didn’t want to write this book. After all, I am a journalist, and writing normally gives me great joy. I write and speak mostly on the topic of faith and medicine, drawing on my years of experience as a patient advocate offering emotional and spiritual support to cancer patients and their caregivers. As a longtime cancer survivor myself—I was diagnosed with advanced colon cancer at the age of thirty-six in 1990—I love working in my oncologist’s office encouraging those facing this dreaded disease. It can be a very sad job because more than half our patients die from their cancer. But at least some become survivors, and there’s always a glimmer of hope that even those with dire prognoses might defy the odds.
With grief, there’s no such glimmer. Nothing I write will change the reality of the loss you are mourning—which is why I was reluctant to write this book. But while my words can’t change your past, I believe these true stories from others’ grief-storms will give you comfort in your present and courage for your future.
These stories come from people of all walks of life
who have experienced many kinds of difficult losses. Some have lost loved ones to cancer and heart attacks; others have had their worlds ripped apart by a car accident, a plane crash, a suicide, and even a murder. I have no doubt you’ll find at least one person facing a grief-storm who has feelings very similar to yours.
The focus of the stories is not on how the loved ones died but on how those left behind are finding the strength to continue living without them. My hope is that these stories will help heal your heartache as much as they have mine.
I started feeling especially helpless dealing with grief a few years ago as I watched a march of mourning people come to my office searching for answers, direction, and peace after their loved ones passed away. Many had attended my Cancer Prayer Support Groups with their loved ones and really missed the encouragement those groups offered them. I kept sensing God asking me to start a similar group for grievers, but if you’ve read my other books, you know I’m not always eager to say yes to the hard things God calls me to do. (If you haven’t read my books, let’s just say I tend to think I have things all figured out and can convince the Almighty my way is right!)
Starting a grief group sounded really depressing to me. Granted, starting a cancer support group sounded really depressing to me back in 1991, and it turned out to be an incredible joy, but I was certain this time that a grief group definitely would be depressing.
Yet the march of mourners continued to come through my office door, and I found myself spending more and more time each day offering comfort and consolation. I also was having a harder time dealing with my own grief as the deaths of my patient-friends began to add up. Every week another one would die; sometimes a couple of friends would pass in the same day.
God kept tugging on my heart, and I finally asked my boss, Dr. Marc Hirsh, if it would be okay for me to start a grief group at the office. I could tell he really didn’t see the necessity of such a gathering, but if I wanted to do it, he wouldn’t say no.
So I sent out notes to my grieving friends, inviting them to come to a group meeting at our office. Bringing a bunch of sorrowful souls together in the same room still seemed like a depressing plan—especially because I was powerless to change their painful reality.
But I almost had forgotten that Someone else was going to show up. From the very first grief group, it was obvious to me that God was going to do something special in our midst. Sure, there were plenty of tissues and tear-filled memories, but there also were laughs and comfort-filled words. Instead of being depressed by hearing each other’s stories, we all felt just a little better as we realized we weren’t quite so alone. Instead of drowning in our own self-pity, grievers reached out, as if we were throwing life preservers to one another. And instead of feeling far from God, we began to sense His love was very near.
Now, more than five years after that first meeting, the grief group members enjoy each other so much that we also meet monthly for breakfast and dinner and have gotten together for picnics, shows, and concerts. An evening group has been added for those who can’t come during the day. And my boss thinks facilitating our ministry to grievers is one of the more important things I do in the office and one of the best ways our patients’ families can continue to see God meet their greatest needs.
So my prayer for you as you read these pages is that you’ll feel as if you’ve been to some really good support group meetings. You’ll have to add great snacks and jokes if you want them to be more like our group. (Yes, I said jokes. I start every meeting with them because I have found that grievers usually haven’t had much to smile about and need a safe place to learn to laugh again.)
You can “go” to a support group meeting once a day, once a week, or once a month depending on how quickly you read this book. You’ll know what the right pace is for you. (And if you just can’t put the book down, go ahead and have a marathon meeting—but after you finish you’ll probably want to come back now and then to give the words a chance to really soak in.)
As we walk this grief journey together, I think you’ll discover that many others share your deep feelings. And while I can appreciate the popular psychology that feelings are “neither right nor wrong,” I also know that feelings do not necessarily mirror God’s undeniable truth.
I witnessed this dilemma of strong feelings at odds with facts a few years ago when my husband and I were out on a boat with my boss, Marc, and his wife, Elizabeth.
The four of us had set out for our annual Labor Day weekend cruise on their thirty-two-foot Bayliner, despite rather foul-looking weather. We were headed up the Chesapeake Bay to a scenic, lively marina called Skipjack Cove on the Sassafras River of Maryland’s eastern shore. Elizabeth had checked with her brother who lives right on the Gunpowder River leading into the Chesapeake, and he had assured us the weather reports didn’t look that bad, despite a hurricane that was heading northward up the coast. (We later learned he had accidentally listened to the *wrong* forecast.)
So we took off, knowing that Marc and Elizabeth were seasoned boaters—although the whitecaps on the usually calm river should have been our first clue it wasn’t a good idea.
We had a short two-hour cruise ahead of us, but it wasn’t long before the whitecaps turned into three-foot waves. The wind whipped up, and then the thunder, lightning, and rain came. At first we all laughed and enjoyed the warm rain soaking us as the boat pounded through the waves. But then I stopped laughing, and my stomach started rebelling. Elizabeth handed me a supply of Ziploc bags, which I started filling. The waves were now five feet high and crashing clear over the top of the
boat’s windshield, drenching us. It was nearly impossible for Marc to see out of the rain-splattered windshield, and my husband and Elizabeth were trying to read the navigational charts and look for the numbered buoys, which would keep us in the correct channel away from large shipping vessels, shallow water, and crab pots. We were too far out to turn back toward home, yet not sure we could make it to our planned destination.
And then it got really bad.
Marc announced that according to the boat’s compass we were headed in exactly the wrong direction: south when we should have been heading north.
The rest of us were sure we hadn’t turned around—Elizabeth was especially positive we were still pointing in the right direction. She was convinced she would have noticed if the boat had made an about-face. From past experience, I knew she usually was right whenever the two of them had a disagreement about boating.
The three of us looked at Marc, waiting to see what he would do. (Well, I didn’t look long because I was busy praying there were enough Ziploc bags.)
After a long pause, Marc posed his now-famous question: “Should I trust my wife . . . or the magnetic poles of the earth?”
It wouldn’t have surprised me if he’d gone with Elizabeth’s feelings because she was so adamant about them, but his scientific brain won out and Marc made a 180-degree turn with the boat.
Within a few moments, we sighted buoys, confirming
that we, indeed, had been going in the wrong direction despite all of us “feeling” otherwise.
The storm raging around us had distorted reality, and our feelings had fallen fickle.
The same thing can happen in the storms of grief. We can *feel* as if we are completely alone or without purpose or unable to cope. These are the times we need a compass—something that always will steer us in the right direction. Don’t worry; I’m not suggesting that I’ll be your compass. After half a century of living, I continue to be directionally challenged. (My husband still cringes when he recalls that I once described Spain as being to “the left” of Germany!) Besides, you probably don’t need one more helpful person in your life telling you what you *should* (or *shouldn’t*) be doing.
What I am suggesting is that the God of the universe has a special affinity for brokenhearted people, and His words are the perfect compass for grievers. A magnetic compass always will point you to the North Pole, and God’s Word *always* will point you to His unchanging truths and promises.
*The LORD is close to the brokenhearted; he rescues those whose spirits are crushed.* Psalm 34:18
*He heals the brokenhearted and bandages their wounds.* Psalm 147:3
As our “group” facilitator, it’s not going to be my job to try and solve your problems. I can’t change the reality of your loved one’s death—no one can. But I hope to show or perhaps remind you that a deeper spiritual reality transcends our earthly reality. I’ll do it by pointing to God’s Word as your compass of undeniable truth. If you already think of the Bible as your guide to life, I know you’ll appreciate these tender reminders. But if you’ve not seriously given God’s Word central importance in your life, I hope you’ll give it a try now. You really have nothing to lose and everything to gain.
*I weep with sorrow; encourage me by your word.*
*Psalm 119:28*
*When doubts filled my mind, your comfort gave me renewed hope and cheer.* *Psalm 94:19*
And the truth of that second verse is the reason I decided I would write this book I never wanted to write—because God *can* supernaturally comfort and bring renewed hope and even cheer to those whose minds are filled with doubts and whose hearts are filled with grief.
If you want a book by a psychological expert, you’ll have to find an author with a lot more initials after his or her name than I have. If you want in-depth theological answers to the questions of suffering and dying, you’ll need to locate some of the resources I’ve listed in the back of this book. But if you want someone to
ride with you in your grief-storm and read the compass, then I’m your person. For some reason that only God knows, I believe He has entrusted me with a message for mourners. And as I share with you God’s words to the brokenhearted, I believe you will see that when God and grief meet, His power, peace, and presence are bigger and more real than our uncertainties, sorrow, and loneliness. He is able to be our guiding compass.
*The LORD will guide you continually, giving you water when you are dry and restoring your strength.*
**Isaiah 58:11**
*The LORD says, “I will guide you along the best pathway for your life. I will advise you and watch over you.”*
**Psalm 32:8**
*Your word is a lamp to guide my feet and a light for my path. . . .*
*I have suffered much, O LORD; restore my life again as you promised.*
**Psalm 119:105, 107**
Like Marc as he captained our boat during that stormy trip, it’s your choice whether or not to trust the magnetic poles of the earth.
**TAKE COMFORT:** Grief may distort reality, but there is a deeper spiritual reality that always can be trusted.
GRIEF BOOKS FOR ADULTS
In addition to this book, *When God & Grief Meet* (Tyndale, 2009), I recommend the following resources:
*Confessions of a Grieving Christian* by Zig Ziglar (Broadman & Holman, 2004). Lessons learned by this well-known motivational speaker after the loss of his adult daughter.
*A Decembered Grief* by Harold Ivan Smith (Beacon Hill Press of Kansas City, 1999). Living with loss while others are celebrating.
*Don’t Sing Songs to a Heavy Heart* by Kenneth C. Haugk (Stephen Ministries, 2004). How to relate to those who are suffering.
*Everyday Comfort: Meditations for Seasons of Grief* by Randy Becton (Baker Books, 2006). Thirty daily devotions to help navigate through heartache.
*Experiencing Grief* by H. Norman Wright (B&H Publishing Group, 2004). A short book helping readers deal with the five stages of grief.
*Finding Your Way after the Suicide of Someone You Love* by David B. Biebel and Suzanne L. Foster (Zondervan, 2005). A compassionate and practical guide that addresses the intensely personal issues of suicide for those left behind.
*Forgiving God* by Carla Killough McClafferty (Discovery House Publishers, 2000). Written by a mother who lost her young son and attempts to forgive a loving God who did not answer her prayers for her son.
*Getting to the Other Side of Grief: Overcoming the Loss of a Spouse* by Susan J. Zonnebelt-Smeenge and Robert C. De Vries (Baker Books, 1998). Written by two widowed persons on “overcoming” the loss of a spouse.
*A Gift of Mourning Glories: Restoring Your Life After Loss* by Georgia Shaffer (Vine Books, 2000). An excellent book on restoring your life after all kinds of loss.
*God on the Witness Stand: Questions Christians Ask in Personal Tragedy* by Daniel T. Hans (Baker Publishing Group, 1989). A pastor whose little girl died from a brain tumor answers the questions Christians ask during personal tragedy. (Hans also authored the booklet *When a Child Dies* [Desert Ministries, 1998]).
*Good Grief* by Granger E. Westberg (Augsburg Fortress, 2005). Since its first edition in 1962, this booklet has become a standard resource for people grieving losses. Written by a pioneer in the holistic health movement.
GRIEF BOOKS FOR KIDS
*Heaven for Kids* by Randy Alcorn (Tyndale, 2006). Answers kids will understand based on the book *Heaven*. Written in an easy-to-use Q&A format, the book covers the eternal topics kids wonder about.
*It’s Okay to Cry: A Parent’s Guide to Helping Children through the Losses of Life* by H. Norman Wright (WaterBrook Press, 2004). Practical helps for parents explaining the symptoms of loss and unresolved grief so parents can walk with their children on this journey. Includes help for all losses, not just death.
*Saying Goodbye When You Don’t Want To* by Martha Bolton (Vine Books, 2002). For teens dealing with the death of relatives or friends, as well as other non-death-related grief.
*Someday Heaven* by Larry Libby and Wayne McLoughlin (Zonderkidz, 2001). Provides biblically based answers on a topic that is not always easy to explain to a young child.
*Someone I Loved Died (Please Help Me, God)* by Christine Harder Tangvald (Chariot Victor Publishing, 1988). Includes a faith-parenting guide and helpful, personal activities.
*Tear Soup* by Pat Schwiebert and Chuck DeKlyen (Grief Watch, 1999). This wonderful picture book affirms the bereaved, educates the nonbereaved, and is a building block for children understanding grief. Excellent for adults and children. Also available in CD and video at www.griefwatch.com/tearsoup.
*What Happens When We Die?* by Carolyn Nystrom and Eira Reeves, Mini Book Edition (Moody Press, 1992). A simple yet profound book showing younger children some of the reasons people die and what God has in store for us in Heaven.
GRIEF CARE ORGANIZATIONS AND RESOURCES
The Compassionate Friends. National support group for bereaved parents, siblings, and grandparents who have experienced the death of a child at any age. 877-969-0010. www.compassionatefriends.org
The Dougy Center. National center for grieving children and families. Provides peer support groups for grieving children at no charge. www.dougy.org
GriefNet. An Internet community of persons dealing with grief, death, and major loss at www.griefnet.org, offering dozens of e-mail support groups. Their sister site, http://kidsaid.com, is a safe place for children to share and help each other deal with grief. Visitors can share feelings, show artwork, or meet with peers online.
Grief Recovery. Workbooks, personal workshops, and other resources to aid in the grieving process. www.grief-recovery.com
GriefShare. Seminars and support groups (from a Christian perspective) are led by people who understand what you are going through and want to help. There are thousands of GriefShare grief recovery support groups meeting throughout the United States, Canada, and in more than ten countries. Daily devotions available online, as well as other helpful resources. www.griefshare.org
Grief Watch. Resources for bereaved families and professional caregivers, including those who have lost an infant before or after birth. 503-284-7426. www.griefwatch.com
Outreach of Hope. Resources from a Christian perspective offering guidance and support for those who suffer, including those who have lost a loved one to cancer. Prayer support, online devotionals, and professional resources. 719-481-3528. www.outreachofhope.org
Stephen Ministries. A one-to-one caring ministry by trained lay ministers in local Christian congregations. Devotions, as well as grief resources and training opportunities for Stephen Ministers, are available online. www.stephenministries.org
GWIN, District Judge. This case returns to us for a third time. Most recently, we affirmed the judgment of the district court. *French v. Jones*, 282 F.3d 893 (6th Cir. 2002). Thereafter, the United States Supreme Court granted respondent’s petition for writ of certiorari, vacated our prior judgment, and remanded the case to our Court for further consideration in light of its decision in *Bell v. Cone*, 535 U.S. 685 (2002). *French v. Jones*, 535 U.S. 1109 (2002). Pursuant to the Supreme Court’s order, the case is again before us for our determination.
With this appeal we examine whether the district court wrongly granted a writ of habeas corpus after a Michigan court gave a supplemental instruction to a deadlocked jury. The Michigan trial court gave the supplemental instruction, which did not conform with the approved Michigan instruction, when the defendant’s attorney was not present.
At the first appeal to this Court, we vacated the district court’s order granting habeas relief and remanded the case for an evidentiary hearing. At that hearing, the district court was directed to review the role of Ty Jones, an alleged attorney from California. Ty Jones was present at the time the supplemental instruction was given but it was unclear whether he was licensed to practice law. *French v. Jones*, No. 99-1436, 2000 WL 1033021, at *1–2 (6th Cir. July 18, 2000).
At the ensuing evidentiary hearing, the district court learned Jones was not an attorney and was present only to observe Cornelius Pitts, one of French’s attorneys. After finding that Ty Jones was not an attorney, the district court held that French was denied representation during a critical stage of his trial and granted his petition for a writ of habeas corpus. *French v. Jones*, 114 F. Supp. 2d 638, 643 (E.D. Mich. 2000).
In our previous decision, we concluded that a defendant whose lawyer was not present when the trial judge gave a supplemental instruction to a deadlocked jury is entitled to habeas relief. After carefully reviewing the *Cone* decision, we see no reason to depart from our previous holding. Therefore, finding Petitioner French was denied counsel during a critical stage of his trial, we affirm the district court’s grant of a writ of habeas corpus.
I.
On September 10, 1994, French shot four fellow union officials at the Ford Motor Company Rouge facility in Dearborn, Michigan. After trial to a jury,\(^1\) French was found guilty but mentally ill\(^2\) of one count of first-degree murder, Mich. Comp. Laws § 750.316, one count of second-degree murder, Mich. Comp. Laws § 750.317, two counts of assault with intent to commit murder, Mich. Comp. Laws § 750.83, and one count of possession of a firearm during the commission of a felony, Mich. Comp. Laws § 750.227b. The Michigan trial court sentenced French to life imprisonment without parole for the first-degree murder conviction, fifteen years to thirty years imprisonment for each of the second-degree murder and assault with intent to murder convictions, and two years consecutive imprisonment for the firearm conviction. At trial, two attorneys, Cornelius Pitts and Monsey Wilson, represented French. Ty Jones was also present at defense counsel’s table for portions of the trial.
The confusion surrounding this case stems from representations made by Pitts and Wilson. At the beginning of the trial, Pitts introduced Jones to the prosecutor and trial judge as an attorney from California who specialized in jury selection. Pitts said Jones was present to assist with the trial. Based on Pitts’s representation, the trial judge allowed Jones to remain at the defense table.
During jury selection, Pitts again introduced Jones as “counsel from California” who was assisting with the trial. Jones was present at the defense table every day of trial but never spoke in the presence of the jury.
At the evidentiary hearing held in this matter, the district court learned Jones was not a licensed attorney. While he had attended one year of law school at New York University, Jones worked as a motion picture consultant and screenwriter in Los Angeles. Jones observed the trial as background for the development of a television series based on the Detroit legal system.
At the evidentiary hearing, Pitts testified that Jones told him that he was a lawyer. Although Pitts did not intend for Jones to participate in the trial, Pitts said he introduced Jones as “counsel from California” to give the impression of a large defense team.
French’s trial lasted more than two weeks before the case was submitted to the jury. After receiving instructions and choosing a foreperson, the jury recessed Thursday, April 27, 1995. The jury reconvened and began deliberating the morning of Friday, April 28, 1995. During that day, the jury twice requested copies of trial materials. On both occasions,
---
\(^1\) There was no dispute French committed the shootings. Instead, the issue at trial was French’s mental state at the time of the shootings. At trial, French presented expert testimony and other evidence to show he was legally insane on the day of the shootings because an erroneously prescribed overdose of a powerful hypertension drug had combined with a pre-existing mental disorder to make him paranoid and psychotic. In response, the government presented expert testimony that French was legally sane on the day of the shootings.
\(^2\) Michigan law allows a verdict of “guilty but mentally ill.” See Mich. Comp. Laws § 768.36. To return such a verdict, a jury must find the following beyond a reasonable doubt: (1) the defendant is guilty of an offense; (2) the defendant was mentally ill at the time of the offense; (3) the defendant was not legally insane at the time of the offense. See Mich. Comp. Laws § 768.36(1). A person found guilty but mentally ill is sentenced the same as if he were simply found guilty of the offense. Mich. Comp. Laws § 768.36(3). The only difference between the two verdicts is that the department of corrections is responsible for evaluating the defendant and providing any required psychiatric treatment. See id.
the prosecutor, Wilson, and the trial judge discussed the notes and sent the requested materials to the jury.
Late on Friday afternoon, the jury sent out a third note to which the trial judge did not immediately respond. Instead, the trial judge recessed the trial and excused the jury for the weekend.
On the morning of Monday, May 1, 1995, the trial judge read the note to Pitts and the prosecutor: “We can’t reach a unanimous decision. Our minds are set.” Pitts requested a mistrial, but the trial judge read the jury Michigan’s standard deadlocked jury instruction. The jury continued to deliberate until late afternoon, when they sent out a second note. The second note also said the jury was unable to reach a decision. The trial judge again recessed the trial and excused the jury for the day.
At 9:30 a.m. on May 2, 1995, the trial judge again instructed the jury and directed them to continue deliberations. After continuing deliberations, the jury sent out a third note at 11:00 a.m.: “We are not able to reach a verdict. We are not going to reach a verdict.” The trial judge responded by sending the jury to lunch and instructing the parties to appear at 2:00 p.m.
At 2:00 p.m., neither Pitts nor Wilson had returned to the courtroom. The trial judge asked Jones, who was present, to contact the two attorneys. Jones was unable to contact Pitts or Wilson. At 2:07 p.m., without Pitts or Wilson present, the trial judge brought the jury in and gave them a supplemental jury instruction. Unlike the first two instructions, the third instruction was not the standard deadlocked jury instruction.\(^3\)
---
\(^3\) The relevant part of the trial judge’s third instruction to the jury is as follows:
THE COURT: Now, ladies and gentlemen, I must remind you that you did take an oath to render a true and just verdict. But if you are to be expected to render a verdict, you must communicate, and you must talk with each other.
This case lasted how many days, Mr. Hutting? Approximately 16 days?
MR. HUTTING: Fourteen days trial. For jury selection –
THE COURT: All right. So it wouldn’t be uncommon for deliberations to go on for sometime, and I might remind you that you began to deliberate I think Friday, and I don’t know how you can come to the conclusion that you are not going to reach a verdict.
Based upon your oath that you would reach a true and just verdict, we expect you will communicate. As I stated before, exchange ideas. Give your views. Give your opinions and try to come to a verdict, if at all possible.
But if you don’t communicate, you know that you can’t reach a verdict. And when you took the oath, that was one of the promises that you made by raising your hand taking the oath, that you would deliberate upon a verdict, to try to reach a verdict. And we told you at the outset it would not be an easy task, but we know that you can rise to the occasion. So we’ll ask that you return to the jury room.
Thank you.
(J.A. at 84–85).
the Michigan Supreme Court denied the petitioner leave to appeal.
On October 16, 1998, the petitioner filed a petition in federal court for a writ of habeas corpus. On March 25, 1999, the district court granted the writ. As discussed above, the warden appealed, and we vacated the district court’s decision and remanded for an evidentiary hearing.
After the evidentiary hearing and with a fully developed record before us, we reviewed the district court’s decision to grant French’s petition for a writ of habeas corpus. As grounds for the writ, French argued that the Michigan Court denied his constitutional right to the assistance of counsel when it supplemented its instruction to the jury when no attorney for French was present. We affirmed the judgment of the district court and held that French was denied counsel during a critical stage of his trial. The respondent petitioned the United States Supreme Court for a writ of certiorari. The Supreme Court granted the petition and remanded the case to our Court for reconsideration in light of *Bell v. Cone*, 535 U.S. 685 (2002).
II.
Despite the Supreme Court’s order vacating and remanding the case for our redetermination, our standard of review remains unchanged. We review a district court’s grant of habeas relief de novo. *See, e.g., Harpster v. Ohio*, 128 F.3d 322, 326 (6th Cir. 1997).
In determining whether to issue a habeas writ, the standards set forth in the Antiterrorism and Effective Death Penalty Act of 1996, 28 U.S.C. § 2241 *et seq.* (“AEDPA”) govern the district court’s review of a state court decision. *Id.* The AEDPA only provides habeas relief for a state prisoner in certain circumstances:
(d) An application for a writ of habeas corpus on behalf of a person in custody pursuant to the judgment of a State court shall not be granted with respect to any claim that was adjudicated on the merits in State court proceedings unless the adjudication of the claim—
(1) resulted in a decision that was contrary to, or involved an unreasonable application of, clearly established Federal law, as determined by the Supreme Court of the United States; or
(2) resulted in a decision that was based on an unreasonable determination of the facts in light of the evidence presented in the State court proceeding.
28 U.S.C. § 2254(d) (2001).
The question of whether the trial judge deprived French of his right to counsel during the supplemental jury instruction is a mixed question of law and fact. *See Strickland v. Washington*, 466 U.S. 668, 698 (1984). In a habeas case, we apply the “unreasonable application” prong of § 2254(d)(1) to a mixed question of law and fact. *See* 28 U.S.C. § 2254(d)(1); *Harpster*, 128 F.3d at 326–27.
The recent Supreme Court decision of *Williams v. Taylor*, 529 U.S. 362 (2000), clarified the meaning of the operative clauses in § 2254(d)(1). *Williams* stated that federal courts are to find “clearly established Federal law” in the holdings of the Supreme Court, as opposed to its dicta, at the time of the relevant state court decision. *Williams*, 529 U.S. at 412; *see also Harris v. Stovall*, 212 F.3d 940, 944 (6th Cir. 2000).
The *Williams* Court then went on to clarify the situations in which a court could grant a writ of habeas corpus under § 2254(d)(1):
---
\(^4\) We decide this case under the AEDPA because French filed his petition for a writ of habeas corpus in October 1998, well after AEDPA’s effective date of April 24, 1996. *See Barker v. Yukins*, 199 F.3d 867, 871 (6th Cir. 1999).
Under the “contrary to” clause, a federal habeas court may grant the writ if the state court arrives at a conclusion opposite to that reached by this Court on a question of law or if the state court decides a case differently than this Court has on a set of materially indistinguishable facts. Under the “unreasonable application” clause, a federal habeas court may grant the writ if the state court identifies the correct governing legal principle from this Court’s decisions but unreasonably applies that principle to the facts of the prisoner’s case.
*Williams*, 529 U.S. at 412–13. Therefore, we look to see if the Michigan courts unreasonably applied a governing legal principle identified by the Supreme Court when they applied harmless error analysis to French’s claim of deprivation of counsel during the supplemental instruction.
A.
Both parties agree that “the complete denial of counsel during a critical stage of a judicial proceeding mandates a presumption of prejudice.” *Roe v. Flores-Ortega*, 528 U.S. 470, 483 (2000); *see also United States v. Cronic*, 466 U.S. 648, 659 n.25 (1984). In our order remanding this case to the district court for an evidentiary hearing, we agreed with the district court that a supplemental jury instruction is a “critical stage” of a criminal proceeding. *French*, 2000 WL 1033021, at *3 (citing *Rogers v. United States*, 422 U.S. 35 (1975), and *Shields v. United States*, 273 U.S. 583 (1927)); *see also Curtis v. Duval*, 124 F.3d 1, 4 (1st Cir. 1997). In our first decision in this matter, we noted that the Michigan courts conceded that the supplemental instruction was a critical stage of the trial. *See French*, 2000 WL 1033021, at *3 n.5. In light of our holding, we remanded the case to the district court to determine the exact nature of Jones’s participation at the trial.
Once the evidentiary hearing established Jones was not an attorney, French argued that the Supreme Court has “uniformly found constitutional error without any showing of prejudice when counsel was either totally absent, or prevented from assisting the accused during a critical stage of the proceeding.” *Cronic*, 466 U.S. at 659 n.25. Since the supplemental jury instruction was a critical stage of the trial, French contended that reversal should be automatic. The district court agreed with French. Citing *Cronic*, the district court found a structural defect like the deprivation of counsel during a critical stage of the trial required automatic reversal. *See French*, 114 F. Supp. 2d at 642.
The Warden argues that the district court erred when it found French’s lack of counsel during the supplemental instruction to be a structural error. Instead, the appellant says French’s lack of counsel during the supplemental instruction was a trial error, subject to harmless error analysis.
In support of his position, the appellant says previous Supreme Court decisions indicate that an error during trial only requires automatic reversal when a defendant has suffered a total deprivation of counsel. The appellant says the present case does not involve the complete deprivation of counsel because the trial court’s actions did not prevent
---
\(^5\) The appellant says the following cases describe structural errors because they involve the total deprivation of counsel: *Geders v. United States*, 425 U.S. 80 (1976) (a trial court’s order preventing a defendant from consulting with his attorney at all during a recess between his direct and cross-examination); *Herring v. New York*, 422 U.S. 853 (1975) (the trial court’s refusal to allow counsel to be heard in summation of the evidence); *Brooks v. Tennessee*, 406 U.S. 605 (1972) (a statute requiring a defendant to testify before any other defense evidence was presented deprived the defendant of the advice of counsel in making the decision); *White v. Maryland*, 373 U.S. 59 (1963) (the denial of counsel at a preliminary hearing where a plea was entered); *Gideon v. Wainwright*, 372 U.S. 335 (1963) (total deprivation of the right to counsel during trial); *Hamilton v. Alabama*, 368 U.S. 52 (1961) (the denial of counsel at arraignment, the stage at which insanity must be pleaded or the defense forfeited); *Ferguson v. Georgia*, 365 U.S. 570 (1961) (a statute permitting the admission only of the defendant’s unsworn statement and the denial of the right to have counsel examine the defendant regarding it); *Williams v. Kaiser*, 323 U.S. 471 (1945) (the refusal to appoint counsel before entry of the defendant’s plea).
counsel from advising French. Without such an act by the trial court, the appellant says French’s lack of counsel during the supplemental instruction is not a deprivation because it does not fall within any of the fact paradigms the Supreme Court describes as deprivations under the Sixth Amendment.
As further support for its position that the district court erred, the appellant argues the district court’s reliance on *Cronic* is misplaced. In *Cronic*, the Court reversed the court of appeals’ inference that a defendant had received ineffective assistance of counsel because of the attorney’s inexperience, limited access to witnesses, and short time to investigate and prepare for a complex trial. *Cronic*, 466 U.S. at 666. The Court remanded the case to consider whether specific alleged errors made by defense counsel were sufficient to support an ineffective assistance of counsel claim. *Id.* at 666–67. The appellant acknowledges *Cronic*’s statement in dicta that “[t]he presumption that counsel’s assistance is essential requires [a court] to conclude that a trial is unfair if the accused is denied counsel at a critical stage of his trial” is an accurate statement of law. *Id.* at 659. Because *Williams* instructs courts to only rely on the holding of cases when deciding clearly established federal law for habeas petition purposes, *Williams*, 529 U.S. at 412, the appellant argues the district court should not have relied on this statement of law because the language was not applicable to *Cronic*’s ultimate holding.
Instead, the appellant says the facts of this case resemble those in *Rushen v. Spain*, 464 U.S. 114 (1983). In *Rushen*, the Court applied harmless error analysis to a trial judge’s ex parte communications with a juror about the juror’s personal interaction with a police informant currently testifying before the juror. *Rushen*, 464 U.S. at 120–21. Neither defense counsel nor the prosecutor learned of the ex parte communications until after the trial.
The *Rushen* Court found that the post-trial hearing held on the matter provided sufficient evidence the communication between the judge and juror was innocuous and that no bias infected the jury’s deliberations. *Id.* The appellant says the Court’s use of harmless error analysis in *Rushen* supports his assertion that we should analyze the present case for harmless error.
We disagree. Despite the appellant’s attempt to characterize *Cronic*’s language as dicta, the Court has often held, both before and after *Cronic*, that absence of counsel during a critical stage of a trial is per se reversible error. Six years before *Cronic*, the Court held in *Holloway v. Arkansas*, 435 U.S. 475 (1978), that “when a defendant is deprived of the presence and assistance of his attorney, either throughout the prosecution or during a critical stage in, at least, the prosecution of a capital offense, reversal is automatic.” *Holloway*, 435 U.S. at 489 (citing *Gideon v. Wainwright*, 372 U.S. 335, 345 (1963), and *White v. Maryland*, 373 U.S. 59, 60 (1963)).
Four years after *Cronic*, the Court reiterated that harmless error analysis does not apply to Sixth Amendment claims involving the absence of counsel at a critical stage by stating that “the right to counsel is ‘so basic to a fair trial that [its] infraction can never be treated as harmless error.’” *Penson v. Ohio*, 488 U.S. 75, 88 (1988) (quoting *Chapman v. California*, 388 U.S. 18, 23 (1967)). As the Court most recently stated in a habeas case, “[t]he existence of [structural] defects—deprivation of the right to counsel, for example—requires automatic reversal of the conviction because they infect the entire trial process.” *Brecht v. Abrahamson*, 507 U.S. 619, 629–30 (1993).
Furthermore, the appellant wrongly characterizes *Rushen*. In *Rushen*, the Court did not say harmless error analysis applies to every occurrence of attorney absence at a critical stage of trial. Instead, the Court held that ex parte communications between a judge and juror should be analyzed for harmless error because “‘it is virtually impossible to shield jurors from every contact or influence that might theoretically affect their vote.’” *Rushen*, 464 U.S.
at 118–19 (quoting *Smith v. Phillips*, 455 U.S. 209, 217 (1982)).
The present case is vastly different from the incidental contact between a judge and juror at issue in *Rushen*. In this case, the trial judge delivered a supplemental instruction to a deadlocked jury. French’s attorneys did not have an opportunity to respond to the jury’s note nor were they present when the trial judge gave the supplemental instruction. The uncertainty of the prejudice French suffered because he was not represented by counsel during this critical stage of his trial makes the outcome of his trial unreliable. *See Roe*, 528 U.S. at 483; *Cronic*, 466 U.S. at 659 n.25.
*Cronic* correctly summarizes federal law when it states that absence of counsel during a critical stage of a trial amounts to constitutional error. *Cronic*, 466 U.S. at 659 n.25. In light of clear federal law, the Michigan courts unreasonably applied harmless error analysis to French’s deprivation of counsel during the supplemental instruction. The district court properly granted the writ of habeas corpus.
The decision in *Cone* does not alter our analysis. In *Cone*, the Court defined the type of ineffective-assistance claims that fit within *Cronic*’s second exception to the *Strickland* rule. *Cone*, 535 U.S. at 695–97. Under the *Strickland* rule, petitioners alleging a deprivation of their right to counsel must show (1) that counsel’s performance fell below an objective standard of reasonableness and (2) that a reasonable probability exists that, but for counsel’s substandard performance, the outcome would have been different. *Strickland*, 466 U.S. at 687. The *Cronic* Court held that the petitioner need not prove actual prejudice in the following three categories of circumstances: (1) when the petitioner was denied counsel at a critical stage of the proceedings; (2) when the petitioner’s counsel “failed to subject the prosecution’s case to meaningful adversarial testing”; and (3) when the circumstances of the trial prevent counsel from affording effective representation. *Cronic*, 466 U.S. at 656.
In *Cone*, the petitioner claimed his counsel had been ineffective when he failed to adduce mitigating evidence and waived final argument at the sentencing phase of the trial. *Cone*, 535 U.S. at 697. A panel of our Court below applied the second of *Cronic*’s three exceptions to the *Strickland* rule, holding that those failures were so egregious that they did not require a showing of actual prejudice. *See Cone v. Bell*, 243 F.3d 961, 979 (6th Cir. 2001). The Supreme Court reversed that decision because the petitioner did not allege that trial counsel entirely failed to subject the prosecution’s case to meaningful adversarial testing, but alleged only that trial counsel failed at “specific points.” *Cone*, 535 U.S. at 697. In reaching this conclusion, the Court announced that to apply the second *Cronic* exception, “the attorney’s failure must be complete.” *Id.*
French does not argue that his counsel failed to subject the prosecution’s case to meaningful adversarial testing. Instead, he argues that the first *Cronic* exception applies because he says that he was completely denied counsel during a critical stage of a judicial proceeding. As discussed, *Cone* did not deal with a denial of counsel claim. Nor does the logic of *Bell*’s holding that the attorney’s failure must be complete extend to claims based on the denial of counsel at a critical stage of the proceedings. Therefore, we conclude that *Cone* does not apply to claims of denial of counsel during a critical stage.
III. Conclusion
At the evidentiary hearing we ordered in this case, the district court determined French did not have counsel during
---
\(^6\) We are not alone in this conclusion. The Eleventh Circuit recently noted in dicta that *Bell* does not apply to claims of denial of counsel at trial. *See Hunter v. Moore*, 304 F.3d 1066, 1070 n.4 (11th Cir. 2002). *Accord United States ex rel. Madej v. Schomig*, 223 F. Supp. 2d 968, 971–72 n.2 (N.D. Ill. 2002) (“*Bell v. Cone* does not appear to alter the conditions for presuming prejudice under *Cronic* when a defendant is actually denied counsel at a critical stage of the proceedings or when the circumstances of the trial render effective assistance impossible.”).
the trial judge’s supplemental jury instruction. Because French was without counsel during a critical stage of his trial, the district court correctly granted French’s petition for a writ of habeas corpus. For the foregoing reasons, we AFFIRM the judgment of the Eastern District of Michigan.
Maximization of Approximately Submodular Functions
Thibaut Horel
Harvard University
email@example.com
Yaron Singer
Harvard University
firstname.lastname@example.org
Abstract
We study the problem of maximizing a function that is approximately submodular under a cardinality constraint. Approximate submodularity implicitly appears in a wide range of applications as in many cases errors in evaluation of a submodular function break submodularity. Say that $F$ is $\varepsilon$-approximately submodular if there exists a submodular function $f$ such that $(1 - \varepsilon)f(S) \leq F(S) \leq (1 + \varepsilon)f(S)$ for all subsets $S$. We are interested in characterizing the query-complexity of maximizing $F$ subject to a cardinality constraint $k$ as a function of the error level $\varepsilon > 0$. We provide both lower and upper bounds: for $\varepsilon > n^{-1/2}$ we show an exponential query-complexity lower bound. In contrast, when $\varepsilon < 1/k$ or under a stronger bounded curvature assumption, we give constant approximation algorithms.
1 Introduction
In recent years, there has been a surge of interest in machine learning methods that involve discrete optimization. In this realm, the evolving theory of submodular optimization has been a catalyst for progress in extraordinarily varied application areas. Examples include active learning and experimental design [10, 13, 15, 20, 21], sparse reconstruction [1, 6, 7], graph inference [24, 25, 8], video analysis [30], clustering [11], document summarization [22], object detection [28], information retrieval [29], network inference [24, 25], and information diffusion in networks [18].
The power of submodularity as a modeling tool lies in its ability to capture interesting application domains while maintaining provable guarantees for optimization. The guarantees, however, apply to the case in which one has access to the exact function to optimize. In many applications, one does not have access to the exact version of the function, but rather some approximate version of it. If the approximate version remains submodular, then the theory of submodular optimization clearly applies, and modest errors translate to a modest loss in approximation quality. But if the approximate version of the function ceases to be submodular, all bets are off.
Approximate submodularity. Recall that a function $f : 2^N \to \mathbb{R}$ is submodular if for all $S, T \subseteq N$, $f(S \cup T) + f(S \cap T) \leq f(S) + f(T)$. We say that a function $F : 2^N \to \mathbb{R}$ is $\varepsilon$-approximately submodular if there exists a submodular function $f : 2^N \to \mathbb{R}$ s.t. for any $S \subseteq N$:
\[(1 - \varepsilon)f(S) \leq F(S) \leq (1 + \varepsilon)f(S). \tag{1}\]
Unless otherwise stated, all submodular functions $f$ considered are normalized ($f(\emptyset) = 0$) and monotone ($f(S) \leq f(T)$ for $S \subseteq T$). Approximate submodularity appears in various domains.
- **Optimization with noisy oracles.** In these scenarios, we wish to solve optimization problems where one does not have access to a submodular function but a noisy version of it. An example recently studied in [5] involves maximizing information gain in graphical models; this captures many Bayesian experimental design settings.
- **PMAC learning.** In the active area of learning submodular functions initiated by Balcan and Harvey [1], the objective is to *approximately* learn submodular functions. Roughly speaking, the PMAC-learning framework guarantees that the learned function is a constant-factor approximation of the true submodular function with high probability. Therefore, after learning a submodular function, one obtains an approximately submodular function.
- **Sketching.** Since submodular functions have, in general, exponential-size representation, [2] studied the problem of *sketching* submodular functions: finding a function with polynomial-size representation approximating a given submodular function. The resulting sketch is an approximately submodular function.
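To make the definition concrete, here is a minimal Python sketch (our illustration, not from the paper; the names `coverage` and `approx_submodular` are ours) of how an $\varepsilon$-approximately submodular oracle can arise: an exact submodular function, here a small coverage function, is wrapped with an arbitrary but consistent multiplicative perturbation in $[1 - \varepsilon, 1 + \varepsilon]$.

```python
import random

def coverage(subsets):
    """Coverage function: f(S) = size of the union of the chosen subsets."""
    def f(S):
        covered = set()
        for i in S:
            covered |= subsets[i]
        return len(covered)
    return f

def approx_submodular(f, eps, seed=0):
    """Wrap f so that (1-eps) f(S) <= F(S) <= (1+eps) f(S) for all S.

    The perturbation is consistent: each set S receives a fixed factor,
    so repeated queries return the same value.
    """
    rng = random.Random(seed)
    factors = {}
    def F(S):
        key = frozenset(S)
        if key not in factors:
            factors[key] = rng.uniform(1 - eps, 1 + eps)
        return factors[key] * f(S)
    return F

f = coverage([{0, 1, 2}, {2, 3}, {4, 5}, {0, 5}])
F = approx_submodular(f, eps=0.1)
print(f({0, 1}), F({0, 1}))  # F({0,1}) is within 10% of f({0,1}) = 4
```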
**Optimization of approximate submodularity.** We focus on optimization problems of the form
\[
\max_{S : |S| \leq k} F(S) \tag{2}
\]
where \(F\) is an \(\varepsilon\)-approximately submodular function and \(k \in \mathbb{N}\) is the cardinality constraint. We say that a set \(S \subseteq N\) is an \(\alpha\)-approximation to the optimal solution of (2) if \(|S| \leq k\) and \(F(S) \geq \alpha \max_{|T| \leq k} F(T)\). As is common in submodular optimization, we assume the *value query model*: optimization algorithms have access to the objective function \(F\) in a black-box manner, i.e. they make queries to an oracle which returns, for a queried set \(S\), the value \(F(S)\). The query-complexity of the algorithm is simply the number of queries made to the oracle. An algorithm is called an \(\alpha\)-approximation algorithm if for any approximately submodular input \(F\) the solution returned by the algorithm is an \(\alpha\)-approximately optimal solution. Note that if there exists an \(\alpha\)-approximation algorithm for the problem of maximizing an \(\varepsilon\)-approximately submodular function \(F\), then this algorithm is a \(\frac{1-\varepsilon}{1+\varepsilon}\alpha\)-approximation algorithm for the original submodular function \(f\).\(^1\) Conversely, if no such algorithm exists, this implies an inapproximability result for the original function.
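The value query model is easy to instrument. The sketch below (our illustration; `counting_oracle` is a hypothetical helper, not from the paper) wraps an objective so that the query-complexity of any algorithm run against it can be read off as the number of oracle calls.

```python
def counting_oracle(F):
    """Wrap F as a black-box oracle that counts how often it is queried."""
    state = {"queries": 0}
    def oracle(S):
        state["queries"] += 1
        return F(frozenset(S))
    return oracle, state

# Usage: run any optimization algorithm against `oracle`, then read
# state["queries"] to obtain its query-complexity on this instance.
F = lambda S: len(S)           # stand-in objective
oracle, state = counting_oracle(F)
oracle({1, 2}); oracle({3})
print(state["queries"])        # 2
```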
Clearly, if a function is 0-approximately submodular then it retains the desirable provable guarantees\(^2\), and if it is arbitrarily far from being submodular it can be shown to be trivially inapproximable (*e.g.* maximize a function which takes a value of 1 for a single arbitrary set \(S \subseteq N\) and 0 elsewhere). The question is therefore:
*How close should a function be to submodular to retain provable approximation guarantees?*
In recent work, it was shown that for any constant \(\varepsilon > 0\) there exists a class of \(\varepsilon\)-approximately submodular functions for which no algorithm using fewer than exponentially-many queries has a constant approximation ratio for the canonical problem of maximizing a submodular function under a cardinality constraint [14]. Such an impossibility result suggests two natural relaxations: the first is to make additional assumptions about the structure of errors, such as a stochastic error model. This is the direction taken in [14], where the main result shows that when errors are drawn i.i.d. from a wide class of distributions, optimal guarantees are obtainable. The second alternative is to assume the error is subconstant, which is the focus of this paper.
### 1.1 Overview of the results
Our main result is a spoiler: even for \(\varepsilon = 1/n^{1/2 - \beta}\) for any constant \(\beta > 0\) and \(n = |N|\), no algorithm can obtain a constant-factor approximation guarantee. More specifically, we show that:
- For the general case of **monotone submodular functions**, for any \(\beta > 0\), given access to a \(\frac{1}{n^{1/2 - \beta}}\)-approximately submodular function, no algorithm can obtain an approximation ratio better than \(O(1/n^{\beta/2})\) using polynomially many queries (Theorem 3).
- For the case of **coverage functions** we show that for any fixed \(\beta > 0\), given access to a \(\frac{1}{n^{1/3 - \beta}}\)-approximately submodular function, no algorithm can obtain an approximation ratio strictly better than \(O(1/n^{\beta/2})\) using polynomially many queries (Theorem 4).
\(^1\)Observe that for an approximately submodular function \(F\), there exist many submodular functions \(f\) of which it is an approximation. All such submodular functions \(f\) are called *representatives* of \(F\). The conversion between an approximation guarantee for \(F\) and an approximation guarantee for a representative \(f\) of \(F\) holds for any choice of the representative.
\(^2\)Specifically, [23] shows that it is possible to obtain a \((1 - 1/e)\) approximation ratio under a cardinality constraint.
The above results imply that even in cases where the objective function is arbitrarily close to being submodular as the number $n$ of elements in $N$ grows, reasonable optimization guarantees are unachievable. The second result shows that this is the case even when we aim to optimize coverage functions. Coverage functions are an important class of submodular functions which are used in numerous applications [12, 22, 19].
**Approximation guarantees.** The inapproximability results follow from two properties of the model: the structure of the function (submodularity), and the size of $\varepsilon$ in the definition of approximate submodularity. A natural question is whether one can relax either condition to obtain positive approximation guarantees. We show that this is indeed the case:
- In the general case of **monotone submodular functions** we show that the greedy algorithm achieves a $(1 - 1/e - O(\delta))$ approximation ratio when $\varepsilon = \frac{\delta}{k}$ (Theorem 5). Furthermore, this bound is tight: given a $1/k^{1-\beta}$-approximately submodular function, the greedy algorithm no longer provides a constant factor approximation guarantee (Proposition 6).
- Since our query-complexity lower bound holds for coverage functions, which already contain a great deal of structure, we relax the structural assumption by considering functions with **bounded curvature** $c$; this is a common assumption in applications of submodularity to machine learning and has been used in prior work to obtain theoretical guarantees [16, 17]. Under this assumption, we give an algorithm which achieves an approximation ratio of $(1 - c)(\frac{1 - \varepsilon}{1 + \varepsilon})^2$ (Proposition 8).
We state our positive results for the case of a cardinality constraint of $k$. Similar results hold for matroids of rank $k$; the proofs can be found in the Appendix. Note that cardinality constraints are a special case of matroid constraints; therefore, our lower bounds also apply to matroid constraints.
### 1.2 Discussion and additional related work
Before transitioning to the technical results, we briefly survey error in applications of submodularity and the implications of our results for these applications. First, notice that there is a coupling between approximate submodularity and erroneous evaluations of a submodular function: if one can evaluate a submodular function within (multiplicative) accuracy of $1 \pm \varepsilon$, then the resulting function is $\varepsilon$-approximately submodular.
**Additive vs multiplicative approximation.** The definition of approximate submodularity in (1) uses relative (multiplicative) approximation. We could instead consider absolute (additive) approximation, i.e. require that $f(S) - \varepsilon \leq F(S) \leq f(S) + \varepsilon$ for all sets $S$. This definition has been used in the related problem of optimizing approximately convex functions [4, 26], where functions are assumed to have normalized range. For un-normalized functions or functions whose range is unknown, a relative approximation is more informative. When the range is known, specifically if an upper bound $B$ on $f(S)$ is known, an $\varepsilon/B$-approximately submodular function is also an $\varepsilon$-additively approximate submodular function. This implies that our lower bounds and approximation results could equivalently be expressed for additive approximations of normalized functions.
**Error vs noise.** If we interpret Equation (1) in terms of error, we see that no assumption is made on the source of the error yielding the approximately submodular function. In particular, there is no stochastic assumption: the error is deterministic and worst-case. Previous work has considered submodular or combinatorial optimization under random noise. Two models naturally arise:
- **consistent noise**: the approximate function $F$ is such that $F(S) = \xi_S f(S)$ where $\xi_S$ is drawn independently for each set $S$ from a distribution $\mathcal{D}$. The key aspect of consistent noise is that the random draws occur only once: querying the same set multiple times always returns the same value. This definition is the one adopted in [14]; a similar notion is called persistent noise in [5].
- **inconsistent noise**: in this model $F(S)$ is a random variable such that $f(S) = \mathbb{E}[F(S)]$. The noisy oracle can be queried multiple times and each query corresponds to a new independent random draw from the distribution of $F(S)$. This model was considered in [27] in the context of dataset summarization and is also implicitly present in [18] where the objective function is defined as an expectation and has to be estimated via sampling.
Formal guarantees for consistent noise have been obtained in [14]. A standard way to approach optimization with inconsistent noise is to estimate the value of each set used by the algorithm to an accuracy $\varepsilon$ via independent randomized sampling, where $\varepsilon$ is chosen small enough so as to obtain approximation guarantees. Specifically, assume that the algorithm only makes polynomially many value queries and that the function $f$ is such that $f(S) \in [b, B]$ for any set $S$. Then a classical application of the Chernoff bound combined with a union bound implies that if the value of each set is estimated by averaging the value of $m$ samples with $m = \Omega \left( \frac{Bn^2 \log n}{b\varepsilon^2} \right)$, then with high probability the estimated value $F(S)$ of each set used by the algorithm is such that $(1 - \varepsilon)f(S) \leq F(S) \leq (1 + \varepsilon)f(S)$. In other words, *randomized sampling is used to construct a function which is $\varepsilon$-approximately submodular with high probability*.
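A minimal sketch of this sampling construction (our code; `noisy_f` stands in for an inconsistent-noise oracle with $\mathbb{E}[\texttt{noisy\_f}(S)] = f(S)$):

```python
import random

def averaged_oracle(noisy_f, m):
    """Average m independent evaluations per set, cached so that the
    resulting function F is consistent across repeated queries."""
    cache = {}
    def F(S):
        key = frozenset(S)
        if key not in cache:
            cache[key] = sum(noisy_f(S) for _ in range(m)) / m
        return cache[key]
    return F

# Toy example: f(S) = |S|, observed with multiplicative noise in [0.5, 1.5].
rng = random.Random(0)
noisy_f = lambda S: len(S) * rng.uniform(0.5, 1.5)
F = averaged_oracle(noisy_f, m=10_000)
S = {1, 2, 3, 4}
print(abs(F(S) / len(S) - 1))  # small relative error, shrinking with m
```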
**Implications of results in this paper.** Given the above discussion, our results can be interpreted in the context of noise as providing guarantees on what is a tolerable noise level. In particular, Theorem 5 implies that if a submodular function is estimated using $m$ samples, with $m = \Omega \left( \frac{Bn^2 \log n}{b} \right)$, then the Greedy algorithm is a constant approximation algorithm for the problem of maximizing a monotone submodular function under a cardinality constraint. Theorem 3 implies that if $m = O \left( \frac{Bn \log n}{b} \right)$ then the resulting estimation error is within the range where no algorithm can obtain a constant approximation ratio.
## 2 Query-complexity lower bounds
In this section we give query-complexity lower bounds for the problem of maximizing an $\varepsilon$-approximately submodular function subject to a cardinality constraint. In Section 2.1, we show an exponential query-complexity lower bound for the case of general submodular functions when $\varepsilon \geq n^{-1/2}$ (Theorem 3). The same lower bound is then shown to hold even when we restrict ourselves to the case of coverage functions for $\varepsilon \geq n^{-1/3}$ (Theorem 4).
**A general overview of query-complexity lower bounds.** At a high level, the lower bounds are constructed as follows. We define a class of monotone submodular functions $\mathcal{F}$, and draw a function $f$ uniformly at random from $\mathcal{F}$. In addition we define a submodular function $g : 2^N \to \mathbb{R}$ s.t. $\max_{|S| \leq k} g(S) \leq \rho(n) \cdot \max_{|S| \leq k} f(S)$, where $\rho(n) = o(1)$ for a particular choice of $k < n$. We then define the approximately submodular function $F$:
$$F(S) = \begin{cases}
g(S), & \text{if } (1 - \varepsilon)f(S) \leq g(S) \leq (1 + \varepsilon)f(S) \\
f(S) & \text{otherwise}
\end{cases}$$
Note that by its definition, this function is an $\varepsilon$-approximately submodular function. To show the lower bound, we reduce the problem of proving inapproximability of optimizing an approximately submodular function to the problem of distinguishing between $f$ and $g$ using $F$. We show that for every algorithm there exists a function $f \in \mathcal{F}$ s.t., if $f$ is unknown to the algorithm, the algorithm cannot distinguish between the case in which the underlying function is $f$ and the case in which the underlying function is $g$ using polynomially-many value queries to $F$, even when $g$ is known to the algorithm. Since $\max_{|S| \leq k} g(S) \leq \rho(n) \max_{|S| \leq k} f(S)$, this implies that no algorithm can obtain an approximation better than $\rho(n)$ using polynomially-many queries; otherwise such an algorithm could be used to distinguish between $f$ and $g$.
### 2.1 Monotone submodular functions
**Constructing a class of hard functions.** A natural candidate for a class of functions $\mathcal{F}$ and a function $g$ satisfying the properties described in the overview is:
$$f^H(S) = |S \cap H| \quad \text{and} \quad g(S) = \frac{|S|h}{n}$$
for $H \subseteq N$ of size $h$. The reason why $g$ is hard to distinguish from $f^H$ is that when $H$ is drawn uniformly at random among sets of size $h$, $f^H$ is close to $g$ with high probability. This follows from an application of the Chernoff bound for negatively associated random variables. Formally, this is stated in Lemma 1 whose proof is given in the Appendix.
Lemma 1. Let $H \subseteq N$ be a set drawn uniformly among sets of size $h$, then for any $S \subseteq N$, writing $\mu = \frac{|S|h}{n}$, for any $\varepsilon$ such that $\varepsilon^2\mu > 1$:
$$\mathbb{P}_H \left[ (1 - \varepsilon)\mu \leq |S \cap H| \leq (1 + \varepsilon)\mu \right] \geq 1 - 2^{-\Omega(\varepsilon^2\mu)}$$
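As an empirical sanity check (our simulation, not part of the proof), the probability in Lemma 1 can be estimated directly by sampling $H$:

```python
import random

def concentration_rate(n, h, s, eps, trials=2000, seed=0):
    """Fraction of draws of H for which |S ∩ H| lies within (1 ± eps)
    of its mean mu = s*h/n; by symmetry any fixed S of size s will do."""
    rng = random.Random(seed)
    universe = list(range(n))
    S = set(range(s))
    mu = s * h / n
    hits = 0
    for _ in range(trials):
        H = set(rng.sample(universe, h))
        if (1 - eps) * mu <= len(S & H) <= (1 + eps) * mu:
            hits += 1
    return hits / trials

print(concentration_rate(n=1000, h=500, s=200, eps=0.2))  # close to 1
```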
Unfortunately this construction fails if the algorithm is allowed to evaluate the approximately submodular function at small sets: for those, the concentration of Lemma 1 is not high enough. Our construction instead relies on designing $\mathcal{F}$ and $g$ such that when $S$ is “large”, we can make use of the concentration result of Lemma 1, and when $S$ is “small”, functions in $\mathcal{F}$ and $g$ are deterministically close to each other. Specifically, we introduce for $H \subseteq N$ of size $h$:
$$f^H(S) = |S \cap H| + \min \left( |S \cap (N \setminus H)|, \alpha \left( 1 - \frac{h}{n} \right) \right), \qquad g(S) = \min \left( |S|, \frac{|S|h}{n} + \alpha \left( 1 - \frac{h}{n} \right) \right) \tag{3}$$
The value of the parameters $\alpha$ and $h$ will be set later in the analysis. Observe that when $S$ is small ($|S \cap (N \setminus H)| \leq \alpha(1 - h/n)$ and $|S| \leq \alpha$) then $f^H(S) = g(S) = |S|$. When $S$ is large, Lemma 1 implies that $|S \cap H|$ is close to $|S|h/n$ and $|S \cap (N \setminus H)|$ is close to $|S|(1 - h/n)$ with high probability.
First note that $f^H$ and $g$ are monotone submodular functions. $f^H$ is the sum of a monotone additive function and a monotone budget-additive function. The function $g$ can be written $g(S) = G(|S|)$ where $G(x) = \min(x, xh/n + \alpha(1 - h/n))$. $G$ is a non-decreasing concave function (minimum between two non-decreasing linear functions) hence $g$ is monotone submodular.
Next, we observe that there is a gap between the maxima of the functions $f^H$ and that of $g$. For $\alpha \leq |S| \leq k$, $g(S) = \frac{|S|h}{n} + \alpha \left( 1 - \frac{h}{n} \right)$. The maximum is clearly attained when $|S| = k$ and is upper-bounded by $\frac{kh}{n} + \alpha$. For $f^H$, the maximum is attained when $S$ is a subset of $H$ of size $k$ and is equal to $k$. So for $\alpha \leq k \leq h$, we obtain:
$$\max_{|S| \leq k} g(S) \leq \left( \frac{\alpha}{k} + \frac{h}{n} \right) \max_{|S| \leq k} f^H(S) \quad \text{for all } H \subseteq N \tag{4}$$
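For concreteness, the pair of functions in (3) is straightforward to transcribe (our code; the parameter values in the example are arbitrary):

```python
def make_instance(N, H, alpha):
    """Build f_H and g from equation (3) for ground set N and H ⊆ N."""
    n, h = len(N), len(H)
    slack = alpha * (1 - h / n)
    def f_H(S):
        return len(S & H) + min(len(S - H), slack)
    def g(S):
        return min(len(S), len(S) * h / n + slack)
    return f_H, g

N = set(range(100))
H = set(range(50))   # in the proof, H is drawn uniformly at random
f_H, g = make_instance(N, H, alpha=10)
S = set(range(5))    # a 'small' set: here f_H(S) = g(S) = |S|
print(f_H(S), g(S))
```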
**Indistinguishability.** The main challenge is now to prove that $f^H$ is close to $g$ with high probability. Formally, we have the following lemma.
Lemma 2. For $h \leq \frac{n}{2}$, let $H$ be drawn uniformly at random among sets of size $h$, then for any $S$:
$$\mathbb{P}_H \left[ (1 - \varepsilon)f^H(S) \leq g(S) \leq (1 + \varepsilon)f^H(S) \right] \geq 1 - 2^{-\Omega(\varepsilon^2\alpha h/n)} \tag{5}$$
Proof. For concision we define $\bar{H} := N \setminus H$, the complement of $H$ in $N$. We consider four cases depending on the cardinality of $S$ and $S \cap \bar{H}$.
**Case 1:** $|S| \leq \alpha$ and $|S \cap \bar{H}| \leq \alpha \left( 1 - \frac{h}{n} \right)$. In this case $f^H(S) = |S \cap H| + |S \cap \bar{H}| = |S|$ and $g(S) = |S|$. The two functions are equal and the inequality is immediately satisfied.
**Case 2:** $|S| \leq \alpha$ and $|S \cap \bar{H}| \geq \alpha \left( 1 - \frac{h}{n} \right)$. In this case $g(S) = |S| = |S \cap H| + |S \cap \bar{H}|$ and $f^H(S) = |S \cap H| + \alpha(1 - \frac{h}{n})$. By assumption on $|S \cap \bar{H}|$, we have:
$$(1 - \varepsilon)\alpha \left( 1 - \frac{h}{n} \right) \leq |S \cap \bar{H}|$$
For the other side, by assumption on $|S \cap \bar{H}|$, we have that $|S| \geq \alpha(1 - \frac{h}{n}) \geq \frac{\alpha}{2}$ (since $h \leq \frac{n}{2}$). We can then apply Lemma 1 and obtain:
$$\mathbb{P}_H \left[ |S \cap \bar{H}| \leq (1 + \varepsilon)\alpha \left( 1 - \frac{h}{n} \right) \right] \geq 1 - 2^{-\Omega(\varepsilon^2\alpha h/n)}$$
**Case 3:** $|S| \geq \alpha$ and $|S \cap \bar{H}| \geq \alpha \left( 1 - \frac{h}{n} \right)$. In this case $f^H(S) = |S \cap H| + \alpha(1 - \frac{h}{n})$ and $g(S) = \frac{|S|h}{n} + \alpha(1 - \frac{h}{n})$. We need to show that:
$$\mathbb{P}_H \left[ (1 - \varepsilon)\frac{|S|h}{n} \leq |S \cap H| \leq (1 + \varepsilon)\frac{|S|h}{n} \right] \geq 1 - 2^{-\Omega(\varepsilon^2\alpha h/n)}$$
This is a direct consequence of Lemma 1.
**Case 4:** \(|S| \geq \alpha\) and \(|S \cap \bar{H}| \leq \alpha (1 - \frac{h}{n})\). In this case \(f^H(S) = |S \cap H| + |S \cap \bar{H}|\) and \(g(S) = \frac{|S|h}{n} + \alpha(1 - \frac{h}{n})\). As in the previous case, we have:
\[
\mathbb{P}_H \left[ (1 - \varepsilon) \frac{|S|h}{n} \leq |S \cap H| \leq (1 + \varepsilon) \frac{|S|h}{n} \right] \geq 1 - 2^{-\Omega(\varepsilon^2 \alpha h/n)}
\]
By the assumption on \(|S \cap \bar{H}|\), we also have:
\[
|S \cap \bar{H}| \leq \alpha \left(1 - \frac{h}{n}\right) \leq (1 + \varepsilon)\alpha \left(1 - \frac{h}{n}\right)
\]
So we need to show that:
\[
\mathbb{P}_H \left[(1 - \varepsilon)\alpha \left(1 - \frac{h}{n}\right) \leq |S \cap \bar{H}| \right] \geq 1 - 2^{-\Omega(\varepsilon^2 \alpha h/n)}
\]
and then we will be able to conclude by union bound. This is again a consequence of Lemma 1.
**Theorem 3.** For any \(0 < \beta < \frac{1}{2}\), \(\varepsilon \geq \frac{1}{n^{1/2-\beta}}\), and any (possibly randomized) algorithm with query-complexity smaller than \(2^{\Omega(n^{\beta/2})}\), there exists an \(\varepsilon\)-approximately submodular function \(F\) such that for the problem of maximizing \(F\) under a cardinality constraint, the algorithm achieves an approximation ratio upper-bounded by \(\frac{2}{n^{\beta/2}}\) with probability at least \(1 - 2^{-\Omega(n^{\beta/2})}\).
**Proof.** We set \(k = h = n^{1-\beta/2}\) and \(\alpha = n^{1-\beta}\). Let \(H\) be drawn uniformly at random among sets of size \(h\) and let \(f^H\) and \(g\) be as in (3). We first define the \(\varepsilon\)-approximately submodular function \(F^H\):
\[
F^H(S) = \begin{cases}
g(S) & \text{if } (1 - \varepsilon)f^H(S) \leq g(S) \leq (1 + \varepsilon)f^H(S) \\
f^H(S) & \text{otherwise}
\end{cases}
\]
It is clear from the definition that this is an \(\varepsilon\)-approximately submodular function. Consider a deterministic algorithm \(A\) and let us denote by \(S_1, \ldots, S_m\) the queries made by the algorithm when given as input the function \(g\) (\(g\) is 0-approximately submodular, hence it is a valid input to \(A\)). Without loss of generality, we can include the set returned by the algorithm in the queries, so \(S_m\) denotes the set returned by the algorithm. By (5), for any \(i \in [m]\):
\[
\mathbb{P}_H[(1 - \varepsilon)f^H(S_i) \leq g(S_i) \leq (1 + \varepsilon)f^H(S_i)] \geq 1 - 2^{-\Omega(n^{\frac{\beta}{2}})}
\]
When these events occur, we have \(F^H(S_i) = g(S_i)\). By a union bound over \(i\), when \(m < 2^{\Omega(n^{\beta/2})}\),
\[
\mathbb{P}_H[\forall i,\, F^H(S_i) = g(S_i)] \geq 1 - m\,2^{-\Omega(n^{\beta/2})} \geq 1 - 2^{-\Omega(n^{\beta/2})} > 0
\]
This implies the existence of \(H\) such that \(A\) follows the same query path when given \(g\) and \(F^H\) as inputs. For this \(H\):
\[
F^H(S_m) = g(S_m) \leq \max_{|S| \leq k} g(S) \leq \left(\frac{\alpha}{k} + \frac{h}{n}\right) \max_{|S| \leq k} f^H(S) = \left(\frac{\alpha}{k} + \frac{h}{n}\right) \max_{|S| \leq k} F^H(S)
\]
where the second inequality comes from (4). For our choice of parameters, \(\frac{\alpha}{k} + \frac{h}{n} = 2/n^{\frac{\beta}{2}}\), hence:
\[
F^H(S_m) \leq \frac{2}{n^{\frac{\beta}{2}}} \max_{|S| \leq k} F^H(S)
\]
Let us now consider the case where the algorithm \(A\) is randomized, and let us denote by \(A_{H,R}\) the solution returned by the algorithm when given function \(F^H\) as input and random bits \(R\). We have:
\[
\mathbb{P}_{H,R} \left[F^H(A_{H,R}) \leq \frac{2}{n^{\beta/2}} \max_{|S| \leq k} F^H(S)\right] = \sum_r \mathbb{P}[R = r] \mathbb{P}_H \left[F^H(A_{H,R}) \leq \frac{2}{n^{\beta/2}} \max_{|S| \leq k} F^H(S)\right]
\]
\[
\geq (1 - 2^{-\Omega(n^{\frac{\beta}{2}})}) \sum_r \mathbb{P}[R = r] = 1 - 2^{-\Omega(n^{\beta/2})}
\]
where the inequality comes from the analysis of the deterministic case (when the random bits are fixed, the algorithm is deterministic). This implies the existence of \(H\) such that:
\[
\mathbb{P}_R \left[F^H(A_{H,R}) \leq \frac{2}{n^{\beta/2}} \max_{|S| \leq k} F^H(S)\right] \geq 1 - 2^{-\Omega(n^{\beta/2})}
\]
which concludes the proof of the theorem. \(\square\)
### 2.2 Coverage functions
In this section, we show that an exponential query-complexity lower bound still holds even in the restricted case where the objective function approximates a coverage function. Recall that by definition of a coverage function, the elements of the ground set $N$ are subsets of a set $U$ called the universe. For a set $S = \{S_1, \ldots, S_m\}$ of subsets of $U$, the value $f(S)$ is given by $f(S) = |\bigcup_{i=1}^m S_i|$.
**Theorem 4.** For any $0 < \beta < \frac{1}{3}$, $\varepsilon \geq \frac{1}{n^{1/3-\beta}}$, and any (possibly randomized) algorithm with query-complexity smaller than $2^{\Omega(n^{3\beta/2})}$, there exists a function $F$ which $\varepsilon$-approximates a coverage function, such that for the problem of maximizing $F$ under a cardinality constraint, the algorithm achieves an approximation ratio upper-bounded by $\frac{2}{n^{\beta/2}}$ with probability at least $1 - 2^{-\Omega(n^{3\beta/2})}$.
The proof of Theorem 4 has the same structure as the proof of Theorem 3. The main difference is a different choice of class of functions $F$ and function $g$. The details can be found in the appendix.
## 3 Approximation algorithms
The results from Section 2 can be seen as a strong impossibility result, since an exponential query-complexity lower bound holds even in the specific case of coverage functions, which exhibit a lot of structure. Faced with such an impossibility, we analyze two ways to relax the assumptions in order to obtain positive results. One relaxation considers $\varepsilon$-approximate submodularity when $\varepsilon \leq \frac{1}{k}$; in this case we show that the Greedy algorithm achieves a constant approximation ratio (and that $\varepsilon = \frac{1}{k}$ is tight for the Greedy algorithm). The other relaxation considers functions with stronger structural properties, namely, functions with bounded curvature. In this case, we show that a constant approximation ratio can be obtained for any constant $\varepsilon$.
### 3.1 Greedy algorithm
For the general class of monotone submodular functions, the result of [23] shows that a simple greedy algorithm achieves an approximation ratio of $1 - \frac{1}{e}$. Running the same algorithm for an $\varepsilon$-approximately submodular function results in a constant approximation ratio when $\varepsilon \leq \frac{1}{k}$. The detailed description of the algorithm can be found in the appendix.
**Theorem 5.** Let $F$ be an $\varepsilon$-approximately submodular function, then the set $S$ returned by the greedy algorithm satisfies:
$$F(S) \geq \frac{1}{1 + \frac{4k\varepsilon}{(1-\varepsilon)^2}} \left(1 - \left(\frac{1-\varepsilon}{1+\varepsilon}\right)^{2k} \left(1 - \frac{1}{k}\right)^k\right) \max_{S:|S|\leq k} F(S)$$
In particular, for $k \geq 2$, any constant $0 \leq \delta < 1$ and $\varepsilon = \frac{\delta}{k}$, this approximation ratio is constant and lower-bounded by $(1 - \frac{1}{e} - 16\delta)$.
**Proof.** Let us denote by $O$ an optimal solution to $\max_{S:|S|\leq k} F(S)$ and by $f$ a submodular representative of $F$. Let us write $S = \{e_1, \ldots, e_k\}$ the set returned by the greedy algorithm and define $S_i = \{e_1, \ldots, e_i\}$, then:
$$f(O) \leq f(S_i) + \sum_{e \in O} \left[f(S_i \cup \{e\}) - f(S_i)\right] \leq f(S_i) + \sum_{e \in O} \left[\frac{1}{1-\varepsilon} F(S_i \cup \{e\}) - f(S_i)\right]$$
$$\leq f(S_i) + \sum_{e \in O} \left[\frac{1}{1-\varepsilon} F(S_{i+1}) - f(S_i)\right] \leq f(S_i) + \sum_{e \in O} \left[\frac{1+\varepsilon}{1-\varepsilon} f(S_{i+1}) - f(S_i)\right]$$
$$\leq f(S_i) + k \left[\frac{1+\varepsilon}{1-\varepsilon} f(S_{i+1}) - f(S_i)\right]$$
where the first inequality uses submodularity, the second uses the definition of approximate submodularity, the third uses the greedy selection rule of the algorithm, the fourth uses approximate submodularity again, and the last one uses $|O| \leq k$.
Reordering the terms, and expressing the inequality in terms of $F$ (using the definition of approximate submodularity) gives:
$$F(S_{i+1}) \geq \left(1 - \frac{1}{k}\right) \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right)^2 F(S_i) + \frac{1}{k} \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right)^2 F(O)$$
This is an inductive inequality of the form $u_{i+1} \geq au_i + b$ with $u_0 = 0$, whose solution satisfies $u_i \geq \frac{b}{1-a}(1-a^i)$. For our specific $a$ and $b$, we obtain:
$$F(S) \geq \frac{1}{1 + \frac{4k\varepsilon}{(1-\varepsilon)^2}} \left(1 - \left(1 - \frac{1}{k}\right)^k \left(\frac{1-\varepsilon}{1+\varepsilon}\right)^{2k}\right) F(O) \quad \square$$
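To see the guarantee numerically, the following small helper (ours, not from the paper) evaluates the bound of Theorem 5; with $\varepsilon = \delta/k$ the ratio stays roughly constant as $k$ grows, consistent with the $1 - 1/e - 16\delta$ lower bound.

```python
def greedy_guarantee(k, eps):
    """Approximation ratio guaranteed by Theorem 5 for the greedy
    algorithm on an eps-approximately submodular function."""
    r = ((1 - eps) / (1 + eps)) ** 2
    lead = 1 / (1 + 4 * k * eps / (1 - eps) ** 2)
    return lead * (1 - r ** k * (1 - 1 / k) ** k)

for k in (10, 100, 1000):
    print(k, greedy_guarantee(k, eps=0.1 / k))  # roughly constant in k
```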
The following proposition shows that $\varepsilon = \frac{1}{k}$ is tight for the greedy algorithm, and that this is the case even for additive functions. The proof can be found in the Appendix.
**Proposition 6.** For any $\beta > 0$, there exists an $\varepsilon$-approximately additive function with $\varepsilon = \Omega\left(\frac{1}{k^{1-\beta}}\right)$ for which the Greedy algorithm has non-constant approximation ratio.
**Matroid constraint.** Theorem 5 can be generalized to the case of matroid constraints. We are now looking at a problem of the form: $\max_{S \in I} F(S)$, where $I$ is the set of independent sets of a matroid.
**Theorem 7.** Let $I$ be the set of independent sets of a matroid of rank $k$, and let $F$ be an $\varepsilon$-approximately submodular function, then if $S$ is the set returned by the greedy algorithm:
$$F(S) \geq \frac{1}{2} \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \frac{1}{1 + \frac{k\varepsilon}{1-\varepsilon}} \max_{S \in I} F(S)$$
In particular, for $k \geq 2$, any constant $0 \leq \delta < 1$ and $\varepsilon = \frac{\delta}{k}$, this approximation ratio is constant and lower-bounded by $(\frac{1}{2} - 2\delta)$.
### 3.2 Bounded curvature
With an additional assumption on the curvature of the submodular function $f$, it is possible to obtain a constant approximation ratio for any $\varepsilon$-approximately submodular function with constant $\varepsilon$. Recall that the curvature $c$ of a function $f : 2^N \rightarrow \mathbb{R}$ is defined by $c = 1 - \min_{a \in N} \frac{f_{N \setminus \{a\}}(a)}{f(a)}$, where $f_S(a) := f(S \cup \{a\}) - f(S)$ denotes the marginal contribution of $a$ to $S$. A consequence of this definition when $f$ is submodular is that for any $S \subseteq N$ and $a \in N \setminus S$ we have $f_S(a) \geq (1-c)f(a)$.
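For intuition, curvature can be computed by brute force on toy instances (our helper; illustration only):

```python
def curvature(f, N):
    """c = 1 - min over a in N of f_{N \ {a}}(a) / f({a})."""
    def marginal(S, a):
        return f(S | {a}) - f(S)
    return 1 - min(marginal(N - {a}, a) / f({a}) for a in N)

# Example: f(S) = min(|S|, 3) on 5 elements has curvature c = 1,
# since adding an element to the other four contributes nothing.
N = set(range(5))
f = lambda S: min(len(S), 3)
print(curvature(f, N))  # 1.0
```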
**Proposition 8.** For the problem $\max_{|S| \leq k} F(S)$ where $F$ is an $\varepsilon$-approximately submodular function which approximates a monotone submodular $f$ with curvature $c$, there exists a polynomial time algorithm which achieves an approximation ratio of $(1-c)(\frac{1-\varepsilon}{1+\varepsilon})^2$.
## References
[1] F. Bach. Structured sparsity-inducing norms through submodular functions. In *NIPS*, 2010.
[2] A. Badanidiyuru, S. Dobzinski, H. Fu, R. Kleinberg, N. Nisan, and T. Roughgarden. Sketching valuation functions. In *SODA*, pages 1025–1035. SIAM, 2012.
[3] M.-E. Balcan and N. J. Harvey. Learning submodular functions. In *Proceedings of the forty-third annual ACM symposium on Theory of computing*, pages 793–802. ACM, 2011.
[4] A. Belloni, T. Liang, H. Narayanan, and A. Rakhlin. Escaping the local minima via simulated annealing: Optimization of approximately convex functions. In *COLT*, pages 240–265, 2015.
[5] Y. Chen, S. H. Hassani, A. Karbasi, and A. Krause. Sequential information maximization: When is greedy near-optimal? In *COLT*, pages 338–363, 2015.
[6] A. Das, A. Dasgupta, and R. Kumar. Selecting diverse features via spectral relaxation. In *NIPS*, 2012.
[7] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In ICML, 2011.
[8] A. Defazio and T. Caetano. A convex formulation for learning scale-free networks via submodular relaxation. In NIPS, 2012.
[9] D. P. Dubhashi, V. Priebe, and D. Ranjan. Negative dependence through the FKG inequality. BRICS Report Series, 3(27), 1996.
[10] D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. JAIR, 42:427–486, 2011.
[11] R. Gomes and A. Krause. Budgeted nonparametric learning from data streams. In ICML, 2010.
[12] C. Guestrin, A. Krause, and A. Singh. Near-optimal sensor placements in Gaussian processes. In International Conference on Machine Learning (ICML), August 2005.
[13] A. Guillory and J. Bilmes. Simultaneous learning and covering with adversarial noise. In ICML, 2011.
[14] A. Hassidim and Y. Singer. Submodular optimization under noise. CoRR, abs/1601.03095, 2016.
[15] S. Hoi, R. Jin, J. Zhu, and M. Lyu. Batch mode active learning and its application to medical image classification. In ICML, 2006.
[16] R. K. Iyer and J. A. Bilmes. Submodular optimization with submodular cover and submodular knapsack constraints. In NIPS, pages 2436–2444, 2013.
[17] R. K. Iyer, S. Jegelka, and J. A. Bilmes. Curvature and optimal algorithms for learning and minimizing submodular functions. In NIPS, pages 2742–2750, 2013.
[18] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In KDD, 2003.
[19] A. Krause and C. Guestrin. Near-optimal observation selection using submodular functions. In National Conference on Artificial Intelligence (AAAI), Nectar track, July 2007.
[20] A. Krause and C. Guestrin. Nonmyopic active learning of gaussian processes. an exploration–exploitation approach. In ICML, 2007.
[21] A. Krause and C. Guestrin. Submodularity and its applications in optimized information gathering. ACM Trans. on Int. Systems and Technology, 2(4), 2011.
[22] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In ACL/HLT, 2011.
[23] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978.
[24] M. G. Rodriguez, J. Leskovec, and A. Krause. Inferring networks of diffusion and influence. ACM TKDD, 5(4), 2011.
[25] M. G. Rodriguez and B. Schölkopf. Submodular inference of diffusion networks from multiple trees. In ICML, 2012.
[26] Y. Singer and J. Vondrák. Information-theoretic lower bounds for convex optimization with erroneous oracles. In NIPS, pages 3186–3194, 2015.
[27] A. Singla, S. Tschiatschek, and A. Krause. Noisy submodular maximization via adaptive sampling with applications to crowdsourced image collection summarization. arXiv preprint arXiv:1511.07211, 2015.
[28] H. Song, R. Girshick, S. Jegelka, J. Mairal, Z. Harchaoui, and T. Darrell. On learning to localize objects with minimal supervision. In ICML, 2014.
[29] S. Tschiatschek, R. Iyer, H. Wei, and J. Bilmes. Learning mixtures of submodular functions for image collection summarization. In NIPS, 2014.
[30] J. Zheng, Z. Jiang, R. Chellappa, and J. Phillips. Submodular attribute selection for action recognition in video. In NIPS, 2014.
## A Proof of Theorem 4
Proof. The proof follows the same structure as the proof of Theorem 3 but uses a different construction for $f^H$ and $g$, since the ones defined in Section 2.1 are not coverage functions. For $H \subseteq N$ of size $h$, we define:
$$f^H(S) = \begin{cases} |S \cap H| + \alpha & \text{if } S \neq \emptyset \\ 0 & \text{otherwise} \end{cases} \quad \text{and} \quad g(S) = \begin{cases} \frac{|S|h}{n} + \alpha & \text{if } S \neq \emptyset \\ 0 & \text{otherwise} \end{cases}$$
It is clear that $f^H$ and $g$ can be realized as coverage functions; $|S \cap H|$ and $\frac{|S|h}{n}$ are additive functions which are a subclass of coverage functions. The offset of $\alpha$ can be obtained by having all sets defining $f^H$ and $g$ cover the same $\alpha$ elements of the universe.
We now relate the maxima of $g$ and $f^H$: the maximum of $f^H$ is attained when $S$ is a subset of $H$ of size $k$ and is equal to $k + \alpha \geq k$. The value of $g$ only depends on $|S|$ and is equal to $\frac{kh}{n} + \alpha$ when $|S|$ is of size $k$. Hence:
$$\max_{|S| \leq k} g(S) \leq \left( \frac{\alpha}{k} + \frac{h}{n} \right) \max_{|S| \leq k} f^H(S) \tag{6}$$
We now show a concentration result similar to (5): let $H$ be drawn uniformly at random among sets of size $h$, then for any $S$ and $0 < \varepsilon < 1$:
$$\mathbb{P}_H \left[ (1 - \varepsilon)f^H(S) \leq g(S) \leq (1 + \varepsilon)f^H(S) \right] \geq 1 - 2^{-\Omega(\varepsilon^3 \alpha h/n)} \tag{7}$$
We will consider two cases depending on the size of $|S|$. When $|S| \leq \varepsilon \alpha$, the inequality is deterministic. For the right-hand side:
$$(1 + \varepsilon)f^H(S) \geq (1 + \varepsilon)\alpha \geq \alpha + |S| \geq \alpha + \frac{|S|h}{n} = g(S)$$
where the first inequality used $|S \cap H| \geq 0$, the second inequality used the bound on $|S|$ and the last inequality used $h \leq n$. For the left-hand side:
$$(1 - \varepsilon)f^H(S) = (1 - \varepsilon)\alpha + (1 - \varepsilon)|S \cap H| \leq \alpha - \varepsilon \alpha + |S| \leq \alpha \leq g(S)$$
where the first inequality used $1 - \varepsilon \leq 1$ and $|S \cap H| \leq |S|$ and the second inequality used the bound on $|S|$.
Let us now consider the case where $|S| \geq \varepsilon \alpha$. This case follows directly by applying Lemma 1 after observing that when $|S| \geq \varepsilon \alpha$, $\mu \geq \frac{\varepsilon \alpha h}{n}$.
We can now conclude the proof of Theorem 4 by combining (6) and (7) in the exact same manner as in the proof of Theorem 3, after setting $h = k = n^{1-\beta/2}$ and $\alpha = n^{1-\beta}$. □
## B Proof of Lemma 1
The Chernoff bound stated in Lemma 1 does not follow from the standard Chernoff bound for independent variables. However, we use the fact that the Chernoff bound also holds under the weaker negative association assumption.
Definition 9. Random variables $X_1, \ldots, X_n$ are negatively associated iff for every $I \subseteq [n]$ and every non-decreasing functions $f : \mathbb{R}^I \to \mathbb{R}$ and $g : \mathbb{R}^{\bar{I}} \to \mathbb{R}$:
$$\mathbb{E}[f(X_i, i \in I)g(X_j, j \in \bar{I})] \leq \mathbb{E}[f(X_i, i \in I)]\mathbb{E}[g(X_j, j \in \bar{I})]$$
Claim 10 ([9]). Let $X_1, \ldots, X_n$ be $n$ negatively associated random variables taking values in $[0, 1]$. Denote by $\mu = \sum_{i=1}^n \mathbb{E}[X_i]$ the expected value of their sum, then for any $\delta \in [0, 1]$:
$$\mathbb{P}\left[ \sum_{i=1}^n X_i > (1 + \delta)\mu \right] \leq e^{-\delta^2 \mu/3}$$
$$\mathbb{P}\left[ \sum_{i=1}^n X_i < (1 - \delta)\mu \right] \leq e^{-\delta^2 \mu/2}$$
Claim 11 ([2]). Let $H$ be a random subset of size $h$ of $[n]$ and let us define the random variables $X_i = 1$ if $i \in H$ and $X_i = 0$ otherwise. Then $X_1, \ldots, X_n$ are negatively associated.
The proof of Lemma 1 is now immediate after observing that $|S \cap H|$ can be written $|S \cap H| = \sum_{i \in S} X_i$ where $X_i$ is defined as in Claim 11. Since $\mathbb{P}[X_i = 1] = \frac{h}{n}$ we have $\mu = \frac{|S|h}{n}$.
## C Proofs for Section 3.1
The full description of the greedy algorithm used in Theorem 5 can be found in Algorithm 1.
Algorithm 1 APPROXIMATEGREEDY
1: initialize $S \leftarrow \emptyset$
2: while $|S| < k$ do
3: \hspace{1em} $S \leftarrow S \cup \arg\max_{a \in N \setminus S} F(S \cup \{a\})$.
4: end while
5: return $S$
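For reference, here is a runnable Python transcription of Algorithm 1 (our sketch; the helper names are ours), shown on a small exact coverage instance. Replacing `F` with a perturbed oracle gives the $\varepsilon$-approximate setting analyzed in Theorem 5.

```python
def approximate_greedy(F, N, k):
    """Algorithm 1: repeatedly add the element maximizing the oracle value."""
    S = set()
    while len(S) < k:
        best = max(N - S, key=lambda a: F(S | {a}))
        S.add(best)
    return S

# Example with an exact coverage objective over four subsets.
subsets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
def F(S):
    return len(set().union(*(subsets[i] for i in S))) if S else 0

print(approximate_greedy(F, set(range(4)), k=2))  # {0, 2}, covering 6 elements
```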
Proof of Proposition 6. Fix $\beta > 0$ and $\varepsilon = \frac{1}{k^{1-\beta}}$. Let us consider an additive function $f$ where the ground set $N$ can be written $N = A \cup B \cup C$ with:
- $A$ is a set of $\frac{1}{2\varepsilon}$ elements of value 2.
- $B$ is a set of $\frac{n}{2} - \frac{1}{4\varepsilon}$ elements of value $\frac{1}{n}$.
- $C$ is a set of $\frac{n}{2} - \frac{1}{4\varepsilon}$ elements of value 1.
We now define the following $\varepsilon$-approximately submodular function $F$:
$$F(S) = \begin{cases}
\frac{1}{\varepsilon} & \text{if } S = A \cup \{c\} \text{ with } c \in C \\
f(S) & \text{otherwise}
\end{cases}$$
$F$ is an $\varepsilon$-approximately submodular function. Indeed, the only case where $F$ differs from $f$ is when $S = A \cup \{c\}$ with $c \in C$. In this case $F(S) = \frac{1}{\varepsilon} \leq \frac{1}{\varepsilon} + 1 = f(S)$ and:
$$F(S) = \frac{1}{\varepsilon} \geq (1 - \varepsilon) \left( \frac{1}{\varepsilon} + 1 \right) = (1 - \varepsilon)f(S)$$
When $\varepsilon < \frac{1}{2}$, the greedy algorithm selects all elements from $A$ and spends the remaining budget on $B$ and obtains a value of $\frac{1}{\varepsilon} + \frac{1}{n}(k - k^{1-\beta}/2) = O(k^{1-\beta})$ when given $F$ as input. However, it is clear that the optimal solution for $F$ is to select all elements in $A$ and spend the remaining budget on $C$ for a value of $\frac{1}{\varepsilon} + (k - k^{1-\beta}/2) = \Omega(k)$. The resulting approximation ratio is $O\left(\frac{1}{k^{\beta}}\right)$ which converges to zero as the budget constraint $k$ grows to infinity. □
Theorem 7 uses a slight modification of Algorithm 1 to accommodate the matroid constraint. The full description is given in Algorithm 2.
Algorithm 2 MATROIDGREEDY
1: initialize $S \leftarrow \emptyset$
2: while $N \neq \emptyset$ do
3: \hspace{1em} $x^* \leftarrow \arg\max_{x \in N} F(S \cup \{x\})$
4: \hspace{1em} if $S \cup \{x^*\} \in I$ then
5: \hspace{2em} $S \leftarrow S \cup \{x^*\}$
6: \hspace{1em} end if
7: \hspace{1em} $N \leftarrow N \setminus \{x^*\}$
8: end while
9: return $S$
Proof of Theorem 7. Let us consider $S^* \in \arg\max_{S \in I} f(S)$. W.l.o.g. we can assume that $S^*$ is a basis of the matroid ($|S^*| = k$). It is clear that the set $S$ returned by Algorithm 2 is also a basis. By the basis exchange property of matroids, there exists a bijection $\phi : S^* \to S$ such that:
$$S \setminus \{\phi(x)\} \cup \{x\} \in I \quad \text{for all } x \in S^*$$
Let us write $S^* = \{e_1^*, \ldots, e_k^*\}$ and $S = \{e_1, \ldots, e_k\}$ where $e_i = \phi(e_i^*)$ and define $S_i = \{e_1, \ldots, e_i\}$ then:
$$f(S^*) \leq f(S) + \sum_{i=1}^{k} f_S(e_i^*) \leq f(S) + \sum_{i=1}^{k} f_{S_{i-1}}(e_i^*)$$
$$\leq f(S) + \sum_{i=1}^{k} \left[ \frac{1+\varepsilon}{1-\varepsilon} f(S_i) - f(S_{i-1}) \right]$$
$$= f(S) + \sum_{i=1}^{k} [f(S_i) - f(S_{i-1})] + \frac{2\varepsilon}{1-\varepsilon} \sum_{i=1}^{k} f(S_i)$$
$$\leq 2f(S) + \frac{2k\varepsilon}{1-\varepsilon} f(S)$$
where the first two inequalities used submodularity, the third used approximate submodularity together with the greedy selection rule, and the fourth used monotonicity. The result then follows by applying the definition of $\varepsilon$-approximate submodularity.
## D Proofs for Section 3.2
The proof of Proposition 8 follows from Lemma 12 which shows how to construct an additive approximation of $F$.
Lemma 12. Let $F$ be an $\varepsilon$-approximately submodular function which approximates a submodular function $f$ with bounded curvature $c$. Let $F_a$ be the function defined by $F_a(S) = \sum_{e \in S} F(e)$ then:
$$\frac{1-\varepsilon}{1+\varepsilon} F(S) \leq F_a(S) \leq \frac{1}{1-c} \frac{1+\varepsilon}{1-\varepsilon} F(S), \quad S \subseteq N$$
Proof. For the left-hand side:
$$F_a(S) = \sum_{e \in S} F(e) \geq (1-\varepsilon) \sum_{e \in S} f(e) \geq (1-\varepsilon) f(S) \geq \frac{1-\varepsilon}{1+\varepsilon} F(S)$$
where the first and third inequalities used approximate submodularity and the second inequality used that submodular functions are subadditive.
For the right-hand side, let us enumerate $S = \{e_1, \ldots, e_\ell\}$ and write $S_i = \{e_1, \ldots, e_i\}$ (with $S_0 = \emptyset$ by convention). Then:
$$F_a(S) = \sum_{i=1}^{\ell} F(e_i) \leq (1+\varepsilon) \sum_{i=1}^{\ell} f(e_i) \leq \frac{1+\varepsilon}{1-c} \sum_{i=1}^{\ell} f_{S_{i-1}}(e_i) = \frac{1+\varepsilon}{1-c} f(S) \leq \frac{1}{1-c} \frac{1+\varepsilon}{1-\varepsilon} F(S)$$
where the first and last inequalities used approximate submodularity, and the second inequality used the curvature assumption.
Proof of Proposition 8. Let us denote by $S_a$ a solution to $\max_{|S| \leq k} F_a(S)$ where $F_a$ is defined as in Lemma 12. Since $F_a$ is an additive function, $S_a$ can be found by querying the value oracle for $F$ at each singleton and selecting the top $k$. The approximation ratio then follows directly from Lemma 12.
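A minimal sketch (ours) of the algorithm behind Proposition 8: query each singleton once and return the $k$ largest, which maximizes the additive surrogate $F_a$ of Lemma 12 using only $n$ oracle calls.

```python
import heapq

def top_k_singletons(F, N, k):
    """Maximize F_a(S) = sum over e in S of F({e}) subject to |S| <= k."""
    return set(heapq.nlargest(k, N, key=lambda e: F({e})))

# Example: an additive f (curvature c = 0) observed exactly (eps = 0).
values = {0: 5.0, 1: 1.0, 2: 3.0, 3: 2.0}
F = lambda S: sum(values[e] for e in S)
print(top_k_singletons(F, set(values), k=2))  # {0, 2}
```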
President’s Notes for December 2016
Our 2m repeater: The noise continues to be quite variable, with some mornings having no noise and some having continuous noise. Each work day, I can see the lights on the hotel if I take Michigan Avenue into my office. With the loss of the leaves from the trees, the hotel is actually visible from the parking lot of the Ford Engineering Laboratory, known by many retirees by its former names of EEE or POEE building. The lights are on and functioning whether or not the noise is present. Humid or dry, warm or cold, I have not found a pattern with the weather that predicts this noise. Even though this goes on, please join us for the Sunday night net. We have been able to conduct the net regularly since Murray went up and made some updates in October.
Back on the air over the HF bands: I reassembled my station and got back on the air in November for the ARRL SSB Sweepstakes. The station had been in a state of disassembly over the summer, as I had unhooked my radio in order to connect and test the club Yaesu FT-991, its case setup, and the issues we had with the antenna tuner. After everything was set up and tested, I did not have the time to reconnect my own station. So, for fall, I dug my antenna tuner out of the club case (I had put it in as a back-up while the club tuner was out for repair), hooked up the antennas and grounds, and got everything working. Next, I tackled the MFJ Voice Keyer. I thought we had used it in the past on the Icom radios, but it was completely set up for the Yaesu FT-990 radio. I dug into the case and reset all of the jumpers, and found the jumper to supply voltage to the Icom microphone. Icom
microphones need DC power, and the MFJ keyer blocked that with a capacitor. However, MFJ also supplies its own optional power, and I managed to get that hooked up and working. After all of this, I was ready for the Sweeps! I enjoyed the sweepstakes, managing 63 out of the 83 sections, with 142 contacts over 6 hours of operating. I worked 15m, 20m, 40m, and 80m during the contest. Not bad for sunspots at a whopping 28 that weekend! However, my quest for a sweep was dashed by a lack of both close-in and far-away sections. I did not log Hawaii, Alaska, Northwest Territories, Puerto Rico, or the Virgin Islands, and I also missed almost all of the nearby Ontario sections. Nevertheless, I enjoyed hunting new sections on the bands and calling CQ with the help of the voice keyer. I am ready for a few more operating events this winter down in my basement shack.
73,
David Treharne
N8HKU
FARL Club President
December Christmas Dinner!
We are heading back to the Mexican Fiesta Restaurant on Ford Rd at Telegraph in Dearborn Heights for our annual dinner. We will meet up at 6pm on Thursday, December 8th for our event. Look for an email to RSVP so we can get the best table size. Remember to avoid parking in the IHOP parking lot, as they will ticket and tow!
Ford Amateur Radio League (AKA: The Tin Lizzy Club)
Club Meeting Minutes – November 10, 2016
Call to order at 6:02 PM by Dave. The earlier start time worked out fine, with most everyone there on time! A knock was heard and we answered.
1. **Minutes from prior meeting** reviewed, corrected for the spelling of Roger’s name (MS Word does not like the spelling) and approved.
2. **Treasurers Report**: Pat reported that the balance was (not shown) with interest (not shown).
- Dues of $20 were received from the prior meeting.
- The FT-990 radio sold on eBay and was shipped to the buyer. The final amount to the league was $468.06 after shipping and PayPal costs; details are in the meeting agenda, and the funds will be in the account.
- A cost of $15.00 for the domain name was approved, to be paid to Roger for the annual website cost.
3. **Board of Directors Discussion**: no comments
4. **Committee Reports**:
A. **Repeater**: noise has lessened, and the hum is not as bad as it was before the metal-box shielding work was done.
Murray (repeater coordinator) has had site access again in the last month and has worked on the shield and ground, and a full 2m shutdown relay was added. It is working somewhat better than before.
Some issues still emerge: the repeater has been shut off by remote for noise, but it comes back on again on its own. The time announcement is not always correct, though sometimes it is correct again later.
Battery backup power has been added again, working with the relay. Murray also added an RS-232 port for better access to the connections and service.
Site access is good so far with the new engineering director, who may have some interest in improving the lightning installation. Dave has some ferrite beads to suggest, and discussion may proceed once we see what the site allows and wants.
B. **Net**: going on Sunday nights; it can be noisy, but digital works.
C. **Education and Training**: none upcoming.
D. **Communications and newsletter**: send items to Rajiv.
E. Historian duties have transitioned to Tye Winkel; no discussion at the meeting.
F. The radio program for new hams has two kits ready, with antenna, radio, and cable.
5. Unfinished and Current Business:
- Club liability insurance was discussed. The renewal was sent in the mail and received at the box, but somehow the policy lapsed. Discussion was that, for $320, the policy can be renewed per quote at the prior amount of coverage. As it is necessary to cover the operations, a motion to approve was made and approved.
6. New Business
- The party for the year was discussed and locations reviewed, along with the meeting date. The plan for 12/8 is the normal 2nd Thursday, at the Mexican Fiesta in Dearborn Heights (Telegraph and Ford Road corner, a block west on Telegraph). Parking was discussed; it is tight, but you can park at the AutoZone, the bank, and on the streets around the restaurant.
- Tom Bray, a former member who returned to Ford after Visteon, has rejoined.
Program: "GPS: How It Works" was presented by Dave Treharne, following on the earlier APRS topic. He thought this would be interesting since the prior APRS program with the Yaesu Fusion radio used GPS and had some issues of operation; how GPS works, its signal, the satellites, and its function made for a long discussion.
Adjourned at 7:56 PM.
Attendance:
1. Dave Treharne, N8HKU
2. Malcolm Lunn, KD8TPO
3. Jessie Trimble, AC8WP
4. Gerry Trimble, KB8HZ
5. Bob Stead, K8ETE
6. Pat Quinn, WD8JDZ
7. Rodney Deyo, K8SGL
8. Roger Reini, KD8CSE
9. Sam Wells, KD8YTR
10. Al Habbal, W8AMH
11. Tom Bray, W8TJB, ex WB8COX
12. Robert Rusty Eizen, N8RGI
ThumbSat: Here is some information from a group that is working to launch small satellites, called ThumbSats, and working to distribute and sell small SDR receivers to send the data back to their mission control. The receiver is a neat unit, like an SDR dongle. For the schools, they have an automatic azimuth/elevation antenna pointing system made from light, small motors and 3D-printed gears.
Their first launch is planned on a Star 1 satellite launch for Dec 15th, 2016. Their 2nd satellite is planned to launch on March 15, 2017. They wanted to share their designs and ideas with as many clubs as possible, per below.
Dave, N8HKU
ThumbSat - Wade VanLandingham, W4VNO
My name is Wade VanLandingham and I manage a non-profit organization called ThumbNet. (http://www.thumbsat.com/thumbnet) To quote part of our own page: ThumbNet was born to encourage students around the world to look up at the stars and to give them a chance to feel that they are part of something larger. The hardware to track and monitor radio signals from satellites in orbit is donated to the schools by ThumbNet, and with over 225 volunteer groups in more than 72 countries, we're having an immediate and positive effect on the lives of hundreds of students around the globe!
I'm not sure if your club puts out a newsletter to your members around the Dearborn area, but I know from experience that finding content can sometimes be tricky month after month. I have a small press release about
a new receiver for Software Defined Radio (SDR) that we developed for ThumbNet, that may interest your members. It may make a nice, small article for you, just in time for Christmas.
ThumbNet's N3 SDR receiver is unique in its design and it is competitive in cost with the majority of the SDR dongles on Amazon or eBay. We design and produce updates of the receiver to meet the needs of the ThumbNet network, (This is the 3rd generation.) but due to minimum order quantities from the manufacturer, we end up with several hundred more than we need each time we do so. In order to not have thousands of SDR radios laying around, I try to offer them to others that share our education efforts (like Girl Scouts) or simply may have an interest in working with them (like HAM clubs). The small proceeds that remain go right back into the project to support its continued outreach and growth, and not into someone's retirement fund.
Due to the risk of viruses, I would not send you an attachment to open via email, but your members might like to visit the page here: http://www.thumbsat.com/thumbnet and http://www.nongles.com to learn more and consider supporting ThumbNet. Reviews are also beginning to appear on various SDR blogs such as http://www.rtl-sdr.com.
Club Repeater Information
The Ford Amateur Radio League operates 3 club repeaters under the club K8UTT license. All the repeaters are located in the Dearborn, MI area near the Southfield Freeway. All repeaters are open for members and guests to operate.
| Repeater | Output Freq | Input Freq | Tone |
|-------------------|-------------|--------------|------------|
| 2 M Repeater | 145.270 | -600 KHz | 100 Hz PL |
| 1 1/4 M Repeater | 224.520 | -1.6 MHz | 100 Hz PL |
Club Net: 8pm on Sunday, 2 and 1-1/4 Meter Repeaters!
Classes and Exams
The following amateur radio clubs conduct license exams throughout the year. Many clubs allow walk-ins, but pre-registration will ensure an exam is available for you when you attend.
| Club Name | Contact Person | Phone | Email |
|-------------------|----------------|--------------|------------------|
| Ford Amateur Radio League | Bill Boyke | 313-805-8877 | email@example.com |
| South Lyon ARC | Christian Anderson | 248-437-3088 | firstname.lastname@example.org |
| Motor City ARC | Don Novak | 734-281-7030 | email@example.com |
| Hazel Park ARC | Jerry Begel | 248-543-2284 | firstname.lastname@example.org|
| USECA ARC | Joseph Kennedy | 586-977-7222 | email@example.com |
| ARROW Assn | Roger Place | 734-663-4625 | firstname.lastname@example.org|
Some of the above clubs also conduct license classes. Please contact them for additional information.
2016-2017 Club Officers
Please contact any of the officers for information regarding the Ford Amateur Radio League, or go to the club website at www.k8utt.org for current events and activities.
| Position | Name | Call Sign | Phone |
|-------------------|-----------------|-------------|-------------|
| President | Dave Treharne | N8HKU | 734-476-1666|
| Vice President | Roger Reini | KD8CSE | 734-728-1509|
| Treasurer | Pat Quinn | WD8JDZ | 734-729-1993|
| Secretary | Mac | KD8TPO | |
| Repeater Chair | Murray Scott | KE8UM | 248-743-1704|
| K8UTT Trustee | Dave Treharne | N8HKU | 734-476-1666|
| Activity Chair | Bill Boyke | N8OZV | 313-805-8877|
| Bolt Editor | Rajiv Paul | KD8LHF | 313-244-2515|
Club Meetings
The Ford Amateur Radio Club meets on the second Thursday of each month, except for Christmas and the summer months July and August. The meetings are held at 6:30 PM at the Ford Engine Manufacturing & Development Offices (EMDO) building. EMDO (located at 17000 Southfield Rd, Allen Park, MI) is south of I-94 on the east side of Southfield just north of the Allen Park Municipal offices. Park in the front of the building and come into the main lobby at the side. Knock on the inside door on the right if no one is standing there to let you in.
Next Club Meeting: December 08, 2016 at 6:00PM
Topic: Annual Christmas Dinner: Mexican Fiesta Restaurant, Ford Rd at Telegraph.
The Ford Amateur Radio League
PO Box 2711
Dearborn, MI 48123
THE PRESIDENCY
Before: Judge Chile Eboe-Osuji, President
Judge Robert Fremr, First Vice-President
Judge Howard Morrison
SITUATION IN THE DEMOCRATIC REPUBLIC OF THE CONGO
IN THE CASE OF
THE PROSECUTOR V. THOMAS LUBANGA DYILO
Public
Additional Observations by Judge Perrin de Brichambaut
To be notified in accordance with regulation 31 of the *Regulations of the Court* to:
**The Office of the Prosecutor**
**Counsel for the Defence**
Ms Catherine Mabille
Ms Jean-Marie Biju-Duval
**Legal Representatives of the Victims**
Mr Luc Walleyn
Mr Franck Mulenda
Ms Carine Bapita Buyangandu
Mr Joseph Keta Orwinyo
Mr Paul Kabongo Tshibang
**Legal Representatives of Applicants**
**Unrepresented Applicants**
(Participation/Reparation)
**The Office of Public Counsel for Victims**
Ms Paolina Massida
Ms Sarah Pellet
**The Office of Public Counsel for the Defence**
**States’ Representatives**
**Trust Fund for Victims**
---
**REGISTRY**
**Registrar**
Mr Peter Lewis
**Other**
Trial Chamber II
Plenary of Judges
I. PROCEDURAL HISTORY
1. On 10 April 2019, the Defence for Mr Lubanga filed its ‘Requête urgente de la Défense aux fins de récusation de M le Juge Marc Perrin de Brichambaut’, requesting the Presidency to disqualify Judge Perrin de Brichambaut in reparation proceedings in *The Prosecutor v. Thomas Lubanga Dyilo*.
2. On 16 May 2019, Judge Perrin de Brichambaut filed written observations in response to the request.
3. On 23 May 2019, the Defence for Mr Lubanga filed its ‘Requête de la Défense aux fins de solliciter l’autorisation de déposer une réplique à la Réponse de M. le Juge Marc Perrin de Brichambaut’, requesting leave to reply to Judge Perrin de Brichambaut’s Observations and to admit an audio-visual recording of the 17 May 2017 Presentation.
4. On 11 June 2019, the *Ad Hoc* Presidency, in consultation with the plenary of judges, rendered its decision, authorising the Defence to communicate a copy of the audio-visual recording to it.
II. INTRODUCTION
5. The decision of the *Ad Hoc* Presidency of 11 June 2019 authorised the Defence for Mr Lubanga to introduce an additional piece of evidence at a late stage of the proceedings, *i.e.* just a few days before the Plenary scheduled for 17 June 2019. This decision was adopted pursuant to article 41(2) of the Statute and rule 34(2) of the Rules of Procedure and Evidence and, considering that the latter provision explicitly allows for the Judge in question to provide observations, it is my view that I am fully entitled to submit these additional observations as part of my initial observations addressed to the Presidency.
6. Seeing as rule 34(2) of the Rules of Procedure and Evidence refers explicitly to “evidence”, any request lodged pursuant to this provision must *per analogiam* comply with the relevant requirements related to evidence contained in the Statute and, more generally, the fair trial rights set forth in the Statute. In any event, the *Ad Hoc*
Presidency decision raises issues implicating rights so fundamental that they must be respected in any type of judicial proceedings. On this basis, I am of the view that the decision of the *Ad Hoc* Presidency contravenes such basic notions of fairness in the following ways.
**III. OBSERVATIONS**
A. First, the procedure leading to the adoption of the decision of the *Ad Hoc* Presidency of 11 June 2019 is, as such, incompatible with any rational notion of fairness.
7. As indicated by a number of Judges consulted by the *Ad Hoc* Presidency, it has been the constant practice of this Court to limit the consideration of the Plenary to the request for disqualification and the observations filed by the Judge in question. The decision by the *Ad Hoc* Presidency to admit an additional piece of evidence through a request for leave to reply fundamentally alters this established way of conducting such proceedings. This is a novelty before the Court for which the *Ad Hoc* Presidency provides no justification save a reference to a previous finding in another case. However, in that particular instance, the Plenary denied a request for leave to reply without admitting additional evidence. In admitting the additional piece of evidence through the request for leave to reply by the Defence of Mr Lubanga, the *Ad Hoc* Presidency is reversing its previous jurisprudence without a proper basis or explicit justification. The decision by the *Ad Hoc* Presidency also omits a critical consideration from the previous decision it invoked, namely: “[…] the present request for leave to reply to the Submission was, in any case, filed on the eve of the plenary session […].”\(^1\) Had the *Ad Hoc* Presidency consistently applied this principle, it would have had to dismiss the request for leave to reply, including any additional evidence attached to this request, in these proceedings as well.
---
\(^1\) *The Prosecutor v. Jean Pierre-Bemba Gombo et al.*, Decision of the Plenary of Judges on the Defence Applications for the Disqualification of Judge Cuno Tarfusser from the case of *The Prosecutor v. Jean-Pierre Bemba Gombo, Aimé Kilolo Musamba, Jean-Jacques Mangenda Kabongo, Fidèle Babala Wandu and Narcisse Arido*, 20 June 2014, ICC-01/05-01/13-511-Anx, para. 13.
8. The decision of the *Ad Hoc* Presidency contravenes the notion of fairness of process in three ways. First, the Judge in question must be provided with an opportunity to respond to the *request for leave to reply*. Second, and more importantly, the person against whom allegations are made must also be provided with an opportunity to challenge the evidence introduced (see, for example, article 67(1)(d) of the Statute).
9. Third, and as a consequence of the two aforementioned violations, the decision contravenes article 74(2) of the Statute, applied *per analogiam*, stating that a decision by the Court must be based “only on evidence submitted and discussed before it” (emphasis added). It has always been understood that the word “discussed” means that all parties must have been afforded the opportunity to make submissions on the evidence. This principle of the equality of arms between parties is widely regarded as a fundamental element of the right to due process, including in the international criminal tribunals.\(^2\) As the International Criminal Tribunal for the former Yugoslavia emphasized, “It is well established in the jurisprudence of this Tribunal that equality of arms [...] mean[s] [...] that each party must have a reasonable opportunity to defend its interests under conditions which do not place him at a substantial disadvantage *vis-à-vis* his opponent”.\(^3\)
---
\(^2\) For example, the United Nations Human Rights Committee emphasizes in General Comment No. 32 that due process under article 14 of the International Covenant on Civil and Political Rights requires equality of arms between the parties. “The right to equality before courts and tribunals also ensures equality of arms. This means that the same procedural rights are to be provided to all the parties unless distinctions are based on law and can be justified on objective and reasonable grounds, not entailing actual disadvantage or other unfairness to the defendant. There is no equality of arms if, for instance, only the prosecutor, but not the defendant, is allowed to appeal a certain decision. *The principle of equality between parties applies also to civil proceedings, and demands, inter alia, that each side be given the opportunity to contest all the arguments and evidence adduced by the other party.*” (emphasis added, footnotes omitted). *See also, e.g., Dudko v. Australia*, Human Rights Committee, 23 July 2007, U.N. Doc. CCPR/C/90/D/1347/2005, para. 7.4 (“It is for the State party to show that any procedural inequality was based on reasonable and objective grounds, not entailing actual disadvantage or other unfairness to the author.”); Article 6 of the European Convention on Human Rights considers the principle of equality of arms as inherent to the concept of a fair trial. *See “Guide on Article 6 of the European Convention on Human Rights”*, paras 327-328. *See also, e.g., Borgers v. Belgium*, [1991] ECHR 46, 12005/86, para. 24; *Zahirović v. Croatia*, [2013] ECHR 58590/11, paras 47-50.
\(^3\) *The Prosecutor v. Prlić et al.*, Decision on Translation, 4 September 2008, IT-04-74-AR73.9, para. 29. *See also The Prosecutor v. Šešelj*, Contempt Appeal Judgment, 30 May 2013, IT-03-67-R77.4-A, para. 37 (“[T]he Appeals Chamber recalls that a trial chamber continues to abide by the principle of equality of arms […]”).
The *Ad Hoc* Presidency ignores this basic guarantee, as I have not been provided with any opportunity to make submissions regarding these matters.\(^4\)
B. Second, the relevant provisions invoked by the Ad Hoc Presidency do not constitute an adequate basis to admit additional evidence.
10. The Ad Hoc Presidency’s conclusion that “[t]he procedural requirements of a disqualification request are clearly established by article 41(2) of the Rome Statute and rule 34(2) of the Rules [...]” and that “[o]bservations [...] submitted pursuant to these provisions are not simply a response within the meaning of regulation 24 of the Regulations” should have logically led it to the conclusion that the proposed additional evidence cannot be admitted on this basis. The Defence for Mr Lubanga relied upon regulation 24(5) of the Regulations of the Court and presented its arguments based on the specific wording of that provision. The Ad Hoc Presidency concluded that that regulation is “ill-suited”. The only possible outcome, therefore, was to dismiss the request for leave to reply, including any additional evidence. By the same token, neither article 41(2) of the Rome Statute nor rule 34(2) of the Rules of Procedure and Evidence provides for a reply or the admission of additional evidence. Rather, the wording of rule 34(2) of the Rules suggests that, following the submission
---
\(^4\) For examples of violations of the equality of arms principle see, e.g., Jansen-Gielien v. Netherlands, Human Rights Committee, 3 April 2001, U.N. Doc. CCPR/C/71/D/846/1999, para. 8.2 (“[I]t was the duty of the Court of Appeal, which was not constrained by any prescribed time limit to ensure that each party could challenge the documentary evidence which the other filed or wished to file and, if need be, to adjourn proceedings. In the absence of the guarantee of equality of arms between the parties in the production of evidence for the purposes of the hearing, the Committee finds a violation of article 14, paragraph 1 of the Covenant.”); Äärelä and Nakkäläjärvi v. Finland, Human Rights Committee, 4 February 1997, U.N. Doc. CCPR/C/73/D/779/1997, para. 7.4 (“[T]he Committee notes that it is a fundamental duty of the courts to ensure equality between the parties, including the ability to contest all the argument and evidence adduced by the other party. The Court of Appeal states that it had “special reason” to take account of these particular submissions made by the one party, while finding it manifestly unnecessary to invite a response from the other party. In so doing, the authors were precluded from responding to a brief submitted by the other party that the Court took account of in reaching a decision favourable to the party submitting those observations. The Committee considers that these circumstances disclose a failure of the Court of Appeal to provide full opportunity to each party to challenge the submissions of the other, thereby violating the principles of equality before the courts and of fair trial contained in article 14, paragraph 1, of the Covenant.”).
of the request together with any evidence, no further submissions can be made except those of the judge in question.
C. Third, even if article 41(2) of the Rome Statute read together with rule 34(2) of the Rules of Procedure and Evidence provided for the possibility of submitting additional evidence, such evidence should have been rejected for failing to comply with the relevant requirements of the Rome Statute.
11. As a general rule, the introduction of evidence at an advanced stage of the proceedings is predicated upon two conditions: (1) the proposed evidence must be new, that is, evidence that was not previously available, and (2) the proposed evidence could not have been discovered through due diligence.\(^5\) In the instant case, the Defence for Mr Lubanga clearly indicated that it had the video in its possession before the request for disqualification was filed. Nothing prevented the Defence from seeking to introduce it when it filed the original request. The proposed evidence is thus not new and, by failing to place it before the Plenary immediately, the Defence has forfeited its entitlement to rely on it.
12. In addition, in admitting the additional evidence, the *Ad Hoc* Presidency failed to consider “any prejudice that such evidence may cause to a fair trial” (article 69(4) of the Statute). The transcripts of the presentation in question are already available and there is consequently no need for the video-recording. The only purpose of introducing the video-recording at this late stage of the proceedings is to taint the judges’ minds and broaden the scope of the argument as initially presented by the Defence for Mr Lubanga, which addressed only three brief segments of this recording, one of which appeared only in the course of the questions.
\(^5\) *The Prosecutor v. Lubanga*, Judgment on the appeal of Mr Thomas Lubanga Dyilo against his conviction, 1 December 2014, ICC-01/04-01/06 A 5, para. 50; see also *The Prosecutor v. Blagojević and Jokić*, Decision on Appellant Vidoje Blagojević’s Motion for Additional Evidence Pursuant to Rule 115, 21 July 2005, IT-02-60-A, paras 6-7 (“In order to demonstrate that evidence was not available at trial, a party seeking its admission at the appeal stage must show not only that he did not possess the evidence during the trial proceedings, but also that he could not have obtained it through the exercise of due diligence.”).
D. Finally, the effect of the *Ad Hoc* Presidency decision is that the person against whom allegations have been made has not been afforded the opportunity to have the last word.
13. In general, any party facing allegations is expected to have the last word in judicial proceedings. By way of example, the defence has the right to be the last to examine a witness (rule 140(2)(d) of the Rules of Procedure and Evidence). It is also the defence’s presentation of evidence that closes the evidentiary phase of the proceedings. During closing arguments the defence has the opportunity to speak last (rule 141(2) of the Rules of Procedure and Evidence). This is also implied in rule 34(2) of the Rules of Procedure and Evidence: “[t]he request shall state the grounds and attach any relevant evidence, and shall be transmitted to the person concerned, *who shall be entitled to present written submissions*” (emphasis added). This specific wording and the absence of any reference to additional submissions and/or evidence clearly suggests that this rule was drafted in such a way to ensure that the judge in question is entitled to have the last word. However, in the present case, the party *making* the allegations has had the last word by the decision of the *Ad Hoc* Presidency allowing the Defence to submit additional evidence through a request for leave to reply.
IV. CONCLUSION
14. In sum, the decision of 11 June 2019 entails the following violations of basic principles of fairness applicable to any judicial proceedings:
- the procedure leading to the adoption of the decision of the *Ad Hoc* Presidency of 11 June 2019: (i) departed from previous jurisprudence without providing any reasoning and without legal basis; and (ii) entailed a denial of the right to present observations on the request for leave to reply, including on the proposed additional evidence;
- neither regulation 24(5) of the Regulations of the Court nor article 41(2) of the Rome Statute in combination with rule 34(2) of the Rules provide an adequate basis for the admission of additional evidence in these proceedings;
- the proposed additional evidence should have been rejected as: (i) it is not new and could have been introduced by the Defence for Mr Lubanga at the time the request for disqualification was lodged; and (ii) the introduction of additional evidence at a late stage of the proceedings is highly prejudicial given that the transcripts are already available, and constitutes a violation of article 69(4) of the Statute; and
- the obligation to afford the person against whom allegations are made the opportunity to have the last word has not been respected.
15. As a result of the above, the present proceedings are vitiated by serious procedural defects. Consequently, the decision of the *Ad Hoc* Presidency must be considered a nullity and the additional evidence introduced and admitted by way of this decision should not be considered during the Plenary.
16. This Court and its judges, whose task it is to uphold the rule of law and ensure fair proceedings, might also want to consider the potential consequences of the standards set by the *Ad Hoc* Presidency for other present and future cases concerning the disqualification of Judges. It would not be in the interests of the Court to open the door to systematic harassment of Judges, leading to prolonged procedural and evidentiary debates.
17. In the event that the Plenary, despite the serious aforementioned procedural errors and their regrettable potential consequences, decides to proceed and to take into account the video-recording, I request that consideration thereof be limited strictly to the three passages mentioned in the initial written request and that the rest of the recording be set aside.
__________________________
Judge Marc Perrin de Brichambaut
The Hague, 14 June 2019
ORDINANCE NO. 010666
Approving the issuance by Kansas City Municipal Assistance Corporation (the "Corporation") of its Leasehold Revenue Bonds, in the principal amount not to exceed $25,000,000.00 (the "Bonds"); approving and authorizing a First Supplemental Lease Purchase Agreement between the Corporation and the City (the "First Supplemental Lease Purchase Agreement"), and approving and authorizing certain other actions relating to the issuance of the Bonds.
Sponsor: Director of Finance
Prepared by: Heather Brown, Assistant City Attorney
COMMITTEE REPORT NO. 1
Date 5/3/01
FINANCE AND AUDIT Committee
Recommends Attached Ordinance/Res.:
✓ Do Pass _ Do Not Pass
_ Be Adopted _ W/O Recommendation
Other action:
Chairman
Vice Chairman
Member
Member
Marvin Berry Committee Secy.
Present 3 Ayes _ Abstain
1 Absent _ Nays Paul Norbury (Name)
First Reading 4/24/2001
FNA Committee
Second Reading 5/3/01
COMMITTEE REPORT NO. 2
Date
Committee
Recommends Attached Ordinance/Res.:
_ Do Pass _ Do Not Pass
_ Be Adopted _ W/O Recommendation
Other action:
Chairman
Vice Chairman
Member
Member
Committee Secy.
Present _ Ayes _ Abstain
_ Absent _ Nays (Name)
Third Reading
Passed MAY 03 2001
Effective
ORDINANCE NO. 010666
Approving the issuance by Kansas City Municipal Assistance Corporation (the "Corporation") of its Leasehold Revenue Bonds, in the principal amount not to exceed $25,000,000.00 (the "Bonds"); approving and authorizing a First Supplemental Lease Purchase Agreement between the Corporation and the City (the "First Supplemental Lease Purchase Agreement"), and approving and authorizing certain other actions relating to the issuance of the Bonds.
WHEREAS, the Corporation has previously issued, pursuant to an Indenture of Trust, dated as of April 1, 1999, between the Corporation and First Bank of Missouri, Gladstone, Missouri ("Trustee"), its $7,950,000.00 Leasehold Revenue Bonds (City of Kansas City, Missouri, Lessee), Series 1999A (the "Series 1999A Bonds") to provide funds for the acquisition of street lights and to acquire, construct, improve and equip a ten-story parking facility to be located at Eleventh and Oak Streets in Kansas City, Missouri; and
WHEREAS, the Corporation proposes to issue, pursuant to a First Supplemental Indenture of Trust ("First Supplemental Indenture") by and between the Corporation and the Trustee, its Bonds in the principal amount not to exceed $25,000,000.00, to provide funds (the "Expenditures") to complete the construction, improvement, equipping and furnishing of the parking facility to be located at Eleventh and Oak Streets in Kansas City, Missouri ("Phase II Parking Facility Project"), to fund a debt service reserve fund and a capitalized interest fund for the Bonds, and to fund certain costs of issuance of the Bonds; and
WHEREAS, in connection with the issuance of the Series 1999A Bonds, the City and the Corporation entered into a Lease Purchase Agreement dated as of May 1, 1999 (the "Original Lease Agreement") and it is necessary to amend and supplement the Original Lease Agreement to provide for the issuance of the Bonds; and
WHEREAS, the City desires to indicate its expectation and intent to reimburse all or a portion of the Expenditures with the proceeds of the Bonds; and
WHEREAS, the City has found and determined that financing of the Phase II Parking Facility Project will benefit the citizens of the City; NOW, THEREFORE,
BE IT ORDAINED BY THE COUNCIL OF KANSAS CITY:
Section 1. That the First Supplemental Lease Purchase Agreement by the City to the Corporation be and is hereby approved in substantially the form attached hereto as Exhibit A and the Director of Finance is authorized to execute and deliver a First Supplemental Lease Purchase Agreement containing the terms and conditions of the First Supplemental Lease Purchase Agreement attached hereto as Exhibit A with such changes therein and additions thereto as the Director of Finance deems necessary or desirable.
Section 2. The City hereby requests, directs and instructs the Corporation to, and consents to and approves the issuance by the Corporation of its Bonds, in the principal amount not to exceed $25,000,000.00, for the purpose of providing funds for the Phase II Parking Facility Project, to fund a debt
service reserve fund and a capitalized interest fund for the Bonds, and to fund certain costs of issuance of the Bonds. The Bonds shall be dated the date of issue, shall bear interest at a rate not to exceed seven (7) percent per annum, shall have such other terms and provisions as shall be provided in the First Supplemental Indenture and approved by the Director of Finance, and shall mature in the principal amounts and at the times set forth as follows:
**MATURITY SCHEDULE**
| Date | Principal |
|--------|---------------|
| 3/1/04 | $500,000.00 |
| 3/1/05 | 1,070,000.00 |
| 3/1/06 | 1,115,000.00 |
| 3/1/07 | 1,155,000.00 |
| 3/1/08 | 1,205,000.00 |
| 3/1/09 | 1,250,000.00 |
| 3/1/10 | 1,295,000.00 |
| 3/1/11 | 1,355,000.00 |
| 3/1/12 | 1,420,000.00 |
| 3/1/13 | 1,485,000.00 |
| 3/1/14 | 1,555,000.00 |
| 3/1/15 | 1,635,000.00 |
| 3/1/16 | 1,705,000.00 |
| 3/1/17 | 1,790,000.00 |
| 3/1/18 | 1,885,000.00 |
| 3/1/19 | 4,455,000.00 |
| | $24,875,000.00|
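As a quick arithmetic check (ours, not part of the ordinance text), the maturity schedule above can be verified against its stated total and the $25,000,000.00 not-to-exceed amount; a minimal Python sketch:

```python
# Minimal check (not part of the ordinance): the maturity schedule should
# sum to the stated total and stay within the not-to-exceed amount.
schedule = {
    "3/1/04": 500_000, "3/1/05": 1_070_000, "3/1/06": 1_115_000,
    "3/1/07": 1_155_000, "3/1/08": 1_205_000, "3/1/09": 1_250_000,
    "3/1/10": 1_295_000, "3/1/11": 1_355_000, "3/1/12": 1_420_000,
    "3/1/13": 1_485_000, "3/1/14": 1_555_000, "3/1/15": 1_635_000,
    "3/1/16": 1_705_000, "3/1/17": 1_790_000, "3/1/18": 1_885_000,
    "3/1/19": 4_455_000,
}
total = sum(schedule.values())
assert total == 24_875_000      # matches the schedule's stated total
assert total <= 25_000_000      # within the authorized principal cap
print(f"Total principal: ${total:,}")
```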
Section 3. The City reasonably expects to reimburse all or a portion of the Expenditures with proceeds of the Bonds for the purpose of providing for the completion of the Phase II Parking Facility Project. The maximum principal amount of the Bonds to be issued for the Phase II Parking Facility Project is $25,000,000.00.
Section 4. The City hereby delegates authority to the Director of Finance to prepare, approve and deem final a Preliminary Official Statement and a final Official Statement, with the signature of the Director of Finance thereon being conclusive evidence of the Director's approval and the City's approval thereof. The City hereby consents to the use and public distribution of the Preliminary Official Statement and the final Official Statement in connection with the offering for sale of the Bonds.
Section 5. The City hereby accepts the proposals of Fahnestock & Co. and The Chapman Company for financial advisory services in connection with the Bonds.
Section 6. The City hereby accepts the proposal of Shaffer Lombardo Shurin, a professional corporation, Logan Riley Carson & Kaup, L.C. and Fields & Brown, for bond counsel services in connection with the Bonds.
Section 7. The officials of the City are further authorized and directed to execute such documents, instruments and certificates and to take such further actions on behalf of the City as shall be necessary or desirable to effect the terms and provisions of this Ordinance.
I hereby certify that there is a balance, otherwise unencumbered, to the credit of the appropriation to which the foregoing expenditure is to be charged, and a cash balance, otherwise unencumbered, in the treasury, to the credit of the fund from which payment is to be made, each sufficient to meet the obligation hereby incurred.
Kevin Ryan
Director of Finance
Approved as to form and legality:
Assistant City Attorney
Authenticated as Passed
KAY BARNES, Mayor
Catherine T. Rocha, City Clerk
DATE PASSED MAY 03 2001
To authorize the issuance of KCMAC Leasehold Revenue Bonds, Series 2001A in an approximate principal amount not to exceed $25 million for the 11th & Oak Garage Project by the Corporation and to authorize the City to enter into certain documents related to the sale and take certain actions in connection therewith.
Resolution Number 970419 was approved on 5/8/97, to authorize the City Manager to select a design team to begin work on the facility's design and cost estimates, to develop a capital financing package of up to $27 million and to begin negotiations on land acquisition.
Resolution Number 981439 was approved on 12/21/98 to obtain the following authorization: 1) for the City Manager to proceed with the land acquisition, relocation and demolition activities necessary to prepare the site for the 11th & Oak Garage; 2) for the City Manager to develop a capital financing package of up to $30 million, in two issuances; 3) for the City Manager to proceed with the design and engineering activities; 4) for the City Manager to develop a recommended interim parking program for the City Hall; and 5) to issue a request for proposal to determine whether there are private or public/private organizations interested in partnering with the City in the development of the 11th & Oak Garage.
A Resolution was passed in 1999 by the KCMAC Board to authorize the issuance of KCMAC Leasehold Revenue Bonds in the principal amount not to exceed $7.95 million. The Bonds were issued to finance two unrelated projects, namely (a) the land acquisition, design and related improvements of the 11th & Oak Parking Garage Project ($5.9 million par amount), and (b) the acquisition of overhead streetlights from Missouri Public Service ($2.03 million par amount). Both projects have been completed.
Request for Ordinance/Resolution
City of Kansas City, Missouri
Request for ☑ Ordinance
☐ Resolution (Special Instructions Below)
Before using this form see Administrative Regulation 4-1, Procedures for Handling Ordinance Requests
Date: 04/25/01
Request Made By: Kevin Riper
Department: Finance
Desired Docketing Date:
Emergency Measure Required?
☑ Yes ☐ No
If Emergency, Give Reason (See Sec. 15 of Charter)
Bond sale event is scheduled on May 17, 2001.
Justification for Proposed Legislation
Approving the issuance by Kansas City Municipal Assistance Corporation (the "Corporation") of its Leasehold Revenue Bonds, in the principal amount not to exceed $35,000,000 (the "Bonds"); approving and authorizing a First Supplemental Lease Purchase Agreement between the Corporation and the City (the "First Supplemental Lease Purchase Agreement"), approving and authorizing certain other actions relating to the issuance of the Bonds, and recognizing an emergency due to bond sale event scheduled for May 17, 2001.
See attached.
Resolution Special Instructions:
Parchment Resolutions Required?
Yes ☐ Number
No ☐ ____________
Wish to Review and Approve this Ordinance prior to its introduction. Requestor Does ☐ Does Not ☐
If this is a Resolution, does the Sponsor desire the adoption on the first reading?
Yes ☐ No ☐
Date: 4-25-01
Director's Signature: Kevin Riper
To be Used by the Finance Department
Budget and Systems Division Head Signature: David Drake
Date: 4/26/01
Account Numbers and Appropriation Balances Checked
Supervisor of Accounts Signature
Fund Availability Approved
Director of Finance Signature
Distribution:
White City Clerk
Blue City Clerk
Green City Manager
Canary City Counselor
Pink Finance Dept.
Goldenrod Department
EXHIBIT ATTACHED: ________________
EXHIBIT NOT ATTACHED: ________________
Date: _______________________
City Manager's Signature
Request for Ordinance/Resolution
City of Kansas City, Missouri
Request for □ Ordinance
□ Resolution (Special Instructions Below)
To be entered by the City Clerk
| Legislative Control No. | Date |
|-------------------------|------|
| | |
| Docketing Date | Committee Assignment |
|----------------|----------------------|
| | |
Before using this form see Administrative Regulation 4-1, Procedures for Handling Ordinance Requests
| Date | Request Made By | Department |
|------------|-----------------|------------|
| 04/25/01 | Kevin Riper | Finance |
| Desired Docketing Date | If Emergency, Give Reason (See Sec. 15 of Charter) |
|------------------------|-----------------------------------------------------|
| | Bond sale event is scheduled on May 17, 2001. |
Emergency Measure Required?
☐ Yes ☐ No
Justification for Proposed Legislation
Approving the issuance by Kansas City Municipal Assistance Corporation (the "Corporation") of its Leasehold Revenue Bonds, in the principal amount not to exceed $_________ (the “Bonds”); approving and authorizing a First Supplemental Lease Purchase Agreement between the Corporation and the City (the "First Supplemental Lease Purchase Agreement"), approving and authorizing certain other actions relating to the issuance of the Bonds, and recognizing an emergency due to bond sale event scheduled for May 17, 2001.
See attached.
Resolution Special Instructions:
Parchment Resolutions Required?
Yes ☐ Number __________
No ☐
Wish to Review and Approve this Ordinance prior to its introduction. Requestor Does ☐ Does Not ☐
If this is a Resolution, does the Sponsor desire the adoption on the first reading?
Yes ☐ No ☐
Date: 4-25-01
Director's Signature
To be Used by the Finance Department
| Budget and Systems Division Head Signature | Date: 4-25-01 |
|-------------------------------------------|--------------|
| Account Numbers and Appropriation Balances Checked | Date: |
|---------------------------------------------------|-------|
Supervisor of Accounts Signature
| Fund Availability Approved | Date: |
|---------------------------|-------|
Director of Finance Signature
Distribution:
White City Clerk
Blue City Clerk
Green City Manager
Canary City Counselor
Pink Finance Dept.
Goldenrod Department
EXHIBIT ATTACHED: ________________
EXHIBIT NOT ATTACHED: ________________
Date: _______________________
City Manager's Signature
OUT
APR 26 2001
OFFICE OF MANAGEMENT & BUDGET
RECEIVED
APR 26 2001
OFFICE OF MANAGEMENT & BUDGET
The City now intends to proceed with the construction phase of the parking garage. The purpose of this second issuance is to provide construction funds for the ten-story parking garage to be located at 11th and Oak Streets in Kansas City, Missouri, to fund a debt service reserve fund, capitalized interest for the bonds, and certain costs of issuance of the bonds.
This ordinance will authorize the issuance of a tax-exempt second bond series, KCMAC Leasehold Revenue Bonds, Series 2001A, in an approximate principal amount not to exceed $25 million. It will also authorize the City to enter into certain documents related to the sale and to take certain other actions in connection therewith.
On 1/16/01, the KCMAC Board passed a Resolution authorizing the issuance of tax-exempt KCMAC Leasehold Revenue Bonds in an approximate principal amount not to exceed $27 million.
On 4/18/01, the KCMAC Board passed another resolution authorizing the Corporation to enter into certain documents related to the sale and to take certain actions in connection therewith. It also approved increasing the par amount from the previous Resolution to an amount not to exceed $29 million.
The City has selected the firm of Fahnestock & Co. and The Chapman Co. to act as co-financial advisors in connection with the issuance of bonds. The total fee for this issue will not exceed $35,000. The contract is split 80% / 20% between Fahnestock & Co. and The Chapman Co.
Shaffer Lombardo Shurin, Logan Riley Carson & Kaup, L.C. and Fields & Brown have been selected as co-bond counsels and their contract amount is approximately $17,500 plus expenses.
Bond Counsel MBE/WBE goal is 7%
Actual Contract Split:
Shaffer Lombardo Shurin: 37.5%
Logan Riley Carson & Kaup, L.C.(WBE): 37.5%
Fields & Brown (MBE): 25%
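For illustration only, the short sketch below applies the stated contract splits to the quoted fee amounts; the resulting dollar figures are our arithmetic, not figures from the fact sheet.

```python
# Illustrative arithmetic only: apply the stated splits to the quoted fees.
fa_fee = 35_000   # financial advisor fee cap stated above
bc_fee = 17_500   # approximate bond counsel fee (plus expenses)

fa_split = {"Fahnestock & Co.": 0.80, "The Chapman Co.": 0.20}
bc_split = {
    "Shaffer Lombardo Shurin": 0.375,
    "Logan Riley Carson & Kaup, L.C. (WBE)": 0.375,
    "Fields & Brown (MBE)": 0.25,
}

for firm, share in fa_split.items():
    print(f"{firm}: ${fa_fee * share:,.2f}")   # $28,000 / $7,000
for firm, share in bc_split.items():
    print(f"{firm}: ${bc_fee * share:,.2f}")   # $6,562.50 / $6,562.50 / $4,375
```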
The Financial Advisor contract has not been assigned an MBE/WBE goal.
Applicable Dates:
Fact Sheet Prepared by:
Willie Roman
Treasury Division, Finance Department
Reviewed by:
Reference Numbers
DEVELOPING AN ACTIVE ASIAN BOND MARKET – LESSONS FROM THAILAND
Dilshan Rodrigo
Deputy General Manager, Risk and Credit Quality
Hatton National Bank PLC
Background and Rationale
Whilst much has been achieved in Europe and the USA in capital market development, the position in Asia has been woefully inadequate (refer Table 1).
| | Share of Global Population | Share of Global Funds under Management |
|----------------|------------------------|------------------------|
| Asia | 60% | 13% |
| USA | 7% | 52% |
Pension coverage in the USA and Europe exceeds 80% and 60% respectively, whilst in Asia it is, not surprisingly, only 15%.
The issue is further exacerbated by Asia’s ageing population, which is expected to grow by over 20% in the next two decades.
Notwithstanding the above, the case for strengthening Asian bond markets is strong given the following factors.
1. Large savings pools and economies of scale
2. Potential for increasing investor choice and diversification – intra-Asia and globally
3. Retention of fund management jobs and expertise within Asia
2.0 State of Asian Markets today
In the wake of the 1997 Asian financial crisis, regional policymakers and international organizations recognized the importance of developing the bond market in Asian economies to increase the efficiency of the financial system and promote regional growth.
2.1 Some causes for the crisis were:
• Lack of a regulatory framework for the banking sector with poor standards of governance and accountability.
• Over-reliance on bank borrowing
• Funding of domestic long term projects through short term foreign currency denominated loans which created currency and maturity mismatches.
Since then and propelled by the global sub-prime crisis in 2008, progress has also been made on a regulatory level with many developing nations adopting the Basel Capital Accord.
The benefits of promoting an efficient bond market were to:
• Reduce currency risk and maturity risk
• Diversify sources of borrowing and reduce counterparty risk
• Provide alternative channels of financing private and public investments
• Increase investor confidence and product choice
Whereas emerging Asian equity markets are well established and have been dominating the global market in performance over the last few years, the level of development of the bond markets still varies significantly from one jurisdiction to another. This is mainly due to the different stages of economic development and the unique features of each country’s financial structure.
A number of initiatives have been undertaken by international organizations to promote the Asian bond market by developing the government and corporate bond sectors together with the banking system. Some of these collaborations are highlighted below.
A. The Asia Pacific Economic Cooperation (APEC)
A project which APEC began in 1998, with the assistance of the World Bank and the Asian Development Bank, aimed at developing a bond market for APEC member countries. It was intensified in 2002 with the development of securitization and credit guarantee markets, and further intensified in 2003 through increased cross-border talks and bilateral cooperation.
APEC members include Australia, Brunei Darussalam, Canada, Chile, China, Hong Kong, Indonesia, Japan, Malaysia, Mexico, New Zealand, Papua New Guinea, Peru, Philippines, Republic of Korea, Russia, Singapore, Taipei, Thailand, United States and Vietnam.
B. The Association of South East Asian Nations + China, Japan and Korea (ASEAN+3)
The Asian Bond Markets Initiative (ABMI) launched in 2002 by members of ASEAN+3 and strongly supported by the Asian Development Bank, had the primary aim of developing efficient and liquid bond markets in the region with a better utilization of “Asian savings for Asian investments”. The establishment of a Credit Guarantee and Investment Facility (CGIF) fund to extend credit for the issuance of corporate bonds in the Asian region was one of the objectives of this program with the intention of spearheading market activity in the primary corporate bond segment.
Further Milestones:
In 2005 the ABMI roadmap proposed a new framework which collated and shared information on bond market development, promoting self-assessment by member countries based on feedback from market participants and the launch of a study on an Asian currency basket bond.
These initiatives were further enhanced in 2007 by exploring new debt instruments, promoting the securitization of loan credits and receivables and the promotion of an Asian Medium Term Note (MTN) program.
In 2008 the roadmap was extended to include promoting the issuance of, and facilitating demand for, local currency denominated bonds, and improving the regulatory framework and infrastructure for the bond markets.
ASEAN+3 member countries include Brunei Darussalam, Cambodia, Indonesia, Laos, Malaysia, Myanmar, Philippines, Singapore, Thailand, Vietnam plus China, Japan and Korea.
C. The Executives’ Meeting of East Asia and Pacific (EMEAP)
Another initiative to develop financial markets was the establishment of bond funds by the EMEAP to facilitate the introduction of investment trusts.
2.2 Asian Bond Fund (ABF1 and ABF2)
The Asian Bond Fund, launched by the central bank forum EMEAP comprising 11 member countries, is managed by the BIS (Bank for International Settlements). It was the first bond fund to be launched in Asia (2003), promoting cooperation between regional central banks and encouraging bond market development in EMEAP member states.
EMEAP Members: China, Hong Kong, Indonesia, Korea, Malaysia, Philippines, Singapore and Thailand. Additional Members: Australia, New Zealand and Japan.
ABF1
The original ABF, referred to as ABF1, was denominated in US dollars, with seed money of $1bn received from the 11 EMEAP member central banks. The fund invested in dollar bonds issued by eight governments and quasi-government organizations of the member countries.
ABF2
The second stage of the Asian Bond Fund (ABF2) was launched in 2005 with an initial capital of $2bn and invested in local currency bonds, also issued by governments and quasi-government organizations of the eight EMEAP member countries mentioned above. Under the ABF2 umbrella, nine separate bond funds were formed.
The structure of the ABF2 as illustrated in Figure 1 consists of:
- The Pan Asia Bond Index Fund (PAIF)
- Eight single market bond index funds
The PAIF fund manager is State Street Global Advisors and the custodian bank is HSBC.
Figure 1 ABF 2
Source: BIS
As a result of these collaborative efforts, Asian bond markets have developed considerably in the past 10 years but still lag behind the developed markets. More progress is needed, especially in developing the Asian corporate bond market, which is still in its infancy or non-existent in some emerging countries.
Building blocks, critical success factors and issues for the development of an Asian bond market
- Efficiency and further liberalization of emerging markets
- Improved regulatory standards and corporate governance
- Policies promoting liquidity, tax reforms for investors
- Long term capital flows to emerging economies from the developed Asian countries
- An internationally recognized clearing system
- Vulnerability of Asian equity markets and currencies to global market volatility
- Increased market literacy and broader investor base
- Financial integration in regional markets
- Greater responsibility of emerging economies in growth of global economy
3.0 The development of an Asian funds passport
A report published by State Street Global Advisors estimates the growth capacity of Asia’s $3.9 trillion (2009) in collective fund assets to be over 10% in 2014 for selected emerging markets, and as much as 15.3% for regional giants such as China. In spite of the lack of a common Asian currency or a regional authority such as the European Union, the consensus is that financial innovations with a solid regulatory framework, such as the EU funds passport vehicle, score high on investor confidence, and that developing a passport scheme for Asian funds would provide international investors with a wider array of off-shore products in otherwise inaccessible markets. Such a scheme would also increase cooperation between Asian regulators, facilitate cross-border marketing of local products and lead to the integration of the fast-growing emerging Asian economies into the regional framework. The Australian government has also shown growing interest in the establishment of an Asia region funds passport scheme to support its expansion plans into global markets.
In order for the asset management industry to grow, however, efficient and diversified equity and bond markets are pre-requisites, so promoting the development of these markets is key to the development of the asset management industry in the emerging economies.
3.1 The benefits:
- Fast growing emerging economies would have access to cheaper capital to fund their expansion plans, diversification of savings and investments from pension funds can fund demand
- Cross border products would allow asset managers to access a larger pool of investors and achieve economies of scale
- Investors would have a broader and cheaper array of financial products to choose from, giving them access to smaller markets and local expertise
- Growth in the asset management industry would lead to more job opportunities and retain expertise and talent in Asia
3.2 Critical factors:
- Setting up a level playing field promoting less developed Asian markets
- A solid regulatory framework
- Cross border cooperation in the development of fund products which focus on Asia rather than just the domestic market
- Increase investor awareness and dissemination of information
- Developing technology tools to facilitate product implementation
Case study – Thailand
Thailand was selected as the country case study because the development of its financial sector is a story of growth, with unprecedented initiatives for change in the midst of adversity following the aftermath of the 1997 Asian financial crisis. The case study also provides an overview of Thailand’s regulatory policy, known as the Financial Sector Master Plan (FSMP), which highlights the country’s ability to manage economic and market shocks effectively.
4.0 Background
With the influx of Japanese capital in the late 1980s and early 1990s, Thailand had a manufacturing-led economic growth supported by inexpensive labour, liberalized foreign investment policies and developments in the private sector. During this period commercial banks played an important role in mobilizing funds through bank deposits, with little activity seen in the domestic bond market.
Following the Asian crisis in 1997, with investor confidence at a low, the country’s financial system went through a major restructuring program to reform the local regulatory framework, strengthen the corporate governance of banks and implement initiatives to reduce reliance on banks and external sources of funding.
In the midst of extensive political unrest, Thailand enjoyed solid export-driven growth from 2000 to 2007, only to be severely hit by the global economic crisis in 2008, which cut exports drastically in most sectors. In 2010 exports rebounded and the country’s GDP growth of 7.6% during the year made it one of the fastest growing economies in Asia. Despite the polarized political situation, business and investor sentiment remained positive and the stock market continued to grow. The historic floods in 2011, which inundated more than two-thirds of the country’s 77 provinces and disrupted supply chains on a global scale, also highlight the major role played by Thailand in the world economy.
In spite of these setbacks, the projected economic growth rates for the country in 2012 and 2013 are 5.5% and 7.5% respectively (Bank of Thailand, April 2012). The short-term outlook for the country is favourable; however, significant risks remain due to the uncertainties of the world economy.
Regulatory and policy decisions
1999 - 2003 Emergency measures which included closing down insolvent companies, recapitalizing viable companies and introducing debt restructuring mechanisms. Building blocks for formation of the Financial Sector Master Plan.
2004 - 2014 Financial Sector Master Plan (FSMP) - A medium term development and reform program to create a more transparent and internationally competitive financial market.
Phase I (2004-2009) involved increasing efficiency of the financial sector by
- Restructuring the licensing system of commercial banks (prior to this multiple types of licenses were issued to financial institutions).
- Limiting operations of foreign banks and number of new entrants to the banking sector.
- Bank of Thailand (BOT) empowered as the sole regulator and supervisor of financial institutions.
- Establishment of a committee to promote micro financing, with government owned special institutions (SFIs) to play a greater role in lending to the underserved sectors.
- Increasing access to financial services amongst the unbanked and for small enterprises by providing incentives to banks on the risk weights assigned for certain types of loans when determining their capital requirements thus freeing funds for the extension of credit to the undeveloped sectors.
- Improving consumer protection through formation of the Deposit Protection Act (DPA). The aim is to replace the blanket deposit guarantee system with a limited deposit guarantee, increasing the responsibility of banks and reducing the public cost of the deposit insurance system.
- Banks required to update deposit and lending interest rates on their websites, on a daily basis and submit this information to BOT. Consumers can then compare rates across banks by accessing the BOT website.
**Phase II (2010-2014) to concentrate on:**
- Reducing system-wide operating costs caused by distressed assets arising from the 1997 financial crisis
- On improving financial competitiveness of banks by allowing them to expand their product lines and branch into other business areas such as mutual funds and venture capital management.
- New and existing foreign bank subsidiaries will be allowed to open up to 20 branches from 2012 with the aim of increasing competitiveness of domestic commercial banks.
- Strengthening of risk management practices in financial institutions
### 5.0 Stage of development of Financial Markets
#### 5.1 Bond Market
Thailand’s capital markets have come a long way. In 1994 the “Bond Dealers Club” (BDC) was set up to promote the secondary market for debt securities and introduced an electronic bond trading system for the first time in the Thai bond market. After receiving the Bond Exchange license from the SEC, BDC was renamed in 1998 as The Thai Bond Dealing Centre (ThaiBDC). A major reform by the ThaiBDC in 2005 was to centralize the trading platform at the Stock Exchange of Thailand (SET) providing trading information, bond data, reference yields and market and regulatory news on both the primary and secondary market through its websites.
To strengthen its role as a self-regulated organization and information center, the ThaiBDC changed its status in 2005 to a licensed securities related association under the SEC Act and was named the Thai Bond Market Association (ThaiBMA).
As part of its commitment to promote best practices and assist in the development of the bond market, the ThaiBMA developed several key benchmarks, such as government bond yield curves and benchmark bonds. It has also introduced additional analytics such as zero-coupon yield calculations, credit spreads, bond analysis and Value-at-Risk (VaR) for bond investment and portfolio management.
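To illustrate the kind of calculation involved (this is not ThaiBMA’s published methodology, which is not described here), a simple duration-based parametric VaR for a single bond position can be sketched as follows; all input values are hypothetical.

```python
# Illustrative duration-based parametric VaR for one bond position.
# All inputs are hypothetical; ThaiBMA's actual methodology may differ.
from math import sqrt

position_value = 100_000_000    # THB market value of the position (assumed)
modified_duration = 4.2         # years, from bond analytics (assumed)
daily_yield_vol = 0.0006        # st. dev. of daily yield changes, 6 bp (assumed)
z_99 = 2.326                    # one-sided 99% normal quantile
horizon_days = 10

# Price sensitivity: dP/P ≈ -D_mod * dy, so scale yield volatility by
# duration and the square-root-of-time rule for the holding period.
var = position_value * modified_duration * daily_yield_vol * z_99 * sqrt(horizon_days)
print(f"99% 10-day VaR ≈ THB {var:,.0f}")   # ≈ THB 1.85 million
```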
The Thai bond market can be categorized into three segments: government securities, corporate bonds and foreign bonds. The proportion of foreign bonds is very small, at around 1% of the total domestic bond market. The corporate sector began issuing bonds in 1992, with approval required from the SEC prior to a corporate bond issuance and with credit ratings a pre-requisite for all bond offerings, with some exceptions. (Refer graph 1)
Following the 1997 economic crisis, the Thai government successively issued treasury bonds to finance the resulting budget deficit increasing the value of the bond market from THB 547 billion in 1997 to THB 1,883 billion at the end of 2001. Trading on the secondary market also increased substantially in the same period contributing to a robust bond market. The domestic bond market (government and corporate) at the end of 2011 was THB 7,327 billion.
The ratio of Thai domestic bonds to GDP shows an increasing trend since the 1997 crisis, although the issuance of corporate bonds remains relatively small. In 2011 government securities made up around 80% and corporate bonds around 20% of the domestic bond market. The ratio of the value of the bond market to GDP was 70% at the end of 2011, whereas corporate issuance was only 14% of GDP and still needs developing. (Refer graph 2)
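The quoted ratios are mutually consistent, as a back-of-envelope check (our arithmetic, not from the source data) shows:

```python
# Back-of-envelope consistency check of the end-2011 ratios quoted above.
bond_market_thb_bn = 7_327    # total domestic bonds outstanding (THB bn)
bond_to_gdp = 0.70            # stated bond market / GDP ratio
corp_share = 0.20             # corporate share of the domestic bond market

implied_gdp = bond_market_thb_bn / bond_to_gdp    # ≈ THB 10,467 bn
corp_to_gdp = corp_share * bond_to_gdp            # 0.14, i.e. the 14% quoted
print(f"Implied GDP: THB {implied_gdp:,.0f} bn; corporate bonds/GDP: {corp_to_gdp:.0%}")
```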
Local currency denominated foreign bonds made their debut in 2005 when the Asian Development Bank issued domestic bonds to the value of 4 billion Baht. In the same year under the Asian Bond Market Initiative (ABMI), the Japan Bank for International Cooperation issued a 5 year Thai bond for 3 billion Baht, the first bond to be issued by a foreign government on the Thai capital market with the aim of financing business operations of Japanese companies in Thailand.
5.2 Equity Market
The stock market is one of the most important sources for companies to raise capital for business development. The liquidity that a stock exchange provides gives investors the confidence and ability to buy and sell securities with relative ease compared to other investments such as real estate or bonds.
The capital markets in Thailand developed in two stages. The privately owned stock market known as the Bangkok Stock Exchange (BSE) began operating in 1962. Activity at the BSE was rather poor and stocks performed badly. The BSE finally ceased operations in the early 1970s. The general view is that the BSE did not succeed because of limited investor understanding of the equity market and also due to a lack of official support.
In 1974 the Securities Exchange of Thailand was formed and started trading in 1975. In 1991 its name was changed to the Stock Exchange of Thailand (SET). In 1997, in line with other regulatory reforms carried out after the crisis, the SET implemented new cap and floor price limits for trading and also introduced a circuit breaker system to reduce unusual volatility in the market which might cause investor panic. In addition to the SET index and sub-indices, the SET also calculates industry and sector indices. (Refer graph 3)
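For illustration, the sketch below shows how cap/floor price limits and an index circuit breaker of this kind might be enforced; the 30% band and 10% trigger used here are assumptions for the example, not the SET’s actual parameters.

```python
# Hypothetical illustration of daily price limits and a circuit breaker;
# the 30% band and 10% halt trigger are assumed values, not SET's rules.
PRICE_LIMIT = 0.30        # assumed ±30% daily ceiling/floor per security
HALT_TRIGGER = 0.10       # assumed index fall that halts trading

def clamp_order_price(prev_close: float, order_price: float) -> float:
    """Clamp an order to the day's ceiling/floor band."""
    ceiling = prev_close * (1 + PRICE_LIMIT)
    floor = prev_close * (1 - PRICE_LIMIT)
    return min(max(order_price, floor), ceiling)

def circuit_breaker_tripped(index_open: float, index_now: float) -> bool:
    """True once the index has fallen past the halt trigger."""
    return (index_open - index_now) / index_open >= HALT_TRIGGER

print(clamp_order_price(100.0, 145.0))           # -> 130.0 (capped at +30%)
print(circuit_breaker_tripped(1200.0, 1075.0))   # -> True (≈10.4% fall)
```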
Historically, from June 1997 to June 2012, the USDTHB reached an all-time high of 55.50 in January 1998 and a record low of 23.15 in June 1997. The current exchange rate is approx. THB 31.87 to the USD (June 25th 2012). Most significant is that, despite the collapse of its currency in 1997, Thailand’s economy remained sound. (Refer graph 4)
6.0 Development of Investment Funds
(Graph 4)
The development of the Thai asset management industry began in 1975 with the formation of the Mutual Fund Company, which was controlled by the Thai Government and the International Finance Corporation (IFC). In 1992 the mutual fund industry was liberalized, leading to a rapid increase in the number of funds available. In 2002 there were 346 funds under management; by 2011 the number had increased to 1,265 funds. Assets also grew substantially, from THB 435 billion in 2002 to THB 2,846 billion in 2011.
Today mutual funds dominate the market, followed by a smaller proportion of investments in private and provident funds. The major types of mutual funds offered in Thailand are fixed income, equity, balanced and property funds. Fixed income funds have the largest amount of assets under management, at THB 1.23 trillion in 2010, in comparison to equity funds with assets of THB 261 billion in the same period. The local demand is for short-term money market funds, domestic fixed income and a growing interest in foreign debt from Asian countries like Korea and, to some extent, from China. (Refer graph 5, 6 and 7)
(Graph 5)
(Graph 6)
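The implied growth rates behind the figures above can be checked with a short calculation (our arithmetic, not from the source):

```python
# Implied compound annual growth rates (CAGR) of the Thai mutual fund
# industry from the 2002 and 2011 figures quoted above.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"Assets CAGR 2002-2011: {cagr(435, 2_846, 9):.1%}")      # ≈ 23.2%
print(f"Fund-count CAGR 2002-2011: {cagr(346, 1_265, 9):.1%}")  # ≈ 15.5%
```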
6.1 Challenges
- Less variety of locally developed investment funds, more sophisticated funds offered by foreign-owned asset managers
- Need for one regulator (Bank of Thailand, Thai Bond Dealing Centre and SEC regulations tend to overlap with one another)
- Mutual fund products more short term and offered as alternative to deposits leading to insufficient wealth at retirement
- Banks distribution channels limited to marketing of own investment funds
- Standard regulatory requirements for retail and high net worth individuals hamper potential to generate more income
- Government to fill gaps in the yield curve, build a strong repo market and liberalise hedging instruments (a minimal interpolation sketch follows this list)
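As a minimal illustration of filling yield curve gaps, the sketch below linearly interpolates missing tenors from observed ones; the tenor/yield pairs are hypothetical, and real curve construction typically uses smoother fitting methods.

```python
# Fill missing tenors of a government yield curve by linear interpolation.
# The observed tenor/yield pairs below are hypothetical examples.
import numpy as np

observed_tenors = np.array([1, 3, 5, 10])           # years with liquid quotes
observed_yields = np.array([2.8, 3.2, 3.5, 4.1])    # % per annum (assumed)

missing_tenors = np.array([2, 4, 7])
filled = np.interp(missing_tenors, observed_tenors, observed_yields)
for tenor, yld in zip(missing_tenors, filled):
    print(f"{tenor}y: {yld:.2f}%")    # 2y: 3.00%, 4y: 3.35%, 7y: 3.74%
```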
6.2 Derivative Trading
The Thailand Futures Exchange (TFEX) formed in 2003 allows the trade of derivatives such as futures, options and options on futures on permitted underlying assets as approved by the SEC.
7.0 Key Learnings
- **Promote liquidity in bond market and establishment of a secondary market**
The Thai government issued a significant amount of debt instruments following the 1997 Asian financial crisis, promoting liquidity and reducing reliance on banks and external sources of funding. At the end of 2011, the outstanding value of domestic bonds totaled THB 7,327 billion, of which around 80% were government bonds. Secondary market trading takes place over the counter or through the Bond Electronic Exchange (BEX); the latter is used for retail trading. Liquidity is provided by authorized dealers.
- **Maintain stable currency**
Thailand adopted a floating exchange rate policy in 1997, with market forces and economic data determining the value of the Baht. In cases of extreme currency volatility, however, the Bank of Thailand will intervene to stabilize the Baht. The Bank also maintained a policy of controlling excessive foreign exchange outflows, which was later relaxed to allow direct investment and portfolio investment abroad within permitted limits.
- **Improve standards in corporate governance and risk management**
Thailand has significantly improved the financial viability of its banks by addressing non-performing loans (NPLs) and implemented Basel II with effect from January 2009. Thai banks are required to maintain a minimum capital adequacy ratio (CAR) of 8.5% under Pillar 1. Pillar 2 was implemented in 2009, with banks required to develop an internal capital adequacy assessment process (ICAAP) by the end of 2010, and the BOT started the supervisory review and evaluation process (SREP) in 2011. Banks have disclosed Pillar 3 information since 2009. The Bank of Thailand has already indicated plans to implement the Basel III regime in 2013.
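For readers unfamiliar with the ratio, the Pillar 1 requirement reduces to a simple check: regulatory capital divided by risk-weighted assets must be at least 8.5%. A minimal sketch follows; the balance-sheet figures are purely hypothetical.

```python
# Minimal capital adequacy ratio (CAR) check against the Pillar 1 minimum
# cited in the text. The balance-sheet figures below are hypothetical.

MIN_CAR = 0.085  # 8.5% Pillar 1 minimum for Thai banks, per the text


def car(capital: float, risk_weighted_assets: float) -> float:
    """Capital adequacy ratio = regulatory capital / risk-weighted assets."""
    return capital / risk_weighted_assets


capital_thb_bn = 95.0  # hypothetical regulatory capital (THB billion)
rwa_thb_bn = 1_000.0   # hypothetical risk-weighted assets (THB billion)

ratio = car(capital_thb_bn, rwa_thb_bn)
print(f"CAR = {ratio:.2%}; meets minimum: {ratio >= MIN_CAR}")  # 9.50%; True
```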
The Securities and Exchange Commission (SEC) and the Stock Exchange of Thailand (SET) have become more proactive in enforcing regulations, protecting the rights of minority shareholders and enhancing rights of shareholders. In 2002, the SET established a Corporate Governance Centre to offer advisory services to listed companies and in order to improve investor confidence in Thai capital markets.
- **Establish effective market trading systems**
The Thai Securities Depository Company, a subsidiary of the Stock Exchange of Thailand (SET) has implemented systems to provide the following services to its members: securities and fund registrations, securities depository, security clearing and settlement and back office services. The trading platform implemented by the Thai Bond Market Association (ThaiBMA) at the SET provides trading information, bond data, reference yields and news on both the primary and secondary markets.
- **Increase market literacy**
The SEC and the SET publish information on their web sites in order to educate investors about financial investments. They also provide training courses with the objective of familiarizing local investors with new products. The Financial Consumer Protection Centre (FCC), opened in 2012 by the Bank of Thailand, plays an educational role, building consumers' awareness and understanding of their financial rights and responsibilities. It also addresses consumer complaints about the services of financial institutions.
- **Broaden investor and issuer base**
Regulatory developments allow non-residents to issue debt securities (Thai local currency bonds) and qualified local institutional investors to invest in foreign securities. Furthermore, non-residents can invest in government bonds without any restrictions on repatriation of investment and returns.
8.0 Conclusions and Recommendations
Asian economies need to take a lead role in establishing regional cross-border products, providing foreign and local investors with a broader range of financial products; this will contribute to the development of their local financial markets and spearhead regional economic integration.
Thailand’s success in developing an active capital market despite many obstacles (natural disasters and economic setbacks) is a good case study in ‘evolution’ and ‘growing pains’ for other Asian nations.
Although it seems feasible to promote a funds passport for the more developed Asian countries where stable equity and bond markets exist, for the emerging economies with small treasury markets and non-existent corporate bond markets, development initiatives should come first on the agenda. The work done so far to promote Asian bond markets needs to continue and be intensified, extending to the promotion of bond markets in countries that are not members of the initiating organizations. Governments, together with asset managers, also need to look at ways of growing and promoting the local fund management industry. Countries which already have a product structure in place should pursue bilateral talks with smaller nations, providing them with technical assistance to develop local and cross-border products. Regional trade bodies such as APEC and SAARC can play an important role by initiating dialogue for the Asian region in collaboration with the governments of their member countries.
In The Supreme Court of the United States
STOK & ASSOCIATES, P.A.,
Petitioner,
v.
CITIBANK, N.A.,
Respondent.
On Petition For A Writ Of Certiorari To The United States Court Of Appeals For The Eleventh Circuit
REPLY BRIEF FOR PETITIONER STOK & ASSOCIATES, P.A.
ROBERT A. STOK*
JASON M. FOLK
JOSHUA R. KON
STOK & ASSOCIATES, P.A.
Turnberry Plaza
2875 NE 191st Street, Suite 304
Aventura, Florida 33180
Tel: (305) 935-4440
Fax: (305) 935-4470
email@example.com
*Counsel of Record
Attorneys for Petitioner
# TABLE OF CONTENTS
| Section | Page |
|------------------------------------------------------------------------|------|
| TABLE OF CONTENTS | i |
| TABLE OF AUTHORITIES | ii |
| ARGUMENT | 1 |
| A. There is an Undeniable Conflict Among the Circuits as to Whether a Party Must Demonstrate That it has Suffered Prejudice in Order to Establish That an Opposing Party Has Waived its Right to Arbitrate | 1 |
| B. There is a Conflict Between the Circuit Courts of Appeals and the Various States With Respect to the Prejudice Requirement | 6 |
| C. The Position Taken by the Majority of the Circuit Courts of Appeals is Inconsistent with the Supreme Court's Own Precedent | 8 |
| D. The Abolition of the Prejudice Requirement is Supported by the Principal Treatise on Arbitration | 10 |
| CONCLUSION | 13 |
# TABLE OF AUTHORITIES
| Case | Page |
|----------------------------------------------------------------------|------|
| **FEDERAL CASES:** | |
| **UNITED STATES SUPREME COURT:** | |
| *Bingler v. Johnson*, 394 U.S. 741 (1969) | 1 |
| *Circuit City Stores, Inc. v. Adams*, 532 U.S. 105 (2001) | 9 |
| *Rent-A-Ctr., W., Inc. v. Jackson*, 130 S. Ct. 2772 (2010) | 9 |
| *United States v. O'Malley*, 383 U.S. 627 (1966) | 1 |
| **CIRCUIT COURTS OF APPEALS:** | |
| *Cabinetree of Wisconsin, Inc. v. Kraftmaid Cabinetry, Inc.*, 50 F.3d 388 (7th Cir. 1995) | 2 |
| *Cargill Ferrous Int'l v. Sea Phoenix MV*, 325 F.3d 695 (5th Cir. 2003) | 4 |
| *Dickinson v. Heinhold Securities, Inc.*, 661 F.2d 638 (7th Cir. 1981) | 3 |
| *Harris v. Green Tree Fin. Corp.*, 183 F.3d 173 (3d Cir. 1999) | 9 |
| *Ivax Corp. v. B. Braun of Am., Inc.*, 286 F.3d 1309 (11th Cir. 2002) | 5 |
| *Joseph Chris Pers. Services Inc. v. Rossi*, 249 F. App'x. 988 (5th Cir. 2007) | 3 |
| *Khan v. Parsons Global Services, Ltd.*, 521 F.3d 421 (D.C. Cir. 2008) | 2 |
| *Miller Brewing Co. v. Fort Worth Distrib. Co.*, 781 F.2d 494 (5th Cir. 1986) | 3 |
| *Morewitz v. W. of England Ship Owners Mut. Prot. & Indem. Ass'n (Luxembourg)*, 62 F.3d 1356 (11th Cir. 1995) | 5 |
TABLE OF AUTHORITIES – Continued
National Foundation for Cancer Research v. A.G. Edwards & Sons, Inc., 821 F.2d 772 (D.C. Cir. 1987).................................................................2
Price v. Drexel Burnham Lambert, Inc., 791 F.2d 1156 (5th Cir. 1986).................................................................5
Reid Burton Construction, Inc. v. Carpenters District Council of Southern Colorado, 614 F.2d 698 (10th Cir.), cert. denied, 449 U.S. 824, 101 S. Ct. 85, 66 L.Ed.2d 27 (1980).......................3
S & H Contractors, Inc. v. A.J. Taft Coal Co., 906 F.2d 1507 (11th Cir. 1990).................................................................5
St. Mary's Medical Center of Evansville, Inc. v. Disco Aluminum Products Co., Inc., 969 F.2d 585 (7th Cir. 1992).................................................................2
Walker v. J.C. Bradford & Co., 938 F.2d 575 (5th Cir. 1991).................................................................4
FLORIDA DISTRICT COURTS OF APPEAL:
Bared and Company, Inc. v. Specialty Maintenance and Construction, Inc., 610 So.2d 1 (Fla. 2d DCA 1992).................................................................7
Hansen v. Dean Witter Reynolds, Inc., 408 So.2d 658 (Fla. 3d DCA 1981).................................................................7
King v. Thompson & McKinnon Auchincloss Kohlneyer, Inc., 352 So.2d 1235 (Fla. 4th DCA 1977).................................................................7
TABLE OF AUTHORITIES – Continued
OTHER STATE COURT CASES:
Bolo Corp. v. Homes & Son Const. Co., 464 P.2d 788 (1970) .................................................. 7
Chandler v. Blue Cross Blue Shield of Utah, 833 P.2d 356 (Utah 1992) ........................................... 7
Saint Agnes Med. Ctr. v. PacifiCare of California, 82 P.3d 727 (2003) ............................................ 7
RULES OF THE SUPREME COURT OF THE UNITED STATES:
Rule 10(a) .......................................................................................................................... 1
Rule 10(c) .......................................................................................................................... 8
OTHER AUTHORITIES:
2 Ian R. Macneil, Richard E. Speidel & Thomas J. Stipanowich, Federal Arbitration Law: Agreements, Awards, and Remedies under the Federal Arbitration Act §21.3.3 (1994) ........ 10, 11, 12
ARGUMENT
A. There is an Undeniable Conflict Among the Circuits as to Whether a Party Must Demonstrate That it has Suffered Prejudice in Order to Establish That an Opposing Party Has Waived its Right to Arbitrate
It is well-recognized that the United States Supreme Court is more likely to grant certiorari review when there is a split of authority among the circuits, as stated explicitly by Rule 10(a) of the Rules of the Supreme Court of the United States. See Bingler v. Johnson, 394 U.S. 741 (1969); United States v. O'Malley, 383 U.S. 627 (1966). The Respondent argues, in its Brief in Opposition to the Petitioner's Petition for Writ of Certiorari, that there is no split of authority amongst the circuits with respect to the requirement that a party must demonstrate that it has suffered prejudice when attempting to establish that the opposing party has waived its right to arbitrate. This assertion is blatantly incorrect.
The Circuit Courts of Appeals themselves have recognized that there is a cognizable and profound "circuit split" with respect to the prejudice requirement: three circuits (the Seventh, the Tenth, and the D.C. Circuit) do not require a showing of prejudice, while the remaining circuits require such a showing to varying extents.
The Seventh Circuit is the primary proponent of the notion that prejudice need not be shown in a waiver analysis. "Where it is clear that a party has foregone its right to arbitrate, a court may find waiver even if that decision did not prejudice the non-defaulting party." *St. Mary's Medical Center of Evansville, Inc. v. Disco Aluminum Products Co., Inc.*, 969 F.2d 585, 590 (7th Cir. 1992). Judge Posner, after reiterating the Seventh Circuit's rule of decision in *Cabinetree of Wisconsin, Inc. v. Kraftmaid Cabinetry, Inc.*, 50 F.3d 388 (7th Cir. 1995), stated "ours may be the minority position but it is supported by the principal treatise on arbitration 2 Ian R. Macneil, Richard E. Speidel, & Thomas J. Stipanowich, *Federal Arbitration Law: Agreements, Awards, and Remedies under the Federal Arbitration Act §21.3.3 (1994).*"
Aligned with the Seventh Circuit, the Court of Appeals for the District of Columbia Circuit has announced that it holds an opinion on the issue of prejudice that differs markedly from the majority view. For example, in the case of *Khan v. Parsons Global Services, Ltd.*, 521 F.3d 421 (D.C. Cir. 2008), the Court stated that "[a] finding of prejudice is not necessary in order to conclude that a right to compel arbitration has been waived, although 'a court may consider prejudice to the objecting party as a relevant factor' in its waiver analysis." (emphasis added). Similarly, in the case of *National Foundation for Cancer Research v. A.G. Edwards & Sons, Inc.*, 821 F.2d 772 (D.C. Cir. 1987), the Court stated:
This circuit has never included prejudice as a separate and independent element of the showing necessary to demonstrate waiver of the right to arbitration. See Cornell, supra. We decline to adopt such a rule today. Of course, a court may consider prejudice to the objecting party as a relevant factor among the circumstances that the court examines in deciding whether the moving party has taken action inconsistent with the agreement to arbitrate. See, e.g., Dickinson v. Heinhold Securities, Inc., 661 F.2d 638, 641 & n.5 (7th Cir. 1981); Reid Burton Construction, Inc. v. Carpenters District Council of Southern Colorado, 614 F.2d 698, 702 (10th Cir.), cert. denied, 449 U.S. 824, 101 S.Ct. 85, 66 L.Ed.2d 27 (1980). But waiver may be found absent a showing of prejudice.
(emphasis added).
The Respondent, in its Brief in Opposition, makes the argument that the distinction between the minority and majority views is more a distinction of degree than a "blackline" distinction. This is hardly the case. In the Circuit Courts of Appeals which hold the majority viewpoint, prejudice is absolutely required to demonstrate waiver. For example, the Fifth Circuit, perhaps the most "pro-prejudice" Circuit, has stated "[P]rejudice ... is the essence of waiver." *Joseph Chris Pers. Services Inc. v. Rossi*, 249 F. App'x. 988, 991 (5th Cir. 2007) (emphasis added) (quoting *Miller Brewing Co. v. Fort Worth Distrib. Co.*, 781 F.2d 494, 497 (5th Cir. 1986)); "The proper
test is whether participation in litigation *prejudiced the other party.*" *Cargill Ferrous Int'l v. Sea Phoenix MV*, 325 F.3d 695, 700 (5th Cir. 2003); "Waiver will be found when the party seeking arbitration substantially *invokes the judicial process to the detriment or prejudice of the other party.*" *Walker v. J.C. Bradford & Co.*, 938 F.2d 575, 577 (5th Cir. 1991).
Even though courts are duly bound to follow their own precedent on the prejudice issue, some have recognized the failings of such a position, and only do so with great reluctance. For instance in *Walker v. J.C. Bradford & Co.* (5th Cir. 1991) the court opined:
In general, we do not look kindly upon parties who use federal courts to advance their causes and then seek to finish their suits in the alternate fora that they could have proceeded to immediately. Such actions waste the time of both the courts and the opposing parties. The decision whether to arbitrate is one best made at the onset of the case, and not part way through as Bradford seeks today. The attempt of Bradford's attorney to switch judicial horses in midstream either shows poor judgment, if planned, or poor foresight, if not.
Even the Court's sharp rebuke in *Walker* presupposes a benign motive rather than shrewd manipulation for strategic gain, an avenue ripe for exploitation where a party must show special prejudice to avoid the disruption of its suit.
In the same vein as the Fifth Circuit, the Eleventh Circuit, the circuit from which the instant petition arises, holds that waiver of the right to arbitrate will never be found where a party is unable to demonstrate that it has been prejudiced by the opposing party's failure to timely demand arbitration. As a case on point, the court in *Morewitz v. W. of England Ship Owners Mut. Prot. & Indem. Ass'n (Luxembourg)*, 62 F.3d 1356, 1366 (11th Cir. 1995), stated:
Nevertheless, the doctrine of waiver is not an empty shell. Waiver occurs when a party seeking arbitration substantially participates in litigation to a point inconsistent with an intent to arbitrate and this participation results in prejudice to the opposing party. *Price v. Drexel Burnham Lambert, Inc.*, 791 F.2d 1156, 1158 (5th Cir. 1986).
(emphasis added).
In *Ivax Corp. v. B. Braun of Am., Inc.*, 286 F.3d 1309, 1315-16 (11th Cir. 2002), the court pronounced “[i]n determining whether a party has waived its right to arbitrate, we have established a two-part test. First, we decide if, ‘under the totality of the circumstances,’ the party ‘has acted inconsistently with the arbitration right,’ and, second, *we look to see whether, by doing so, that party ‘has in some way prejudiced the other party.’* *S & H Contractors, Inc. v. A.J. Taft Coal Co.*, 906 F.2d 1507, 1514 (11th Cir. 1990).” (emphasis added).
The comparison above demonstrates that the Respondent's position, that there is no real conflict between the circuits regarding the prejudice requirement, is simply not supported by the relevant authorities. The distinction between the circuits on this point is palpable and outcome determinative; it is not just a matter of semantics, as Citibank argues in its Brief in Opposition. There is no doubt that, had the litigation between the Petitioner and the Respondent taken place in the Seventh Circuit or the D.C. Circuit, where there is no required showing of special prejudice, Respondent's participation in the Florida state court litigation without raising its claimed right to arbitrate clearly would have been construed as a waiver of that right, and the Petitioner would not now be before this Court seeking relief.
B. There is a Conflict Between the Circuit Courts of Appeals and the Various States With Respect to the Prejudice Requirement
In addition to the ineluctable conflict between the various Circuit Courts of Appeals with respect to the prejudice requirement, there is a clear conflict of authority between the Circuit Courts of Appeals and the standards employed by the courts of the various states on the prejudice issue. For example, in the case at bar, the Petitioner instituted litigation in Florida state court. The Respondent answered the Petitioner's complaint, failing to raise the issue of arbitration. Under Florida state law, this alone, with no showing
of prejudice, was enough to constitute a waiver of the Respondent's right to demand arbitration. *Bared and Company, Inc. v. Specialty Maintenance and Construction, Inc.*, 610 So.2d 1, 3 (Fla. 2d DCA 1992); *Hansen v. Dean Witter Reynolds, Inc.*, 408 So.2d 658, 659 (Fla. 3d DCA 1981); *King v. Thompson & McKinnon Auchincloss Kohlneyer, Inc.*, 352 So.2d 1235, 1235 (Fla. 4th DCA 1977) (all holding that no prejudice is required to establish waiver of the right to arbitrate). A similar result would have arisen had the Petitioner instituted litigation in, for example, Arizona state court. *Bolo Corp. v. Homes & Son Const. Co.*, 464 P.2d 788, 790 (1970).
By comparison, had the same litigation been instituted by the Petitioner in other state court jurisdictions, the result would be very different. See *Saint Agnes Med. Ctr. v. PacifiCare of California*, 82 P.3d 727, 738 (2003) ("[I]n California, whether or not litigation results in prejudice also is critical in waiver determinations") (emphasis added); *Chandler v. Blue Cross Blue Shield of Utah*, 833 P.2d 356, 360 (Utah 1992) ("We therefore adopt the principle that waiver of a right of arbitration must be based on both a finding of participation in litigation to a point inconsistent with the intent to arbitrate and a finding of prejudice.") (emphasis added). Thus, with respect to the prejudice requirement, there is a disparity of authority both among the various Circuit Courts of Appeals and between the highest courts of the various states. As discussed above, this disparity is not simply a distinction without a difference.
This divergence Citibank failed to address in its Brief in Opposition, because it is the very premise upon which it pursued litigation in the trial court: to exploit the disparate treatment of the prejudice issue between the Florida state courts and the federal courts of the Eleventh Circuit. The stance that a particular jurisdiction takes on the issues of waiver, arbitration, and prejudice will, of course, govern the outcome of litigation instituted in that jurisdiction.
After this Court takes a stance on this matter, either for or against the special prejudice requirement, the state courts will be guided by this Court's rule of decision, eliminating not just the inconsistency between the various Circuit Courts of Appeals but between the state courts as well, resulting in a uniform application of the principle.
C. The Position Taken by the Majority of the Circuit Courts of Appeals is Inconsistent with the Supreme Court's Own Precedent
Pursuant to Rule 10(c), Rules of the Supreme Court of the United States, the Supreme Court, when determining whether to grant a petition for writ of certiorari, is more likely to grant such a petition where the case in question involves a Circuit Court of Appeals which "has decided an important federal question in a way that conflicts with relevant decisions of this Court." To be sure, the petition filed by the Petitioner presents such a case.
Originally, the Federal Arbitration Act (the “FAA”) was passed by Congress to counteract the state courts’ longstanding judicial hostility toward enforcing contractual arbitration provisions. *Circuit City Stores, Inc. v. Adams*, 532 U.S. 105 (2001); *Harris v. Green Tree Fin. Corp.*, 183 F.3d 173, 178 (3d Cir. 1999). Subsequently, however, this Court has taken pains to make its jurisprudence well defined. This Court has repeatedly held that the FAA was not intended to elevate agreements to arbitrate over other contractual agreements, nor to place a veritable “thumb” on the scale favoring arbitration. This Court, in the recent case of *Rent-A-Ctr., W., Inc. v. Jackson*, 130 S. Ct. 2772, 2776 (2010), stated the following with regard to the FAA:
The FAA thereby places arbitration agreements on an equal footing with other contracts ... and requires courts to enforce them according to their terms. Like other contracts, however, they may be invalidated by “generally applicable contract defenses, such as fraud, duress, or unconscionability.”
(internal citations omitted).
It is this Court’s position that contracts to arbitrate should be treated in the same manner as any other contract. It stands to reason that, if any other contractual right can be waived by acting inconsistently with that right, so too can a contractual right to arbitrate be waived by acting inconsistently with that right. However, the judge-made requirement
that a party must show special prejudice beyond the prejudice of participating in an alternate forum, when asserting that a contractual counter-party has waived its right to arbitrate, exalts contracts to arbitrate outside of the contractual norm and thus places a thumb on the scale favoring contracts to arbitrate over other contractual rights. Hence, the special "prejudice requirement" of the majority position is squarely at odds with this Court's own precedent.
D. The Abolition of the Prejudice Requirement is Supported by the Principal Treatise on Arbitration
It is for good reason that the most authoritative treatise on the issue of arbitration, 2 Ian R. Macneil, Richard E. Speidel & Thomas J. Stipanowich, *Federal Arbitration Law: Agreements, Awards, and Remedies under the Federal Arbitration Act* §21.3.3 (1994), supports abolition of the special prejudice requirement. The fluidity of their logic is most persuasive:
As a matter of arbitration policy, the current judicial approach to waiver seems unsatisfactory. The requirement of prejudice, particularly in courts loathe to find prejudice, protects the federal contract right to arbitrate at considerable cost to efficiency. The current approach tends to encourage litigation of whether a waiver in fact occurred. It sometimes permits a party who has chosen to engage in litigation to stop, demand arbitration, and move to another forum. And it often permits a defending party to waste
much time and sometimes considerable effort by the other and even gain litigation advantage before demanding arbitration.
Present waiver doctrine thus allows a great deal of laxity in enforcing arbitration rights. All this appears to be the exact opposite of what parties desire when they agree to arbitrate - delay instead of speed, formality instead of informality, and complexity instead of simplicity. It is difficult to justify the present legal situation on the basis that it is implementing the much trumpeted pro-arbitration policy. On the contrary it seems to have the opposite effect.
In the short run, requiring diligence in enforcing arbitration rights rather than the present laxity would mean that some cases will have to be litigated to the end, rather than tossed to the arbitrators after the main benefits of arbitration may have been lost. In the long run it would have a beneficial effect on arbitration, because it would prevent parties from playing litigational games if they want to arbitrate.
After discussing the problems with which the current system is fraught, at least in the jurisdictions holding the majority viewpoint, Macneil, Speidel & Stipanowich suggest a solution based on the language of the FAA itself, which solution follows the viewpoint held by the minority of the Circuit Courts of Appeal:
A more sensible approach is provided by the language of FAA §§3 and 4. If, under either
section, a party is in "default" in proceeding to arbitration, the relief is denied. Default, in this context, should be defined to include both an unreasonable delay in demanding arbitration and the choice to initiate litigation or defend a lawsuit without insisting on a known right to arbitrate. In neither case would proof of prejudice be required. This approach would reduce the incentive to litigate over the content of prejudice and provide an incentive to make a timely decision to insist upon the right to arbitrate or not.
As Macneil, Speidel & Stipanowich note in this subsection of their treatise, the prejudice requirement has the practical effect of (i) elevating the contractual right to arbitrate over other contractual rights, (ii) encouraging rather than discouraging litigation, (iii) wasting the resources of litigants and the judicial system as a whole, (iv) preventing the advantages of arbitration clauses from being realized, (v) thwarting any pro-arbitration policy which may exist, and (vi) encouraging the employment of litigation "games" by those who seek tactical advantage over their opponents by exploiting the diverse positions on the subject. It follows logically that the removal of the prejudice requirement would rid the judicial system of these ills and vindicate the freedom of contract principles that this Court has repeatedly endorsed.
CONCLUSION
A few points should be obvious from the above discussion. Contrary to the representation of the Respondent, there is an ineluctable and discernible conflict of authority between the Circuit Courts of Appeals with respect to the requirement that a litigant demonstrate special prejudice when seeking to establish that an opposing party has waived its right to arbitrate by, for example, participating in litigation in a non-arbitral forum. Not only is there such a conflict in the federal court system, but this conflict extends to the state court system as well. Furthermore, the viewpoint of the majority of the Circuit Courts of Appeals, which require a showing of prejudice, is in diametric conflict with the precedent of this Court, because such a requirement raises contracts to arbitrate above all other agreements. Finally, the Petitioner's argument that the prejudice requirement should be abolished is supported by the leading academic treatise on the issue of arbitration. For all of these reasons, as well as those discussed at length in the
Petitioner's Petition, this Court should grant certiorari to the Petitioner.
Respectfully submitted,
ROBERT A. STOK*
JASON M. FOLK
JOSHUA R. KON
STOK & ASSOCIATES, P.A.
Turnberry Plaza
2875 NE 191st Street, Suite 304
Aventura, Florida 33180
Tel: (305) 935-4440
Fax: (305) 935-4470
firstname.lastname@example.org
*Counsel of Record
THE approaching coronation upon June 2 has focussed all eyes upon that bizarre and kaleidoscopic institution, the British Monarchy. At a time when thrones are toppling like houses on fire, and when the fashionable hotels of Europe's pleasure grounds are haunted by the shadows of bygone courts, by ageing "Kings in Exile," the British monarchy remains the centre of a sea of sycophancy, snobbery, and superstition.
The stout old Republicans of last century, Paine, Bradlaugh, Dilke, and Carlyle, must turn in their graves at the nauseating cant that fills land, sea, and, above all, the air, in this year of Grace, 1953. Is this orgy of exuberant loyalty the forerunner of a new "Elizabethan" Age, as its protagonists so loudly declaim? Or is it merely that, in society as in nature, the stricken swan sings loudest before it dies? At any rate, whatever the Future may hold, it may be of interest at the present hour to consider briefly the historic evolution of the monarchy of Great Britain.
It is customary, nowadays, to make a comparison between what is now alleged to be the dawning "Elizabethan" Age which presumably began in 1952 with the accession of the Second Elizabeth, and the so-called "golden" age of Elizabeth the First, Elizabeth Tudor, and of the great men and movements which conferred lustre upon her name and reign. It is a disturbing and rather melancholy fact that the only audible criticism of this superficial viewpoint comes from north of the Tweed, and centres around the essentially unimportant coincidence that the First Elizabeth was actually the last queen of England who did not, simultaneously, occupy the Scottish Throne also. In actual fact, this is merely a trifling difference. The character of "Elizabethan" England was due to circumstances that were far more profound than the mere coincidence that the First Elizabeth was of Welsh, whilst the Second is of Scottish, descent.
The historic "Elizabethan" Age owes its peculiar glory to the fact that, pre-eminently, it was a pioneering age, an age of origins. Most of the institutions characteristic of modern Imperial Britain began in that age (1558-1603): this applies, alike, to the British Empire, which began then in Ireland, America, and India, simultaneously; to the British Navy, which began effectively with Drake and the defeat of the Spanish Armada (1588); to British Capitalism, which, also, began to function effectively with the simultaneous defeat of Spain and the foundation of the East India and Hudson Bay Companies, to trade with the East and the New World, respectively; and, in widely dissimilar spheres, modern English literature took shape with Shakespeare and his contemporaries, and modern English science and philosophy began with Bacon and Harvey. Whatever glories and achievements may be reserved for the Age of the Second Elizabeth, they obviously could not be identical with the original Elizabethan pattern, and, except in a purely rhetorical sense, the whole comparison between the two "Elizabethan" Ages obviously falls to the ground.
In another respect, also, the reign of Elizabeth the First was widely dissimilar from that of Elizabeth the Second. For the First Elizabeth was the last English Monarch to govern as well as to reign: like her Tudor predecessors she was, in deed if not in name, an autocrat, an absolute Monarch who might, on occasions, graciously agree to consult her loyal Parliament, but, normally, felt herself under no legal obligation to do so. Her Stuart successors, James I and Charles I, were weaklings, men of political straw. Whilst, after them, came "the Deluge": the rising forces of Puritanism, Republicanism, and of the infant Capitalism of the City of London combined to sweep away the absolute Monarchy, along with the old feudal social order which it embodied, in "the Great Rebellion" of 1642-60, that "Bolshevik" Revolution of the 17th century from which modern Britain and its institutions were eventually to emerge. The broom of Cromwell, the English "Bonaparte," swept clean. Henceforth, it is both constitutional law and practice that the monarch "reigns but does not govern," a state of things, we may relevantly add, the very suggestion of which would have sent the passionate Elizabeth Tudor and her terrible father, Henry VIII, into a fit of apoplexy!
Twice since 1660, have English monarchs striven to escape from the gilded cage in which Cromwell's successors, the Whig oligarchy, enclosed them. But, in the (self-styled!) "Glorious Revolution" of 1688, the Whigs put paid to the attempt of the Catholic James the Second to restore the pre-revolutionary absolutism of his predecessors. Whilst, in the following century, the attempt of George the Third to coerce America as a preliminary to coercing England into submission to the royal absolutism, failed at the first hurdle, when the English redcoats crumbled before the volleys of the American "Resistance Movement," and before the trenchant arguments of Thomas Paine.
Since the failure of George the Third and of his satellites to reconquer Britain's "First Empire" in America, the history of the British Monarchy has been essentially that of a rubber-stamp, a figurehead at the gilded prow of the ship of state. One can accurately add that, right down to the last decades of the 19th century, it represented a most unpopular figurehead: the obituary notices of the later Hanoverian Kings would, to-day, provoke police intervention if they appeared in, say, our contemporary, *The Daily Worker*!
"To the vast majority of his subjects the late King (William the Fourth) was an object of mingled pity and contempt, the greater the pity, the greater the contempt!" Or this even more trenchant judgment: "If there is in these islands a single man or woman who had a good word to say of the late King (George the Fourth), his or her name has not yet reached us." Both the above "eulogies" of the British Monarchy appeared in the august columns of *The Times!* It will be agreed, we think, that much water has flowed down the Fleet river since the present
pillar of "respectable" society went on record with these subversive comments.
As late as 1838, the then ambassador of Tsarist Russia saluted the coronation of the young Victoria with the melancholy reflection that she was destined to be the last of the long and glorious line of British monarchs. Actually, the Court remained extremely unpopular right up to about 1870. When Charles Bradlaugh proclaimed that the monarchs of the House of Brunswick were "small breast-bestarred wanderers," he adequately and accurately voiced the opinion of most contemporary British Radicals.
From this unhappy state of things the monarchy was delivered by the rise of political Imperialism about 1870. The present British monarchy since about that date is feudal in form but Imperialist in substance. From the date when Disraeli, the effective Founder of the modern British cult of the monarchy, crowned Victoria as Empress of India (1875), down to the present day, the monarchy derives its importance as the titular symbol of the cult of Empire. As such, it has taken a new lease of life. The experienced Tory ruling-class of Great Britain are not such fools as their critics seem sometimes to imagine. Behind the feudal façade and archaic superstition of the Coronation there are solid political and economic facts. The modern monarchy is essentially a middle-class institution. That shrewd and cynical aristocrat, the late Marquis of Salisbury, Tory Premier at the turn of the century, aptly summarised the change when he delivered his obituary speech after the death of Queen Victoria (1901).
Having declared that he thought he understood both the aristocracy and the proletariat, he went on to say that he had no experience of "the great middle class," but that, when he wanted to know what that class was thinking, "I went to Her late Majesty, and she was never wrong." Incidentally, the vast personal fortune of the present dynasty is believed to have been made along lines of strictly business speculations—another bourgeois detail!
What is the Future of the Monarchy? Will the young Elizabeth, unlike the young Victoria, actually be the last of her line? It is difficult to predict, since, as we have elsewhere remarked, "the history of revolutionary change in Britain is the history of reflex actions. All English revolutions effectively begin outside England." However, both the contemporary trend of the times and the current decline of the British Empire and its present transition to more democratic forms, augur ill for the future of the British Monarchy. Probably, by 2053, the then President of the "United States of Great Britain," with his umbrella and bowler-hat, will bear little resemblance to the medieval Coronation paraphernalia, Holy Oil, gilded coach, and golden crown, on show on June the 2nd, 1953.
At least, as believers in political, no less than in biological evolution, we may be permitted to hope so.
Sex and Christian Asceticism
By G. I. BENNETT
A PURITANIC streak runs through much religious thought, but in no religion is it more evident than in the Christian. Now as in its earliest days, the Christian Church equates sex with sin and hardly ever mentions one without implying the other. Nor can this simply be ascribed to its acceptance of the doctrine of the Fall of Man—Adam's carnal surrender to the feminine charms of Eve. Rather is the holding of this doctrine a demonstration of how atavistic and savage in origin is Christian thought about sex.
The Christian attitude has always been that, as sex is in essence of this world, and it is the next that is important, a man should, by abjuring the cravings of the flesh, cut his moorings with this world so as to be prepared for the world to come. Sex is a powerful appeal, not to Heaven, but to the secular life. And, it has been reasoned, in proportion as one gives oneself to the practice of chastity one enjoys communion with God and becomes 'spiritual.' Upon thinking of this sort was monasticism founded and from it derived its chief inspiration. Particularly in the Roman Catholic Church are chastity and celibacy highly regarded, as everybody knows. The imposition by that Church of celibacy upon its priests and dignitaries has done more than ensure that they should be good servants of God; it has made them good servants of the Church, giving it their undivided time, energy, and loyalty untrammelled by the distractions of wife, family, and home.
It is interesting to trace briefly the source and the evolution of the Christian attitude to sex.
Early man was deeply conscious of the strange power, or mana, that woman exerted over him sexually. He noticed that intercourse with her enfeebled him, albeit temporarily, and that excessive intercourse unmanned him for hunting and fighting. Woman was therefore dangerous and he feared her. Moreover he believed her unclean, largely on account of her periodic losing of blood, which he did not understand. Thus he erected the first sexual taboos, which in one form or another have curiously persisted even to the present day.
This idea of woman's being dangerous and unclean prevailed into and throughout Biblical times, found expression in the Mosaic law, and was to St. Paul a monstrous and nagging obsession. If any man hated women, and saw sex as utterly loathsome and disgusting, that man was St. Paul. He put the simple teachings of Jesus into a theological mould and was in a fundamental sense the true founder of Christianity. He, more than any other man, doctrinised his own ascetic contempt of the sexual life and profoundly influenced Christian thinking in this regard.
There was no lack of followers to carry on Paul's thunder against the evils of the flesh. Tertullian was one who, to judge from his utterances, had a nature as tortured and twisted as St. Paul's. Married though he was, he found women repugnant ("the devil's gateway") and sex shameful ("voluptuous disgrace"). St. Augustine, Bishop of Hippo, who in youth had led a life of sexual indulgence, was later to hold not dissimilar views. St. Jerome talked about "axeing the roots of the sterile tree of marriage." And St. Odo of Cluny observed that beauty lies only in the skin. "If we could see beneath the skin," he said, "women would arouse in us nothing but nausea. Their adornments are but blood and mucus and bile. If we refuse to touch dung and phlegm even with the finger-tips, how can we desire to embrace a sack of dung?"
With few exceptions the early Church Fathers lashed the lusts of the flesh, reserving their special venom for women, the fuel of such lusts. The lengths to which some of them were prepared to go to fight down and overcome their sexual nature are astonishing; prolonged withdrawal from the sight of women, severe fasting, and the expedient of standing in casks of cold water, would seem to be the milder measures.
Now if the Christian ascetics, instead of representing chastity as a means of avoiding temporal abomination and hell-fire hereafter, had counselled it purely as a moral discipline (as did pagan philosophers like Epictetus), they would have been on more defensible ground. But the life of rigorous chastity and celibacy that they commended
Homage to Bradlaugh
IT is possible that the Clerk of the Weather "double-crossed" his Almighty Master, for it is hard to believe that that All-Powerful Deity would have arranged for such a perfect day as was enjoyed by the many pilgrims who went to the "shrine" of Charles Bradlaugh on May 3 last. It was perhaps the warmest day of the year so far, and all who went to Northampton by train, coach, bus or private car from London, Leicester, Nottingham and other towns must have thoroughly enjoyed the beautiful excursion through the heart of England's loveliest lanes and villages.
The London coach arrived a few minutes late and we found a crowd awaiting us at the statue headed by the Mayor, wearing his chain of office, Mr. Paget, M.P., a number of Northampton's town councillors and citizens, as well as members of the N.S.S., the R.P.A. and the Leicester Secular Society. There was no gainsaying the enthusiasm which greeted all the speeches delivered on the spot with the splendid statue of Bradlaugh towering above us and, so to speak, overshadowing everything else. The memory of the great man has not been dimmed by time; on the contrary, indeed. His reputation—like that other great Commoner of England, Thomas Paine—has never stood higher. This was the theme of all the speakers there—they all paid tribute to Bradlaugh's determination, courage and thoroughness.
It must have been a proud moment for Charles Bradlaugh Bonner, Bradlaugh's grandson, who has for so many years carried the flag of Freethought so worthily, as well as for F. A. Ridley, as the President of the National Secular Society, which was founded by Bradlaugh in 1866. But one must not forget the humble soldiers in the Freethought ranks. The followers of other Freethought Societies like that of Leicester, the working men who contributed their pennies to the causes Bradlaugh loved, and perhaps above all, the unknown men of Northampton who voted him into Parliament time after time, though they knew he was an Atheist and many of them were Christians—all, all contributed their bit to help the cause of liberty, freedom and justice.
On behalf of the R.P.A., the N.S.S., and the Leicester Secular Society wreaths were laid at the statue by Mr. Bradlaugh Bonner, Mr. F. A. Ridley and Mr. G. A. Kirk, and after the speeches the visitors were invited by the Mayor, Mr. Adams, to tea, and needless to say, the tea rooms were packed. It was also a proud moment for His Worship, for his grandfather, Thomas Adams, had worked hard as Bradlaugh's election agent, and no doubt contributed much to his success in winning election. We all revered the memory of Charles Bradlaugh, he said, after tea—though, like so many admirers before him, he caused a mild sensation by claiming for Bradlaugh "that he was a Christian without knowing it." But Mr. Adams was right when he insisted that Bradlaugh gave his life for the causes he fought for. And later speakers fully agreed with him. They included Lord Chorley, Mr. Paget, who represents Northampton in Parliament, Mr. C. Bradlaugh Bonner, Councillors Lewis and Nutt, Prof. Florence, and a veteran of 92 years of age, Mr. Fullard, of Bedford. All paid their hearty tributes to the memory of Bradlaugh, emphasising his many fine qualities, his humanism and generosity. A mere reference to the speeches, such as this report, can do little justice to the warm, heartfelt appreciation of Bradlaugh's work felt, not only by his Freethought followers, but also by those who still held to the old beliefs and who yet admired the sturdy integrity of the great iconoclast. It was the same story, whether stalwarts like Kirk and Hassell of Leicester, Mosley of Nottingham, Ridley and Bonner of London, or Christians like Mr. Adams and others who spoke—they all shared their admiration for what Charles Bradlaugh had done for humanity in his comparatively short life.
Many photographs of the event were later taken near the statue which, one hopes, will help to commemorate our loyalty and affection for our great leader. And it was with positive regret that we had to leave Northampton, in a blaze of sunshine, with memories only of a perfect day to be long treasured as one of the happiest days it has been our good fortune to enjoy.
H.C.
This Believing World
Believers in Faith-healing, whether through the agency of spooks or the Church, will find the latest "miracle" most intriguing. Crippled in her arms and legs for 15 weeks, a girl of fifteen, Freda Pogmore, was carried on a stretcher to the Shrine of Our Lady of Walsingham to pray, and was at once completely cured. Her mother claimed it a miracle, and the priest, Fr. Hulme, is going to lay the matter before "the appropriate authorities." Query—did Our Lady cure the girl, or was it not merely a cure such as repeatedly happens through herbal remedies, patent medicines, osteopathy, and electric belts? Or to put it another way: are not all these cures mostly within the patient, having nothing to do with "curative" agencies such as spooks, prayers or miracles?
The Roman Catholic Bishop of Leeds has been letting the cat out of the bag in Sydney. Lecturing to the university students there, he stunned them with the dreadful news that "the overwhelming majority of Englishmen have no religion at all." The News Chronicle is even more stunned: "If this is true," it moans, "or even nearly true, it is a sorry state of affairs." But there is one consolation, or rather two. First, it all depends on what is meant by religion; and second, the B.B.C. assures the News Chronicle that eighteen million people regularly listen in to the religious broadcasts; while hymn-singing attracts a bigger audience than "Take It From Here." So there you are—things are not so bad, and the deeply religious journal comes to the conclusion that religion is "a deep instinct" and so we all are still very religious.
The truth is, of course, that religious people and religious journals are thankful for small mercies. If the deep instinct is so shallow that it cannot induce the pious to go to church or sacrifice even a broadcast, then religion has gone a long way from what it was even fifty years ago. Everybody knows that the atmosphere in a sitting room with people "listening in" is very far from reverent. Nobody is on his knees and, more likely than not, meals washed down with beer or other drinks take place while the parson is droning his interminable drivel. There is about as much "true" religion in all this as there is in a game of darts. No, the Bishop of Leeds is right. The overwhelming majority of Englishmen have no religion at all—thank God!
We often wonder how many of the marvellous stories told us by Christian missionaries are true. Of course, some did have dangerous adventures, and some died for their religion. But it is most difficult to test the truth of their narratives. One of the "Undefeated," as the B.B.C. called her, was a servant girl who did pretty badly at school, but who insisted on going to China and help to convert the 500 millions there to Christ. Her name is Gladys Aylward, and her adventures, in bringing the obstinate Chinese to believe her own Fundamentalist stupidities are, to say the least of it, fantastic.
Most, if not all, the Chinese who spoke to her appear from her narrative to speak English with the utmost ease; while she herself had not the slightest difficulty in learning at an adult age Chinese, perhaps the most difficult of all languages. She actually understood it better after a year than English! This takes our breath away. It generally takes twenty years for a foreigner to get even an elementary knowledge of Chinese. But then, in the case of Miss Aylward, Jesus Christ took a hand. Being God Almighty he, no doubt, knew Chinese perfectly, and naturally taught his fervent missionary so well that she could even read English fairy tales perfectly to Chinese villagers. Oh, to be a Christian now!
Just like new "lives" of Jesus Christ, so do veritable "portraits" pop up, as well as bona fide descriptions of the Son of God. The latest comes from Canada reporting from Rome that "Jesus Christ was more than six feet tall, long limbed and finely muscled"; though it is more than curious that Jesus is never made to look like a Palestinian Hebrew. He generally looks like or is described as a fair-haired Saxon. It appears that an Italian sculptor has put in 21 years hard study of the white linen cloth Joseph of Arimathea wrapped round Jesus which—of course—still exists in Turin. By wrapping the Holy Relic round "live models," the sculptor was able to find out exactly what Jesus looked like and the world now knows. And we might add the world will continue to swallow all these fairy tales for centuries to come—in spite of science, history, and common sense.
Theatre
Airs on a Shoestring is a new intimate revue directed by Laurier Lister at the Royal Court Theatre.
It consists of a number of mostly inconsequential sketches of high entertainment value, but although their execution is almost impeccable they do not contain the satire and wit of some recent revues. The versatility of the cast is amazing, as is the humour of Moyra Fraser, Betty Marsden, Sally Rogers and Max Adrian. Patricia Lancaster pleases us with her excellent singing.
To summarise separately the numerous turns would take too much space, but let it suffice that this is light entertainment worth seeing. The music by various composers is tuneful, and the production is slick.
The Seagull, by Anton Chekhov, has come to the Arts Theatre in the style of comedy, which is how Chekhov intended it to be. But to-day we judge a play by its ending, and one cannot conceal the tragedy of a suicide just before the last curtain.
However, John Fernald—aided by a capable cast—has squeezed out every atom of humour in the play. The result is that we feel that Masha (beautifully played by Jennie Laird) revels in her frustrated love, and are amused that she married Medvedyenko (Richard Warner) whom she did not love. We smile that Peter Nikolayevich Sorin (Frederick Leister) appears so old and helpless at sixty—a kind of malade imaginaire—and that Irina Arkadina is so carefree and self-centred that she cannot see what passes in the mind of her son. This is a brilliant performance by Catherine Lacey, and the son (played with great feeling by Michael Gwynn) is so bored with life and melancholic in his love for Nina Zarechnaya (played with charm by Jane Griffiths) that his life becomes intolerable. John Arnatt gives a fine performance of Dorn, a philosophical and cynical doctor, and Noel Hood gives much character to the rather slender role of Paulina. Boris Trigorin is played by Alan MacNaughtan with a lightness of touch that hardly suggests the villain who caused Nina's downfall, but if we can overlook this the remaining characters under Mr. Fernald's capable production come well within the accepted style of Chekhov.
This, as you may well conclude, is a play of character more than of plot, which is what we have to expect of Chekhov. And he, we may conclude, having written it as a comedy, has shown us how to take our pleasure miserably.
RAYMOND DOUGLAS
To Correspondents
The Freethinker will be forwarded direct from the Publishing Office at the following rates (Home and Abroad): One year, £1 4s. (in U.S.A., $3.50); half-year, 12s.; three months, 6s.
Correspondents are requested to write on one side of the paper only and to make their letters as brief as possible.
Lecture Notices should reach the Secretary of the N.S.S. at this Office by Friday morning.
Orders for literature should be sent to the Business Manager of the Pioneer Press, 41, Gray's Inn Road, London, W.C.1, and not to the Editor.
Lecture Notices, Etc.
OUTDOOR
Blackburn Branch N.S.S. (Market Place).—Sunday, 7 p.m.: Jack Clayton: A Lecture.
Bradford Branch N.S.S. (Broadway Car Park).—Every Sunday, 7.30 p.m.: H. Day and A. H. Wharrad.
Manchester Branch N.S.S. (Deansgate Bomb Site).—Every weekday, 1 p.m.: Messrs. Woodcock and Barnes.
North London Branch N.S.S. (White Stone Pond, Hampstead Heath).—Sunday, 12 noon: F. A. Ridley.
Nottingham Branch N.S.S. (Old Market Square).—Saturday, May 16, 7 p.m.: Messrs. T. W. Mosley and A. Elsmere.
Sheffield Branch N.S.S. (Barker’s Pool).—Sunday, 7 p.m.: Mr. A. Samms.
West London Branch N.S.S. (Marble Arch).—Every Sunday at 3 p.m. and 8 p.m., Sunday, May 10, 8.30 p.m.: F. A. Ridley and other speakers.
INDOOR
Bristol Rationalist Group (Crown and Dove Hotel, Bridewell St.).—Wednesday, May 13, 7.30 p.m.: A Lecture, “It Stands to Reason.”
South Place Ethical Society (Conway Hall, Red Lion Square, W.C.1).—Sunday, 11 a.m.: Archibald Robertson, M.A., “The Prospect of Peace.”
West Ham Branch N.S.S. (Community Centre, Wanstead, two minutes from Wanstead Station).—The fourth Thursday every month at 8 p.m. Open meeting.
NOTES AND NEWS
Congratulations to Mr. Emrys Hughes, M.P., at a time when pulpit, press, and B.B.C. are combining to stir up mass-hysteria on the subject of the approaching Coronation. Mr. Hughes has produced an admirable pamphlet entitled The Crown and the Cash. Our intrepid author is a veteran of the Scottish Labour Movement and a son-in-law of the redoubtable Keir Hardie. In The Crown and the Cash [26, Civic St., Glasgow, C.4; 6d.] Mr. Hughes gives many interesting details about the shocking waste of money and of urgently needed building materials on what is, after all, merely a glorified circus. Our author does not demand the abolition of Monarchy as such. On the contrary, he gives the Crown a perhaps not altogether deserved testimonial for political impartiality in recent years, which readers of, say, Mr. Harold Nicolson’s biography of the late King George V, might not altogether endorse. But he makes many excellent points regarding the excessive cost of our monarchy as compared with other surviving European monarchies, and he draws attention to the rampant snobbery and commercialism associated with what is supposed to be a purely patriotic display.
Mr. Hughes makes the effective debating point that the monarchy ought not logically to object to Socialism, since it is one of our oldest nationalised institutions! But, he relevantly adds, it is most illogical of the present government which is always denouncing the extravagance allegedly associated with nationalisation, to pile up expenses on this particular one. The same people, declares Mr. Hughes, who objected to the educational “Festival of Britain” on the ground of expense, are now voting the taxpayer’s money recklessly for the Westminster ceremony. We are sure that our readers will appreciate Mr. Hughes’s hard-hitting arguments and useful citations. After reading The Crown and the Cash we feel like asking the organisers of the June 2 procession:
“Is your journey really necessary?”
Our Liberal contemporary, The Manchester Guardian, last week published an interesting account of the Bradlaugh commemoration ceremony, reported elsewhere in this issue of The Freethinker. Amongst the organisations represented at the ceremony special mention is made of the National Secular Society of which Bradlaugh was founder and first president. Comments the Guardian: “Although it is 63 years since his death, Bradlaugh is still a legend to the Northampton people whose grandparents with such obstinate loyalty re-elected him again and again.” Our contemporary reminds us of the pioneer role of Bradlaugh in the formation of Co-operative Building Societies designed to enable his working-class supporters to buy their own homes. This, what the Guardian calls the “practical, if not beautiful, reminder of Bradlaugh,” may also usefully remind us that the great Radical was not the fanatical Individualist that he is sometimes represented as being in current political propaganda.
This week a band of intrepid mountaineers will again attempt the conquest of the world’s highest peak, Mount Everest. We wish them luck! Modern Everest, like ancient Olympus, is a sacred abode of the gods. When the explorers reach the top they will find nothing except space. Another superstition will have been exploded. The gods will have been chased out of yet another hide-out.
When Heart Grows Black
When heart grows black with passion
And eye untrue with lust,
Turn I in soldier fashion
And take the road I must.
Bear I away to stillness
The sins not all my own;
If life be but an illness
’Tis best to ail alone.
If life be but an error
Who knows the certain cure?
Still holds the dark in terror
The shadows that endure.
Far better ditch and hedgerow
And morning sharp and red,
Than living shapes that borrow
Their living from the dead.
For me the weed and nettle
That cloy not as the rose;
For me the bravest metal
To bear against my foes.
John O’Hare.
Robert Taylor
The Devil's Chaplain (1784-1844) By H. CUTNER
(Continued from page 152)
LECTURES were arranged for and delivered in a small theatre; but these soon attracted the attention of the religious bigots in Dublin as the discourses became more and more Deistical. So outrageous became the attacks, egged on by gangs of theological students, that the theatre was eventually almost wrecked, and Taylor himself was in danger of his life. His friends felt that Dublin was hopeless for any Society of "Universal Benevolence" and thought he might better try to establish it in London. A subscription was raised and he returned to the capital in the summer of 1824.
On November 24, he held the first meeting of the Christian Evidence Society "under the principles of the Association of Universal Benevolence" for free inquiry and fair discussion. It will come as a surprise to many people to learn that it was Robert Taylor who was responsible for the title, "The Christian Evidence Society," as there is still in existence a Society with the same name but not the same object. Taylor's object in founding his was a vastly different one from that of the present Society. He wished to show what the evidence for Christianity was really worth, and as a matter of fact devoted the next ten years of his life to proving that there was no evidence whatever for the supernatural claims made for their religion by Christians. We shall go more fully into this when dealing in detail with his literary work.
The Christian Evidence Society commenced its work at the Globe Tavern in Fleet Street, the usual procedure being a reading from some standard Christian author like Paley, Leslie, Doddridge, and others, an "oration" by Taylor in which he dealt with and criticised the reading followed by a discussion in which members of the audience were invited to join. Taylor delivered ninety-five orations, and the meetings were very well supported—so well, indeed, that it was decided to make them a regular Sunday series. In the spring of 1826 a "chapel" was obtained in the Founder's Hall, and so successful was the "service" that a move was made into better quarters in Salter's Hall, Cannon Street. Thirty-eight discourses were here delivered; they will be found reprinted in the Lion published later by Richard Carlile.
This intrepid Freethinker, writer and publisher deserves a volume to himself. He had already become notorious as a fervent disciple of a free press; no one, in fact, had striven more than he to achieve its accomplishment, and he suffered many years' imprisonment in his set defiance of authority. It is not surprising, therefore, that later he and Robert Taylor should join forces, although this did not take place till 1828. Carlile will be referred to again when we come to that year.
It need hardly be said that the popularity of Taylor's discourses, added to the fact that he was making "heretics" by the score, was not at all to the liking of the authorities. It is true that they had all they could cope with in Richard Carlile; but Taylor was spreading the gospel of infidelity by word of mouth and doing almost, if not quite, as much damage to Christianity as Carlile was doing by his numerous and widely circulated publications. The Aldermen of the City of London, backed by the Mayor, decided it was time to stop the all-successful ex-clergyman; and under their charge he was at last arrested for "blasphemy" early in 1827. He was, almost at the same time, sued for £100 on a note which had got into the hands of a Quaker banker named Wright as a result of Taylor being swindled out of £300 years before. Mr. Wright, with good Christian charity—for he well knew that the "debtor" was quite guiltless, and had been actually himself a victim—kept him in prison for a few months; but Taylor was released eventually by the Insolvent Debtors' Court. He was then tried for blasphemy. A verbatim account of the Trial was published by Carlile in 1828 and is still worth reading; it proves that the authorities were quite wide-awake to the fact that their prisoner was not just a common illiterate "blasphemer" but an extremely able scholar who, on that account, was all the more dangerous.
The trial of Robert Taylor took place on October 24, 1827, at the Guildhall Court of King's Bench, before Lord Chief Justice Tenterden. It may be remarked that, so far as it was possible, Lord Tenterden acted with fairness in a trial which was evidently to his distaste and which was not exactly in his line. He was actually a great authority on marine and mercantile law.
The case attracted a very great deal of attention and the court was crowded, the eager curiosity aroused being shown, we are told, in the number of "well-dressed and youthful females" present. This is how the reporter describes the prisoner:
"His appearance attracted all eyes: he was arrayed in a flowing gown of a clergyman; his neat clerical hat was conspicuously borne in his hand, an eyeglass depended from his neck, and the little finger of either hand was ornamented with a sumptuous ring; his hair was arranged in the most fashionable style; and a pair of light kid gloves completed the elegant decorations of his person."
Taylor obviously did not believe in being dressed as if he were a miserable specimen of a "blasphemer," the kind of "unhappy man" that people were told to expect in an infidel. The rings and the eyeglass must have come rather as a shock, as well as the rather contemptuous smile borne by Taylor, brought out so well in the portrait drawn by W. Hunt.
To make quite sure of their case, and to report the exact words used by Taylor in his "blasphemous" discourses, one of the aldermen had sent a beadle to the meetings, and he was called as a witness after the speech for the prosecution had been made by the Attorney General. This speech followed the usual lines—the horror caused by treating religion "with levity and contempt," the plea that if Christianity had to be assailed it should be "only by sober discussion and legitimate reasoning," the fact that he himself was all for "toleration" and "serious arguments" on both sides and, finally, that he would not sully his mouth with the words used by the defendant, "the words of mockery which he [the prisoner] has introduced into his addresses, assailing both the forms and the personages which are most reverenced and sacred amongst us."
As for the beadle, whose name was Collins, he was made to read out the blasphemous statements attributed to Taylor. Here are a few of the shocking specimens:
"There was no authority for the title page of the New Testament.
St. Paul has denied the miracles of Christ. His ghost appeared to 500 at once, but they were asleep.
Christ rose again, but it is according to the Scriptures.
The wonder-working God—that is the name which the Deist never uses but with awe!
(Continued on page 159)
J. W. Hauer’s “Germanic Faith”—3
By ARTHUR WILD
HAUER’S Faith is really a work of art, and in attempting an analysis one fears that one dissects a poem—a romantic poem, not one of an age of reason—instead of being ravished and captured by it as were those thousands of young people who adhered to this Faith.
Hauer is very much influenced by his stay in India, where he worked as a teacher-missionary from 1907 to 1911, and by his studies of Indian thought. He is especially a great admirer of the Bhagavad-Gita. The general plan, as well as his teaching about the last reality which is never really found, his method of metaphysical self-vision, his pan-entheism, and much of his ethics, are no doubt due more to the influence of Indian than to that of any pre-Christian or other non-Christian European thought. Old Teutonic and German sources are responsible for many examples of moral behaviour and for most other quotations. Hauer’s philosophy of history, however, is a European creation. The Indians—like thinkers of classical Europe—see the events of this world “sub specie aeternitatis”; therefore they do not ascribe any philosophical value to history and did not create any philosophy of it as the Christian and modern European thinkers did. The general spirit of Germany in the last decades before 1933 and immediately after Hitler’s revolution, with its theosophy and anthroposophy, its nationalism, neo-paganism and racialism, is the third main influence.
The Bhagavad-Gita had been admired in Germany long before Hauer. In 1823 W. von Schlegel published an edition with translation into Latin. Herder, W. von Humboldt, Hegel, Schopenhauer (Indian ideas occur not only in Schopenhauer’s philosophy, but also in thinkers who were under his influence) studied it. Count H. Keyserling praised it and various theosophists have used it as a holy book. Therefore the ground for Hauer’s original interpretation and synthesis with European thought and for his creating a mass movement adhering to it was not quite unprepared.
Much more general in Germany and elsewhere in Europe is the revival and glorification of a nation’s own past. In the Renaissance Europe discovered its Graeco-Roman past. In the Romantic period, culminating at the beginning of the 19th century, the Rationalist French discovered their Catholic, the Catholic Czechs their Protestant, and the Rationalist and Christian Germans their glorious pagan past. Some German Romantic poets became indeed almost converted to a kind of paganism, though they landed later safely in another religion which had flourished in the past—Roman Catholicism. Goethe’s paganism had been essentially Greek; the religion of the Romantics contained monotheistic and pantheistic, Christian and pagan elements, combined with mythologies, superstitions and folklore of all times and nations. F. von Schlegel even wanted to found a new religion, whose Paul he intended to be, Novalis’s role being that of Christ, the philosopher. Arndt calls for a unified national religion, Fichte for an enthusiastic history of the Germans which would become really their new Bible. (Other leading thinkers of this epoch—Hegel, Schelling and Schleiermacher—also foreshadow many an idea found in Hauer’s Faith.) The admiration of the past is reflected even in specialised sciences, e.g., in comparative Indo-Germanic linguistics—a German science par excellence!—and later also in archaeology. For many of these attitudes there are certain objective grounds, but all of them are to a certain degree subjective, evoked by the love of devoted scholars for their study and by the fact that sometimes many generations have been at work idealising those distant and no doubt very barbaric times. The glorification of the past resulted in many countries even in the forging of old remains and documents.
The second half of the 19th century and the first years of the 20th mark in Germany the flowering of science and in philosophy of the scientific conception of the world and of life. However, science and rationalism do not satisfy the Romanticists of this period who search for other sources of inspiration and faith. How general the flight from empiricism and rationalism is, can be seen even in the evolution of certain individual writers, e.g., of the dramatist G. Hauptmann, who exchanges his naturalism of the ‘eighties for a kind of mysticism and symbolism. Many of these irrationalists find their inspiration in nationalism, pan-Germanism and paganism traceable to the Romantic period of the beginning of the 19th century. The composer R. Wagner and his son-in-law H. S. Chamberlain add the idea of race based on the traditional prejudice and on the teaching of Gobineau. The orientalist Paul de Lagarde, known for his anti-semitism, calls for a German legal system and a German faith instead of the foreign ones adopted in the last centuries. Nietzsche creates his ethics of Superman. Also these attitudes are, of course, reflected in specialised sciences, e.g., in anthropology and folk psychology.
The Christian orthodoxy goes on struggling for survival, but there are also attempts at more or less radical reforms. A very strong influence upon Hauer, whilst he was still a Christian, was the so-called “liberal theology” of A. von Harnack, who tried to reconcile Lutheranism with modern ideas. A comparable movement among the Catholics was the Modernist movement. A. Bonus wanted to Germanise Christianity.
In the 20th century the adherents of neo-pagan ideologies begin to form organisations. L. Fahrenkrog’s “Germanic Faith Community” (Germanische Glaubensgemeinschaft) was founded in 1908. There followed several more, particularly after the First World War. Among the leaders of these movements let us mention Erich and Mathilde Ludendorff. During Hitler’s era there existed, apart from the traditional Churches and various less important groups, two movements: the “German Christians” (Deutsche Christen), wanting radically to Germanise (Protestant) Christianity, to whom Hauer originally offered unsuccess-
(Continued on page 160)
Robert Taylor (Continued from page 158)
I should like to know who was the eye-witness between the devil and Christ when he spent his holidays in the wilderness. The pigs were the first martyrs for Christ. Did the devil drown the pigs, or did the pigs drown the devil? Christianity is a wicked and mischievous fable, and they know it to be so.”
For this kind of mild criticism of their faith Christians had put men and women to torture and death in preceding ages, and in Taylor’s time they were given years in prison—"horrible dungeons," he himself called them. Actually, he must have been allowed some kind of liberty, as he—and Carlile—managed to get through quite a deal of literary work in confinement. Imprisonment meant for them loss of liberty and the hardship that entails; but it was not the kind suffered by criminals which, in those days, was very severe and would not now be tolerated.
(To be continued)
fully his collaboration, and the "Germanic Faith Movement." The programme for a unified National Reich Church published in 1942 was not put into practice. Hauer's Deputy in the movement was Count E. zu Reventlow. The Breslau Professor E. Bergmann formed a religion resembling in certain respects that of Hauer. The youth leader and poet Baldur von Schirach and certain other prominent National Socialists were also associated with these tendencies. The Bible of many National Socialists, A. Rosenberg's "Myth of the Twentieth Century" (Mythus des 20. Jahrhunderts), with its anti-semitism and its attacks on Slavs and many other nations, tried to show how to Aryanise Christianity radically. Hitler himself also disagreed more or less with Christianity in its present form, his party standing for "positive Christianity." On the whole, however, he was much less explicit about religious matters in his "My Struggle" (Mein Kampf) and in his public speeches than A. Rosenberg was in his Myth. The lack of criticism of certain Romanticists and the innocent forgeries of others were, in this era, much surpassed by what is usually called political propaganda.
Despite his statements concerning the leading idea of our time which we must accept (i.e., the idea of race), Hauer seems to have been really honest in his views—a great difference from many a German Christian.
The scornful abuse, characterising so many authors of this era, is absent from Hauer's diction. His passages concerning toleration contrast very clearly with the German policy of those years. Though he does not speak on the whole about other religions as worse or lower than his own, his racial theory of religion certainly contributed to the racial hatred sweeping Germany.
Eleven years before he founded his organisation, Hauer wrote that there are three factors bringing about any new spiritual movement: the need of the epoch, the spiritual inheritance of the past, and a strong personality who experiences, or at least sufficiently understands, the need of the time. Whether we mean by the strong personality Hauer or Hitler, we must say that the understanding of the needs, wishes and reactions of other nations and races was missing (to a different degree and in different spheres, of course), though they caught the spirit of a strong section of their own nation remarkably well.
Correspondence
DIALECTICAL MATERIALISM
Sir,—I thank Edwin Crouch for pointing out, in my article on dialectics, the looseness of the phrase "dialectical not mechanistic"; I should have said "Nature is essentially dialectical not mechanistic." In nature there are no unchangeable things; all are in process of change. Fundamentally, nature is a complex of processes. It was in this sense I compared the two conceptions.
It is quite possible to state that mechanistic interpretation is more limited than that of dialectics, and I agree that many phenomena can be explained mechanically, and a knowledge of mechanics is extremely useful. Nevertheless, the limitations of mechanics are found when we try to apply this system universally. It would be difficult to give a satisfactory explanation of action at a distance, with no intervening medium, between two bodies from a mechanical point of view. Motion cannot be expressed mechanically, although every movement is a movement of matter, but movement of matter is change in general and includes many aspects of change.
In essence nature is dialectical and includes mechanical motion. For example, Newton showed the orbital motion of a planet to be the resultant of two forces, one causing the planet to fall towards the sun, and the other causing the planet to fly away from the sun. The resolution of these two forces showed the direction of the motion of the planet. This can be simply demonstrated mechanically, but mechanics cannot explain the contradiction of the planet flying away from the sun and at the same time falling towards it. Even the simple change of place cannot be expressed within the limits of mechanistic theory; yet the continuous assertion and the simultaneous solution of this contradiction is precisely what motion is.
Edwin Crouch agrees with me regarding the necessity of dialectic in social processes; that is why I have not selected any illustration here. I also approve of his illustration of the use of mathematics in acquiring a better understanding. I should add this, however, for his consideration: algebra was limited when it came to expressing continuous change. The calculus was invented and used for this purpose, but it deals with continuity, and cannot be used to show the contradiction between the continuous and the discrete. It cannot deal with the discrete.
The point where the interruption occurs, that is, the change from quantity to quality, is known as the dialectical leap. The emergence of the new from the change of quantity into quality is not continuous, and only within quantitative limits can mathematics be applied. While it is a grand thing to know the power of our tools, it is of tremendous importance to know their limitations.
I am afraid it would take up too much space to deal with the mechanical account of thinking; I may have taken up too much space as it is. I am pleased that my contribution has attracted some notice, and I shall try to deal with thinking in an article in the near future.—Yours, etc.,
JIM GRAHAM.
THE GEOGRAPHY OF HUNGER
Sir,—I am not altogether astonished at Mr. McHattie's outburst—but I hold to every word I wrote in my articles. Whatever meaning Mr. McHattie gives to the word "production," the author himself meant "food" production, and nothing else. He was not writing about the "production" of pins or radio sets or motor cars, but about food. "The effort to make everybody on the face of the earth productive" means exactly what I said.—Yours, etc.,
H. CUMBER.
Sir,—Mr. Bayard Simmons quotes Malthus in your issue of January 25. The population increase is likened to a geometric progression—1, 2, 4, 8, 16—and the food supply to an arithmetical progression. This is not a theory but a fact.
The world population progression is geometrical. It doubles itself every generation. It has only been held back from succeeding in this in all history by war, famine, disease, and hatred. Disease has been greatly mitigated by Science and Medicine. Hence only famine, war and hatred remain to check world population! That is not civilisation but bestial barbarism, and universal contraception is the only answer.
The world food supply cannot be doubled every generation in perpetuity. No Scientist has ever made such a claim. Can anyone even show that world food supply can increase in an arithmetical ratio? If it can, and we introduce time and amount, then we shall have: first year, one ton; two years, two tons; three years, three tons; four years, four tons; five years, five tons. That is, in five years, five times the amount! This has never been so, and never will be so in perpetuity.
It is, therefore, plain that if famine and disease and war are eliminated or reduced, world population will increase far faster than food supply and go on doing so in perpetuity. Therefore, universal contraception must take the place of the check of famine, disease and war, if we are ever to obtain a better civilisation. Thus, the notion that the population check can be effected by total abstinence, "safe periods," etc., is a superstitious fairy tale of the Religionist, the superstitious person, and the so-called Moralist. Neither will it be any effective check to confine contraception to the married in a world or society where only a few are given the material means whereby they can marry.
Neither can the force of Evolution, nor the powers of Nature be denied, defied, or mocked by Man, his Governments, his Political or Economic Systems, or his gods—or even his Science. Yours, etc.,
RUPERT L. HUMPHREYS.
THE BIBLE: WHAT IS IT WORTH? By Colonel R. Ingersoll. Price 2d.; postage 1½d.
NOW READY
THE FREETHINKER
VOL. 72
Bound green cloth, lettered gold PRICE 24/- Postage 1d.
BLANKENBERGE (Belgian Coast).—Hotel Astoria. Seven days £7 10s. inclusive; English spoken; special terms for parties.
Agglomeration, Structural Embeddedness, and Enterprises’ Innovation Performance: An Empirical Study of Wuhan Biopharmaceutical Industrial Cluster Network
Jingjing Zeng 1, Dingjie Liu 1 and Hongtao Yi 2,3,*
1 School of Public Administration, Zhongnan University of Economics and Law, Wuhan 430000, China
2 School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
3 John Glenn College of Public Affairs, The Ohio State University, 1810 College Rd, Columbus, OH 43221, USA
* Correspondence: email@example.com
Received: 12 June 2019; Accepted: 9 July 2019; Published: 18 July 2019
Abstract: Industry cluster’s agglomeration effects facilitate higher productivity for enterprises located in the industry cluster. This paper examines the agglomeration effects of an industry cluster on firms’ innovation performance through studying the network embeddedness of biopharmaceutical companies, using cross-sectional data from 2011 to 2015. Measuring the technological cooperation network with text analysis of the interfirm agreements among core enterprises, we found that betweenness centrality and clustering coefficient have statistically significant and positive effects on an enterprise’s ability for technological innovation, while the influence from the constraint of structural holes is negative. Our results suggest that government should encourage the leading enterprises to establish professional technology cooperation platforms and provide additional support to promote cooperation among firms.
Keywords: industry cluster; agglomeration effects; structural embeddedness; innovation performance; biopharmaceutical industry
1. Introduction
Over the past several decades, economists have examined the critical role that industrial cluster plays in generating innovation performance by asking how and why industrial cluster could accelerate overall innovation performance within the cluster. Extant studies on this question have mainly focused on two aspects. The first stream of literature has focused on the industry cluster as an integrated object, and researchers following this tradition study competitive advantages of industry clustering. Porter has been one of the pioneers in studying the competitive edge of industry cluster at the national level [1]. It is increasingly popular to combine the theory of new economic geography with the development of industry cluster to study the effects of geographical proximity on regional economic output [2]. Another stream of research that investigates industry cluster’s impacts on innovation performance is centered on the roles of enterprises within the industrial cluster. An increasing number of studies have paid attention to the agglomeration effect of enterprise’s innovation performance at the micro-level [3,4]. Other studies have been conducted to examine the interaction between enterprises or even different actors within and beyond clusters [5–7]. The research on agglomeration effects of industry cluster and enterprises’ innovation performance has generated an international literature and academic dialogue. In particular, it becomes highly relevant in China as China intensifies its industrial restructuring. How to strengthen enterprises’ innovation capability in an increasingly globalized and industrialized network remains one of the main tasks for firms and government in China.
As a representative high-tech industry, the biopharmaceutical industry demonstrates features of agglomeration in technology and capital. Under the market economy with Chinese characteristics, biomedical industrial parks of different sizes have been nurtured during the past ten years. Influenced by policy guidance and industry planning by local governments, the biopharmaceutical industry has taken off rapidly. Examples include Zhangjiang Park of Shanghai and Optical Valley Park of Wuhan, both of which have begun to generate agglomeration effects. With more frequent joint technology R&D and industry–university–research cooperation, biomedicine enterprises developed close and reciprocal cooperation relationships through signing technological collaboration contracts between and among biomedicine enterprises, universities, and research institutions. Whether this microcontract cooperative mechanism has promoted innovation performance of biomedicine enterprises has significant practical implications for these enterprises. An equal and reciprocal cooperative contract will likely lead to quality cooperation, and induce more technology sharing and spatial agglomeration of enterprises. Therefore, we analyze whether the agglomeration effects of the biomedicine industry have produced impacts on innovation performance. The practical implications of this study are to inform enterprises of the technological development patterns and to facilitate evidence-based policy making on science and technology policy.
This study presents a novel perspective to the study of local industrial cluster in the following ways. First, we collected unique network data on Wuhan Biopharmaceutical Industrial Cluster Network, which was one of the first in the literature to focus on industrial innovation in central China region. Second, we are able to predict corporate innovation performance with network indicators, as informed by social capital theories. This study helps build a bridge between social science theories and innovation research.
The next section focuses on the technological connections among the enterprises in the industry cluster network. Then we discuss inherent logic connections among market transactions, agglomeration effects and innovation performance, based on which we formulate hypotheses that agglomeration effect measured by structure embeddedness of the core enterprises would promote firms’ technological innovation ability and economic performance. We then collect data from various sources, including websites of these enterprises, the Patent Star (China’s official patent website) and CNKI (China National Knowledge Infrastructure). After coding these data, we run social network analysis to measure the scale of cluster’s agglomeration effects. Then two empirical models are estimated to test the hypotheses and examine subnetwork structures. The results showed that betweenness centrality and clustering coefficient are positively related to enterprises’ technological innovation capability, while structural holes produce negative impacts. We conclude with policy implications that professional platforms should be established by core enterprises, and that government should provide a supportive environment to promote close cooperation among enterprises.
2. Literature Review
With interdependent enterprises, science and research institutions, intermediary organizations and consumers, industry cluster takes the shape of a closely connected network, which leads to agglomeration effects [8]. Agglomeration effects of industry cluster present a process of knowledge and information dissemination, human resource sharing, and cooperative and compatible matchmaking among different enterprises. This leads to an external economy for enterprises to acquire higher productivity [9]. Enterprises’ connection degree in the industry cluster network can be measured by the degree of spillover in network resources and the strength of embeddedness in network structure.
The spillover of resources, including knowledge, information, personnel, capital, and technology in and between clusters, has become an emerging research topic [10,11]. Knowledge spillover was particularly emphasized by Powell [10] and Simona [12]. Knowledge connections were further divided into component knowledge and architectural knowledge to study their differentiated effects on cluster knowledge spillover and competitive edge [13]. A financial transaction network among film businesses in Potsdam, Germany, was examined [14]. Formal and informal managerial connections between
cluster enterprises were utilized to study the effects of Toronto mutual fund cluster network connections on enterprises’ innovation performance [15]. It is argued that information advantage enjoyed by core enterprises enhances knowledge spillover in the cluster [16].
Recently, researchers started to evaluate cluster network’s agglomeration degree through its structural embeddedness [17–19]. Qian took structural hole and centrality as two major indicators to measure the adjustment effects, while paying close attention to effects in economic output brought by enterprises’ position in Shenzhen ICT cluster network [20]. Research on Chengdu furniture manufacturing in China showed that cluster network’s structural embeddedness generated pronounced impacts on the enterprise’s outputs, finding positive effects of degree centrality and betweenness centrality [19]. Another study on the textile industry cluster with economic performance demonstrated that science and technology research institutions and leading businesses play a key role in cluster network. At the same time, a good network position would enhance the innovation capability of an enterprise, facilitating multiple outputs [21]. Resource acquisition ability of enterprises in technological knowledge was taken into consideration to build a more comprehensive model in testing network centrality and intermediary effect on innovation improvement [22–25]. Further analysis investigated the impact of clustering coefficient on output efficiency of equipment manufacturing in Northeast China [26].
In this paper, we combine technological spillover with cluster’s network structure embeddedness. The scale of agglomeration effects can be measured by the degree of network structural embeddedness, operationalized with the number of technological cooperation contracts and agreements among enterprises. At the same time, economic attributes and self-learning property are highlighted in our paper, while emphasis is placed on interactions among organizations in the cluster network [27].
3. Theoretical Framework and Hypotheses
Network structure is determined by enterprises’ relative positions, the functions they possess, and the types of links between these enterprises and other actors. The features of network structure have significant impact on agglomeration, enhancing the firms’ economic outputs.
3.1. Agglomeration Effect of Industry Cluster
Relationships existing in a cluster network can be used as a valuable resource [28], and innovative activities usually accompany efficient resource collection and accurate processing. Effective absorption and efficient use of external resources may give an enterprise more opportunities to survive in intense market competition [29]. Through mechanisms to enhance trust-building, information exchange, and joint problem-solving, industry cluster regulates enterprises’ market operation and expectations [30]. Market transactions, agglomeration effects and innovation performance are closely correlated when firms are clustered in a network.
First of all, market transactions promote agglomeration effects within the industry cluster. After repeated market trading operations, each enterprise in the cluster network would observe the cooperation contracts and respect the interests of other firms. As a result, ad hoc cooperation or speculative trading is replaced by long-term cooperation based on mutual benefits [31,32]. The so-called “habitual” trust relationship becomes a credible commitment for each party. Any actions that break the contract will suffer immense loss in capital financing, information access and reputation, impeding further cooperation within the cluster. Over the long run, this will turn an enterprise into an active cooperator who prefers a collaborative strategy. Under the environment with high mutual trust, incidences to break the contract will drop, while transaction costs and fees paid by enterprises will be minimized.
Secondly, a spillover cluster environment is facilitated by agglomeration effects. A higher level of agglomeration effect means more frequent exchange of knowledge, information, capital, and personnel. All these will have great impacts on enterprises’ innovation performance through network structural embeddedness [33]. A general equilibrium model of the moral hazard problem confronted by heterogeneous groups in market transactions demonstrated the sustainable growth effect in outputs brought by a trustful environment [34,35].
3.2. Betweenness Centrality and Innovation Performance
Betweenness centrality for an individual enterprise is the proportion of all the shortest paths (or geodesics) between other pairs of enterprises in the cluster network that pass through that enterprise. Active and efficient involvement in information transmission leads to a higher betweenness centrality score for an enterprise. An enterprise may enjoy a dominant position if it lies on multiple shortest information paths. The formula for betweenness centrality can be written as [36]
\[
\text{betweenness centrality}_{it} = \sum_{j<k} g_{jk}(n_i)/g_{jk}
\]
where \(g_{jk}(n_i)\) is the number of shortest paths between enterprises \(j\) and \(k\) that pass through enterprise \(i\), and \(g_{jk}\) is the total number of shortest paths between enterprises \(j\) and \(k\).
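The measures in this section were computed by the authors with Pajek4.0 (see Section 4.2.1). Purely as a hedged illustration, the same quantity can be reproduced with the open-source networkx library; the toy cooperation network below is hypothetical, not the Wuhan data.

```python
# Illustrative sketch only: the paper used Pajek4.0; networkx yields the
# same normalized betweenness centrality. The toy graph is hypothetical.
import networkx as nx

G = nx.Graph()  # undirected: contracts are treated as reciprocal links
G.add_edges_from([("C1", "C2"), ("C1", "C3"), ("C2", "C3"),
                  ("C3", "C4"), ("C4", "C5")])

# For each node i, sum g_jk(n_i)/g_jk over pairs j < k, normalized by
# the number of node pairs excluding i.
bc = nx.betweenness_centrality(G, normalized=True)
print(bc)  # C3 and C4 score highest: they bridge the two ends of the graph
```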
**Hypothesis 1:** Betweenness centrality of the enterprise is positively related to its level of technological innovation.
**Hypothesis 2:** Betweenness centrality of the enterprise is positively related to its economic performance.
3.3. Structural Holes and Innovation Performance
Structural holes develop when two adjacent market actors (both indirectly connected through a third enterprise \(i\)) are not directly connected. Heterogeneous connections are possessed by enterprise \(i\), who acts as a “bridge” controlling diverse information exchange paths. This provides enterprise \(i\) with an advantage in facilitating industrial application of its hi-tech research findings. However, for the two actors who have to connect through enterprise \(i\) in the industry cluster, structural holes will likely constrain technological knowledge acquisition and other market activities. We focus on the aggregate constraint of structural holes on an enterprise’s innovation outputs. The constraint imposed on enterprise \(i\) by enterprise \(j\) can be written as [37]
\[
C_{ij} = \left( p_{ij} + \sum_{q \neq i,j} p_{iq} m_{qj} \right)^2
\]
where \(p_{iq}\) is the proportion of enterprise \(i\)’s connections that involve enterprise \(q\), relative to all of enterprise \(i\)’s connections within the cluster, and \(m_{qj}\) is the marginal strength of the connection from enterprise \(j\) to enterprise \(q\): the value of that connection divided by the largest value among all the connections of enterprise \(j\). The aggregate constraint on enterprise \(i\) can then be described as
\[
\text{structural holes}_{it} = \sum_j C_{ij}
\]
Substantial costs will likely be incurred by a pharmaceutical enterprise in maintaining healthy relationships with other enterprises and market actors. Under this condition, redundant links must be cut off and management resources must be devoted to key connections to achieve efficient output both for the enterprise and the community. A higher structural-hole constraint results in a lower degree of freedom for the enterprise to seek opportunities for cooperation.
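As a similar illustrative sketch, networkx implements Burt’s aggregate constraint directly; again the toy graph below is hypothetical, not the Wuhan data.

```python
# Illustrative sketch only: Burt's aggregate constraint, sum over j of C_ij,
# as implemented by networkx on a hypothetical toy graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("C1", "C2"), ("C1", "C3"), ("C2", "C3"),  # closed triangle
                  ("C1", "C4"), ("C4", "C5")])               # C4 spans a hole

constraint = nx.constraint(G)
# Enterprises locked inside a dense clique (C2, C3) are highly constrained;
# a broker such as C4, whose contacts are unconnected, is less constrained.
for node in sorted(constraint):
    print(node, round(constraint[node], 3))
```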
**Hypothesis 3:** Structural holes in the constraint are negatively related to an enterprise’s technological innovation.
Hypothesis 4: Structural holes in the constraint are negatively related to an enterprise’s economic performance.
3.4. Clustering Coefficient and Innovation Performance
Clustering coefficient measures the quality or frequency of one enterprise’s connection with its neighboring actors. The definition is [38]
\[ C_i = \frac{2e_i}{k_i(k_i - 1)} \]
where \( C_i \) is the clustering coefficient of enterprise \( i \), \( k_i \) is the degree centrality of enterprise \( i \), and \( e_i \) is the number of links among its cooperators. Further, \( CC_t \), the clustering coefficient of the cluster network, can be defined as
\[ \text{clustering coefficients}_t = \left[ \sum_{i=1}^{N} C_i \right] / N \]
High quality links and frequent cooperation among enterprises located in a cluster network enhance their relationship, trust, and stability and improve efficiency in resource sharing, driving innovative activities.
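Both the node-level coefficient and the network average can likewise be illustrated with networkx on another hypothetical toy graph:

```python
# Illustrative sketch only: C_i = 2*e_i / (k_i*(k_i - 1)) per enterprise and
# the network average [sum_i C_i]/N, computed on a hypothetical graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("C1", "C2"), ("C1", "C3"), ("C2", "C3"), ("C3", "C4")])

print(nx.clustering(G))          # per-enterprise clustering coefficient C_i
print(nx.average_clustering(G))  # network-level clustering coefficient CC_t
```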
Hypothesis 5: Clustering coefficient of an enterprise is positively related to its technological innovation.
Hypothesis 6: Clustering coefficient of an enterprise is positively related to its economic performance.
4. Research Design
4.1. Network Boundary
Establishing network boundary is key in applying social network analysis to industrial cluster study, the process of which includes defining the cluster boundary, cluster actor selection, and measurement of network links.
4.1.1. Why Study Biopharmaceutical Industry Cluster in Wuhan
With a comparative advantage in land, labor and transportation costs and favorable industrial policies, Wuhan has become an emerging city in the biopharmaceutical industry. More than 500 enterprises have been established within six industry parks in the Wuhan East Lake High-Tech Zone, with total corporate income over 500 billion RMB yuan, ranking third in China. Since 2008, cooperation between enterprises, universities and research institutions has become more frequent and agglomeration effects have emerged. The biopharmaceutical industry features a high degree of agglomeration in capital and high-tech talent, and the agglomeration effects of enterprises reflect cluster-level environmental change.
4.1.2. Defining Network Actors and Measuring Network Connections
The roles the enterprises play are embedded in their network relationships in the cluster network. In this study, we focus on connections between core biopharmaceutical enterprises and other enterprises which share similar features with the core enterprises, as well as universities and research institutions. The network relationships capture both industry–university–research cooperation and interfirm cooperation within the Wuhan East Lake High-Tech Zone.
To measure the cluster network, we focus on technological cooperation that spans new technological innovation, patent application and product development. All these can be clearly measured by inter-enterprise contracts or agreements, which were collected by searching enterprises’ websites, the Patent Star (China’s official patent website) and CNKI. Contracts or cooperative texts are available online, and the number of cooperations and the duration of such cooperation are coded. For example, if a
contract or a cooperative agreement was signed between enterprises \(i\) and \(j\), then, we assume that an equal, reciprocal and undirected link was established between them.
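A minimal sketch of this coding step follows; the record fields (firm_a, firm_b, year) are hypothetical placeholders for the contract attributes described above, not the paper’s actual schema.

```python
# Illustrative sketch only: coding cooperation contracts into an undirected
# network. The field names and sample records are hypothetical.
import networkx as nx

contracts = [
    {"firm_a": "C1", "firm_b": "C3", "year": 2012},
    {"firm_a": "C3", "firm_b": "R28", "year": 2013},
    {"firm_a": "C1", "firm_b": "C18", "year": 2014},
]

G = nx.Graph()  # undirected: each contract is an equal, reciprocal link
for rec in contracts:
    # Repeated contracts between the same pair accumulate as link weight.
    if G.has_edge(rec["firm_a"], rec["firm_b"]):
        G[rec["firm_a"]][rec["firm_b"]]["weight"] += 1
    else:
        G.add_edge(rec["firm_a"], rec["firm_b"], weight=1)

print(G.number_of_nodes(), G.number_of_edges())
```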
4.1.3. The Process of Identifying Core Enterprise Network
As discussed above, technological cooperation data of a hundred and one biopharmaceutical enterprises within Wuhan East Lake High-Tech Zone were collected and coded based on technical cooperation contracts among these enterprises from 2011 to 2015. We then conducted two rounds of interviews (one on 2 June 2015 and another on 2 December 2015) with biopharmaceutical industry park leaders of Wuhan East Lake High-Tech Zone and senior managers of nine leading profit-making enterprises, including chemical drug enterprises, traditional Chinese medicine enterprises, and biological product enterprises. The network data for forty core enterprises are collected from the interviews. With the interview data, we are able to capture the core network of the biopharmaceutical industry cluster. See Table 1 for the numbers of links for the core enterprises.
**Table 1.** Numbers of links for the core enterprises.
| ID | NL | ID | NL | ID | NL | ID | NL | ID | NL | ID | NL |
|------|----|------|----|------|----|------|----|------|----|------|----|
| C1 | 40 | C8 | 7 | C15 | 8 | C22 | 11 | C29 | 4 | C36 | 9 |
| C2 | 21 | C9 | 7 | C16 | 9 | C23 | 7 | C30 | 11 | C37 | 11 |
| C3 | 38 | C10 | 7 | C17 | 12 | C24 | 6 | C31 | 4 | C38 | 2 |
| R4 | 16 | C11 | 4 | C18 | 26 | C25 | 8 | C32 | 2 | C39 | 3 |
| C5 | 10 | C12 | 15 | C19 | 11 | C26 | 3 | C33 | 5 | C40 | 6 |
| C6 | 8 | C13 | 6 | C20 | 25 | C27 | 8 | C34 | 4 | | |
| C7 | 12 | C14 | 2 | C21 | 18 | R28 | 45 | C35 | 6 | | |
Note: ID: ID of enterprises; NL: Number of Links.
A visualization of the core enterprises in the industrial cluster was performed with Pajek4.0. Core enterprises are located in the center of the structure graph, while universities, research institutions, and enterprises of the same type are distributed in the periphery, as shown in Figure 1.

4.2. Variables
4.2.1. Research Method
Betweenness centrality, constraint of structural holes and clustering coefficient of structural embeddedness in agglomeration effects are calculated with Pajek4.0. A linear regression model is employed to test the effect of structural embeddedness on innovation performance. Firstly, only control variables are included in the regression model; then, explanatory variables are included to test their effects on patent outputs. Secondly, explanatory variables are added to test the effects on profits of new products from 2012 to 2015, separately.
4.2.2. Description of Explanatory Variables
Table 2 provides descriptive statistics for the explanatory variables used in this study.
**Table 2.** Explanatory variables for the core enterprises.

| ID | BC | SH | CC | ID | BC | SH | CC |
|------|--------|--------|--------|------|--------|--------|--------|
| C1 | 0.157617 | 0.035807 | 0.11626 | C21 | 0.093471 | 0.076411 | 0.022222 |
| C2 | 0.076787 | 0.053872 | 0.019858 | C22 | 0.04013 | 0.124327 | 0.010864 |
| C3 | 0.169376 | 0.028493 | 0.043678 | C23 | 0.024969 | 0.148234 | 0.002469 |
| R4 | 0.056544 | 0.071986 | 0.015715 | C24 | 0.012952 | 0.176842 | 0.002222 |
| C5 | 0.029577 | 0.10644 | 0.005817 | C25 | 0.0218 | 0.143745 | 0.004134 |
| C6 | 0.022385 | 0.126493 | 0.000961 | C26 | 0.001691 | 0.347779 | 0.000533 |
| C7 | 0.031276 | 0.122398 | 0.010816 | C27 | 0.020495 | 0.143745 | 0.003419 |
| C8 | 0.013175 | 0.166026 | 0.003509 | R28 | 0.215283 | 0.030901 | 0.140625 |
| C9 | 0.011427 | 0.165484 | 0.004159 | C29 | 0.000599 | 0.32136 | 0.002133 |
| C10 | 0.030958 | 0.161061 | 0.005761 | C30 | 0.039597 | 0.095454 | 0.00388 |
| C11 | 0.012777 | 0.25 | 0 | C31 | 0.008239 | 0.260417 | 0.000855 |
| C12 | 0.052635 | 0.080458 | 0.007133 | C32 | 0.006237 | 0.5 | 0 |
| C13 | 0.019409 | 0.166667 | 0.000002 | C33 | 0.018782 | 0.2 | 0 |
| C14 | 0.006237 | 0.5 | 0 | C34 | 0.003452 | 0.25 | 0 |
| C15 | 0.03727 | 0.125 | 0 | C35 | 0.016303 | 0.170278 | 0.00112 |
| C16 | 0.027626 | 0.111111 | 0 | C36 | 0.023234 | 0.125668 | 0.003846 |
| C17 | 0.044649 | 0.089968 | 0.007273 | C37 | 0.051374 | 0.090909 | 0.000003 |
| C18 | 0.08384 | 0.04622 | 0.031515 | C38 | 0.00016 | 0.5 | 0 |
| C19 | 0.041511 | 0.111118 | 0.006984 | C39 | 0.006237 | 0.357108 | 0.000855 |
| C20 | 0.084763 | 0.041662 | 0.009107 | C40 | 0.015736 | 0.166667 | 0.000002 |
Note: ID: code of enterprises; BC: betweenness centrality; SH: constraint of structural holes; CC: clustering coefficient;
All values were calculated with Pajek4.0.
4.2.3. Dependent Variables
Innovation performance outputs can be measured in different ways, with indicators on technological, financial, and management performance. We select patent output to evaluate the innovation capability of an enterprise, based on previous studies. Patents authorized from 2011 to 2015 are archived and coded. R&D investment, average annual profits from new products, and net profits can be used to evaluate economic performance at the micro-level. Here we choose average annual profits from new products as our measure. All these profit data were collected through our interviews with administrative staff in the biopharmaceutical industry park.
4.2.4. Control Variables
Capital investment in innovation and management affects the outputs of new products; it represents the comprehensive strength and strategic management of an enterprise in development. Although the biopharmaceutical industry possesses a high degree of agglomeration in capital and high-tech talent, technology development and patent application take a long time, so time also plays a key role in innovation. Registered capital and the time since establishment of the joint-stock enterprise were therefore taken as two control variables. See Table 3 for detailed information on all the variables.
Table 3. Descriptive statistics.
| VAR | Max | Min | Mean | N |
|-------|------|------|------|----|
| BC | 0.22 | 0.00 | 0.03 | 40 |
| SH | 0.5 | 0.03 | 0.17 | 40 |
| CC | 0.14 | 0 | 0.01 | 40 |
| Lnyp11| 12.62| 6.22 | 8.82 | 39 |
| Lnyp12| 13.22| 6.36 | 9.30 | 39 |
| Lnyp13| 13.43| 6.36 | 9.70 | 39 |
| Lnyp14| 13.58| 6.90 | 9.79 | 39 |
| Lnyp15| 13.64| 7.22 | 9.87 | 39 |
| Patents| 50 | 1 | 10.13| 40 |
| RT | 22 | 3 | 10.28| 39 |
| RC | 10.82| 6.22 | 8.19 | 39 |
Note: Lnyp: annual profits from 2011–2015; P: patents; RT: registered time; RC: registered capital. Data were log-transformed.
5. Empirical Analysis
5.1. Pearson Correlation Result
Pearson correlation coefficients were calculated between the explanatory variables and the dependent variables. From the results, the Pearson correlations between betweenness centrality, constraint of structural holes, clustering coefficient and patent outputs are 0.828, −0.556, and 0.795, all statistically significant at the 0.01 level. The Pearson correlations between betweenness centrality, structural holes, clustering coefficient and economic performance from 2012 to 2015 are also very strong. The results are presented in Table 4.
Table 4. Pearson correlation results (control variables: RT & RC).

| VAR | Statistic | P | Lnyp11 | Lnyp12 | Lnyp13 | Lnyp14 | Lnyp15 |
|-----|-----------|---|--------|--------|--------|--------|--------|
| BC | Correlation | 0.828 *** | −0.03 | 0.556 *** | 0.736 *** | 0.761 *** | 0.712 *** |
| | p-value | 0.000 | 0.86 | 0.000 | 0.000 | 0.000 | 0.000 |
| SH | Correlation | −0.556 *** | −0.329 | −0.587 *** | −0.667 *** | −0.767 *** | −0.713 *** |
| | p-value | 0.000 | 0.047 | 0.000 | 0.000 | 0.000 | 0.000 |
| CC | Correlation | 0.795 *** | 0.014 | 0.376 | 0.609 ** | 0.628 *** | 0.69 *** |
| | p-value | 0.000 | 0.934 | 0.022 | 0.000 | 0.000 | 0.000 |

Note: 2-tailed; **: significant at the 0.05 level; ***: significant at the 0.01 level.
5.2. Testing the Hypotheses
5.2.1. OLS Regression Analysis
We employ ordinary least square (OLS) regressions to test our hypotheses. We use OLS due to the fact that our dependent variable is continuous and that our data is cross-sectional. Two empirical models are estimated to test the hypotheses:
Model 1: \( P = \beta_1 + \beta_2 BC + \beta_3 SH + \beta_4 CC + \beta_5 RT + \beta_6 RC + \varepsilon \)
Model 2: \( Lnyp_{(t)} = \beta_1 + \beta_2 BC + \beta_3 SH + \beta_4 CC + \beta_5 RT + \beta_6 RC + \varepsilon \)
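As an illustration of the estimation step (the paper’s own estimates appear in Tables 5 and 6), Model 1 can be fitted with statsmodels; the synthetic DataFrame below merely mimics the variable ranges in Table 3 and stands in for the firm-level data, which are not reproduced here.

```python
# Illustrative sketch only: estimating Model 1 with statsmodels OLS on
# synthetic data whose ranges mimic Table 3 (not the actual firm data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "BC": rng.uniform(0.0, 0.22, 39),    # betweenness centrality
    "SH": rng.uniform(0.03, 0.5, 39),    # constraint of structural holes
    "CC": rng.uniform(0.0, 0.14, 39),    # clustering coefficient
    "RT": rng.integers(3, 23, 39),       # registered time
    "RC": rng.uniform(6.22, 10.82, 39),  # registered capital (logged)
})
df["P"] = rng.poisson(10, 39)            # patent counts (dependent variable)

# Model 1: P = b1 + b2*BC + b3*SH + b4*CC + b5*RT + b6*RC + e
X = sm.add_constant(df[["BC", "SH", "CC", "RT", "RC"]])
model1 = sm.OLS(df["P"], X).fit()
print(model1.summary())
```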
Four submodels are estimated with explanatory variables BC, SH, CC and control variables RT and RC based on model 1 with linear regression method.
Explanatory variables BC, SH and CC are included separately in submodels 2 to 4, building on submodel 1, which contains only the control variables. Regression results show relatively large and positive coefficients for BC and CC, 208.383 and 391.705 respectively (F(BC) = 34.02, p(BC) < 0.01; F(CC) = 27.29, p(CC) < 0.01). The regression coefficient for SH in S-M3 is negative and significant (F(SH) = 9.11, p(SH) < 0.01). The higher goodness-of-fit of Model 1 (Adj-R² = 0.77) shows a better explanation of the effect of agglomeration on innovation capacity improvement. From
all of the above, we can draw a conclusion that agglomeration effects have accelerated innovation capacity to some degree. Hypotheses 1, 3, and 5 have been supported. Table 5 reports results for the regression models.
**Table 5.** Regression analysis between structure embeddedness and patent outputs.
| VAR | Sub Model 1 | Sub Model 2 | Sub Model 3 | Sub Model 4 | Model 1 |
|-----|-------------|-------------|-------------|-------------|---------|
| RT | 0.809 ** | 0.419 ** | 0.697 ** | 0.245 * | 0.287 * |
| | (2.3) | (2.05) | (2.34) | (1.08) | (1.49) |
| RC | 0.689 | −0.439 | 0.241 | −0.059 | −0.375 |
| | (0.75) | (−0.81) | (0.31) | (−0.1) | (−0.76) |
| BC | 208.383 *** | | | | 95.7 ** |
| | (8.74) | | | | (2.18) |
| SH | | −40.741 *** | | | −12.32 ** |
| | | (−3.95) | | | (−1.47) |
| CC | | | 391.705 *** | | 215.06 *** |
| | | | (7.74) | | (2.99) |
| N | 39 | 39 | 39 | 39 | 39 |
| F | 4.15 *** | 34.02 *** | 9.11 *** | 27.29 *** | 26.55 *** |
| Adj-R² | 0.14 | 0.72 | 0.39 | 0.67 | 0.77 |
Note: * Significance at the $p < 0.1$ level; ** Significance at the 0.05 level; *** Significance at the 0.01 level. S-M: submodel.
Regression analysis of explanatory variables on innovation capacity tests whether stronger ability in technological knowledge learning and information processing has been developed in the cluster network. We study whether technological knowledge acquisition and processing have been applied in industrialization, and whether they have improved economic performance.
Regression results of S-M 5 to 7 show that BC and CC did not promote enterprise profits, while the constraint of SH has a negative relationship with profit achievement. The negative regression coefficient of the constraint of SH is present from 2012 to 2015, with p(2012) < 0.05, p(2013) < 0.05, p(2014) < 0.01, and p(2015) < 0.05, all statistically significant. These results support Hypothesis 4, but not Hypotheses 2 and 6.
### 5.2.2. Multicollinearity Test
After testing our hypotheses, we conducted a multicollinearity test with STATA 13. The results showed that the VIF values for all explanatory variables are less than 10, with an average VIF of 3.64. The VIF values for BC, SH, CC, RT, and RC are 7.48, 1.9, 6.09, 1.29, and 1.43, respectively. Thus we conclude that we do not observe serious multicollinearity among the independent variables.
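The paper ran this check in STATA 13; a hedged Python equivalent, continuing the synthetic-data sketch above (where `X` is the design matrix built there), would be:

```python
# Illustrative sketch only, continuing the synthetic-data example above:
# variance inflation factors for each regressor (the constant is kept in X
# for estimation but skipped in the report, as is conventional).
from statsmodels.stats.outliers_influence import variance_inflation_factor

for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))
```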
### 5.2.3. Heteroscedasticity Test and Robust Regression Analysis
We conducted heteroscedasticity tests, both the White test and the BP test, for estimated model 1 and estimated model 2. In estimated model 1, the p-value is 0.24 > 0.05 in the White test, and 0.17 > 0.05 in the BP test. In estimated model 2, with Lnyp (2015) as the explained variable, the p-value is 0.01 < 0.05 in the White test and 0.00 < 0.05 in the BP test. From the tests above we find no evidence of heteroscedasticity in model 1, so the OLS regression analysis is efficient. For estimated model 2, we do find heteroscedasticity, so a further robust regression analysis is needed.
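A hedged sketch of both tests and the robust refit, again continuing the synthetic-data example above; note the paper’s actual Model 2 uses Lnyp rather than P as the dependent variable.

```python
# Illustrative sketch only, continuing the synthetic-data example above:
# Breusch-Pagan and White tests on the Model 1 residuals, then a refit with
# heteroscedasticity-robust (HC1) standard errors, as one would do for
# Model 2, where heteroscedasticity was detected.
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

bp_lm, bp_p, bp_f, bp_fp = het_breuschpagan(model1.resid, X)
w_lm, w_p, w_f, w_fp = het_white(model1.resid, X)
print("BP p-value:", bp_p, "White p-value:", w_p)

robust = sm.OLS(df["P"], X).fit(cov_type="HC1")  # robust covariance refit
print(robust.summary())
```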
The robust regression analysis also shows that BC and CC did not promote enterprise profits in 2015, and the same holds for 2012, 2013, and 2014. The constraint of SH, however, retains its negative relationship with profit achievement in 2015, matching the results in Table 6; the negative regression coefficient of the constraint of SH in 2015 has p(2015) < 0.05 and is thus statistically significant. From the results of estimated models 1 and 2, we can conclude that the agglomeration effects of the Wuhan biopharmaceutical industry cluster, measured here by the network structural embeddedness dimensions of betweenness centrality, constraint of structural holes, and clustering coefficient, emerged directly and had positive effects on enterprises' innovation as represented by patents, while they did not produce significant effects on profits. In other words, the technological cooperation within the cluster has generated significant innovation achievements, in the form of patents and other technological improvements, without directly promoting enterprise profits. This makes practical sense: technical cooperation is a long-term process that may take a long time to bring profits, and, more importantly, an enterprise's profits are mainly determined by its operation strategy and management methods. We analyze the cluster network below to better understand the overall cooperation status of the cluster. These results are reported in Table 6.
**Table 6.** Regression analysis of structure embeddedness on economic performance.
| VAR | Sub Model 5 | Sub Model 6 | Sub Model 7 | Model 2 |
|-----|-------------|-------------|-------------|--------|
| SJ | 0.149 ** | 0.093 | 0.088 | 0.076 |
| | (2.01) | (1.55) | (1.54) | (1.28) |
| K | 0.33 ** | 0.356 *** | 0.377 *** | 0.406 *** |
| | (2.01) | (2.66) | (2.96) | (3.03) |
| BC | 14.433 | 8.195 | 4.438 | 4.544 |
| | (0.84) | (0.59) | (0.34) | (0.33) |
| SH | −6.773 ** | −5.029 ** | −6.436 *** | −6.444 ** |
| | (−2.08) | (−1.9) | (−2.56) | (−2.43) |
| CC | −44.758 ** | −33.973 | −30.204 | −29.567 |
| | (−1.76) | (−1.64) | (−1.54) | (−1.42) |
| N | 40 | 40 | 40 | 40 |
| F | 6.61 *** | 6.98 *** | 8.56 *** | 7.91 *** |
| Adj-R² | 0.42 | 0.43 | 0.49 | 0.47 |
Note: * significance at the $p < 0.1$ level; ** significance at the $p < 0.05$ level; *** significance at the $p < 0.01$ level. S-M: submodel.
### 5.3. Cluster Network Structure Analysis
The regression analysis of structural embeddedness on patent outputs shows that betweenness centrality and clustering coefficient have positive impacts on innovation outputs. A one percent increase in betweenness centrality or clustering coefficient leads to an increase of 95.7% or 215.6%, respectively, in innovation capability. This indicates that the paths along which information and resources are exchanged are controlled by a few core enterprises. These positive effects are, however, attenuated by the constraint of structural holes.
Subnetwork structures of the core industry cluster (Figures 2–4) were constructed for a comparative analysis that gives an overall perspective on the industry cluster. Seventy-five universities, 41 research institutions, and 137 enterprises have established cooperative relationships with the core enterprises, corresponding to 124, 159, and 218 edges, respectively, in the three subnetwork structures.

**Figure 2.** Network structure between core enterprises and universities.
From Table 8, we can observe that interactive links among the different actors are closest and most intimate in the subnetwork of the core enterprises (40) and other enterprises (Figure 4), which therefore has the largest average betweenness centrality and clustering coefficient compared with the other subnetworks (Figures 2 and 3). E & R, the network of enterprises and research institutions (Figure 3), shows a relatively low network degree centrality and weak relationships between core enterprises and research institutions. The overall average clustering coefficient is small in both Figures 2 and 3, and cooperation between core enterprises and universities or research institutions is loose, as these partners are distributed in small groups. Core enterprises tend to connect with research institutions in small groups or individually rather than in large groups, which does not intensify resource exchange across the overall network. This has important implications for choosing technical cooperators, because small-group connections may benefit individual enterprises, which can maintain sustainable cooperation at lower cost. In Figure 4, interactive cooperation between enterprises is highly intensive, indicating that core enterprises enjoy stronger relationships in the cluster network. These results are reported in Tables 7–9.
**Table 7.** Robust regression analysis of structure embeddedness on economic performance.
| Lnys(2015) | Coef. | Robust Std. Err. | t | p > \|t\| |
|------------|---------|------------------|-------|-----------|
| SJ | 0.077 | 0.045 | 1.67 | 0.103 |
| K | 0.406 | 0.329 | 1.23 | 0.226 |
| BC | 4.544 | 14.054 | 0.32 | 0.748 |
| SH | −6.443 | 1.948 | −3.31 | 0.002 *** |
| CC | −29.568 | 37.832 | −0.78 | 0.440 |
Note: * significance at the $p < 0.1$ level; ** significance at the $p < 0.05$ level; *** significance at the $p < 0.01$ level.
**Table 8.** Network structure features for subgraphs.
| Network | DC | BC | CC |
|------------------|------|------|------|
| C & E (Figure 1) | 0.14 | 0.22 | 0.05 |
| E & U (Figure 2) | 0.18 | 0.30 | 0.00 |
| E & R (Figure 3) | 0.07 | 0.04 | 0.00 |
| E & E (Figure 4) | 0.17 | 0.45 | 0.02 |
Note: DC: degree centrality of the network; BC and CC are the same as in Table 2. C&E: core enterprises and the other three types of actors, namely universities, research institutions, and other enterprises (Figure 1); E&U: enterprises and universities (Figure 2); E&R: enterprises and research institutions (Figure 3); E&E: enterprises and other enterprises (Figure 4).
**Table 9.** Structure features of C1, C3, C20, D1, and D2 in Figure 2.
| NAME | DC | BC | CC | SH |
|------|----|----|----|----|
| C1 | 12 | 0.09 | 0.36 | 0.08 |
| C3 | 13 | 0.12 | 0.39 | 0.08 |
| C20 | 16 | 0.14 | 0.38 | 0.06 |
| D1 | 24 | 0.32 | 0.45 | 0.04 |
| D2 | 24 | 0.29 | 0.43 | 0.04 |
Note: DC: degree centrality; CC: closeness centrality. Others are the same as in Table 2.
Next, we focus on Figure 2. Some enterprises and universities, for example Humanwell Healthcare Group Co. Ltd. (C1), WuXi App Tec (C3), Ma yinglong Pharm (C20), Wuhan University (D1), and Huazhong University of Science and Technology (D2), possess larger degree, betweenness, and closeness centrality and a smaller aggregate constraint of structural holes. This also contributes to the high degree centrality and betweenness centrality of the network structure between enterprises and universities (Figure 2).
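As an illustration of how the node-level indicators in Table 9 can be computed, the sketch below uses the networkx library; the edge list is a made-up stand-in for the enterprise–university subnetwork, and Burt's constraint is used as the structural-hole (SH) measure.

```python
import networkx as nx

# Toy subnetwork standing in for part of Figure 2 (edges are invented).
G = nx.Graph()
G.add_edges_from([("C1", "D1"), ("C1", "D2"), ("C3", "D1"),
                  ("C3", "D2"), ("C20", "D1"), ("C20", "D2")])

dc = dict(G.degree())                 # raw degree, as reported in Table 9
bc = nx.betweenness_centrality(G)     # normalized betweenness centrality
cc = nx.closeness_centrality(G)       # closeness centrality
sh = nx.constraint(G)                 # Burt's aggregate constraint (structural holes)

for node in sorted(G):
    print(f"{node}: DC={dc[node]} BC={bc[node]:.2f} "
          f"CC={cc[node]:.2f} SH={sh[node]:.2f}")
```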
Further, individual enterprise network graphs are extracted from the cluster network graph of the core enterprises (40) for a micro-level analysis. In Figure 5, many SMEs have established relationships with C1 and R28 while lacking cooperation among themselves; this point is also visible in Figures 3 and 4. C1 and R28 therefore control the knowledge-flow paths and act as intermediary actors. SMEs, by contrast, incur financial and time costs in market transactions, so a better business and policy environment is needed to foster cooperation among SMEs.
**Figure 5.** Network structure of C1 and R28.
## 6. Discussion and Conclusions
In this study, we examined whether the technological spillovers manifested in network embeddedness affect enterprises' outputs. We measured the degree of network structural embeddedness by collecting technological cooperation texts, and calculated the scale of technological spillover with network analysis. We further analyzed the characteristics of three different subnetworks. The results show that, with the fast development of the biopharmaceutical industry in the Wuhan East Lake High-Tech Zone, the positive spillover effects will become more prominent if more frequent and efficient cooperation develops, not only among core enterprises but also between SMEs and other types of network actors. The overall network structure can be optimized by strengthening the intermediary function of core enterprises and institutions, by tightening exchanges among SMEs, and by reducing the constraint on core enterprises. The following suggestions may help decision makers on both the supply side (government) and the demand side (enterprises) promote cluster development, and offer a reference for regions in similar stages of rapid industrialization.
First of all, our findings confirm the intermediary function of some core enterprises and research institutions in innovation diffusion. Wuhan is a central city in the Central Region of China with rich educational resources and research institutions; policy-makers (government) should therefore encourage the building of multi-stakeholder technical service platforms, particularly resource-integration and information-transmission platforms such as the Instruments Sharing Platform and the Key Technology Service Platform. The involvement of enterprises in these platforms is essential.
Secondly, the analysis shows that cooperation among SMEs, and between SMEs and other types of network actors, needs to be strengthened. The regression of structural embeddedness on economic performance demonstrates that the agglomeration effects on economic performance are not consistent with our predictions. This suggests that SMEs should be supported with policies in the areas of capital, knowledge, and technology, and that a better institutional environment is needed.
Finally, the regressions of structural embeddedness on innovation outputs and on economic performance demonstrate that the aggregate constraint of structural holes generates negative impacts on core enterprises, and that degree centrality plays a limited role in improving outputs. Core enterprises should therefore reduce their redundant links with other actors, and policy-makers should scale back their support for the "key controlled enterprise".
This study suggests paths for future research. First, future studies could collect data on innovation clusters from other regions in China, enabling conclusions that are generalizable across geographic locations. Second, this study mainly used patents to measure innovation performance; future studies should pursue more nuanced measures of innovation performance.
**Author Contributions:** J.Z. contributes to the overall development of the idea, research design and writing of the article. D.L. contributes to the statistical analysis, literature review and initial draft. H.Y. contributes to theoretical framing, writing and revisions.
**Funding:** The authors acknowledge financial support from the youth fund project of the National Natural Science Foundation of China (Grant No: 71503268), the Fundamental Research Funds for the Central Universities from Zhongnan University of Economics and Law (Grant No: 2722019CG061), Zhongnan University of Economics and Law Graduate Education Achievement Award Cultivation Project (Grant No: CGPY201904), Interdisciplinary Innovation Research Project (Grant No: 2722019JX002), and the National Social Science Foundation of China (Grant No: CFA150151).
**Conflicts of Interest:** The authors declare no conflict of interest.
**References**
1. Porter, M.E. Clusters and the New Economics of Competition. *Harv. Bus. Rev.* **1998**, *98*, 77–91.
2. Crafts, N.; Venables, A. Globalization in History: A Geographical Perspective. In *Globalization in Historical Perspective*; University of Chicago Press: Chicago, IL, USA, 2001.
3. Caniels, M.C.J.; Romijn, H.A. Firm-level knowledge accumulation and regional dynamics. *Ind. Corp. Chang.* **2003**, *12*, 1253–1278. [CrossRef]
4. Zaheer, A.; Bell, G.G. Benefiting from network position: firm capabilities, structural holes, and performance. *Strat. Manag. J.* **2005**, *26*, 809–825. [CrossRef]
5. Huggins, R.; Johnston, A. Knowledge Flow and Inter-firm Networks: The Influence of Network Resources, Spatial Proximity and Firm Size. *Entrep. Reg. Dev.* **2010**, *22*, 457–483. [CrossRef]
6. Seppänen, R.; Blomqvist, K.; Sundqvist, S. Measuring inter-organizational trust—A critical review of the empirical research in 1990–2003. *Ind. Mark. Manag.* **2007**, *36*, 249–265. [CrossRef]
7. Tsai, W. Knowledge Transfer in Intraorganizational Networks: Effects of Network Position and Absorptive Capacity on Business Unit Innovation and Performance. *Acad. Manag. J.* **2001**, *44*, 996–1004.
8. Organisation for Economic Co-operation and Development (OECD). *Innovative Clusters—Drivers of National Innovation Systems*; OECD: Paris, France, 2001.
9. Melo, P.C.; Graham, D.J.; Noland, R.B. A meta-analysis of estimates of urban agglomeration economies. *Reg. Sci. Urban Econ.* **2009**, *39*, 332–342. [CrossRef]
10. Owen-Smith, J.; Powell, W.W. Knowledge Networks as Channels and Conduits: The Effects of Spillovers in the Boston Biotechnology Community. *Organ. Sci.* **2004**, *15*, 5–21. [CrossRef]
11. Powell, W.W.; Koput, K.W.; Smith-Doerr, L. Interorganizational collaboration and the locus of innovation: Network of learning in biotechnology. *Adm. Sci. Q.* **1996**, *41*, 116–145. [CrossRef]
12. Iammarino, S.; McCann, P. The structure and evolution of industrial clusters: Transactions, technology and knowledge spillovers. *Res. Policy* **2006**, *35*, 1018–1036. [CrossRef]
13. Tallman, S.; Jenkins, M.; Henry, N.; Pinch, S. Knowledge, Clusters, and Competitive Advantage. *Acad. Manag. Rev.* **2004**, *29*, 258–271. [CrossRef]
14. Krätke, S. Network Analysis of Production Clusters: The Potsdam/Babelsberg Film Industry as an Example. *Eur. Plan. Stud.* **2002**, *10*, 27–54. [CrossRef]
15. Bell, G.G. Clusters, networks, and firm innovativeness. *Strat. Manag. J.* **2005**, *26*, 287–295. [CrossRef]
16. Wang, W. Does the controllability of core enterprises in industrial innovation networks promote knowledge spillover? *J. Manag. World* **2015**, *6*, 101–109.
17. Chi, R. A study on node links of SMEs’ innovation network in region and its effectiveness. *J. Manag. World* **2007**, *1*, 105–121.
18. Fan, Q. A study on structure embeddedness on cluster enterprises’ innovation performance. *J. Sci. Stud.* **2010**, *12*, 1891–1900.
19. Ahuja, G. Collaboration networks, structural holes, and innovation: A longitudinal study. *Adm. Sci. Q.* **2000**, *45*, 425–453. [CrossRef]
20. Qian, X. Firm Network Position, Indirect Ties, and Innovative Performance. *China Ind. Econ.* **2010**, *2*, 78–88.
21. Jiang, T. Network Site Technological Learning and Innovation Performance of Cluster Enterprises—Based on Empirical Study of Shaoxing Textile Industry Cluster. *J. Econ. Geogr.* **2012**, *7*, 87–106.
22. Caputo, F.; Livieri, B.; Venturelli, A. Intangibles and Value Creation in Network Agreements: analysis of Italian firms. *Manag. Control* **2014**, *45*, 45–70. [CrossRef]
23. De Bernardi, P.; Bertello, A.; Venuti, F. Online and On-Site Interactions within Alternative Food Networks: Sustainability Impact of Knowledge-Sharing Practices. *Sustainability* **2019**, *11*, 1457. [CrossRef]
24. Liu, X. Network Embeddedness, Knowledge Acquisition and Firms’ Innovation Capabilities. *J. Econ. Manag.* **2015**, *3*, 150–159.
25. Veltri, S.; Venturelli, A.; Mastroleo, G. Measuring intellectual capital in a firm belonging to a strategic alliance. *J. Intellect. Cap.* **2015**, *16*, 174–198. [CrossRef]
26. Chen, W. Empirical Research on Innovation Networks Consisting of Industry-University-Research Institute in Regional Equipment Manufacturing Industry: A Perspective of Network Structure and Network Cluster. *China Soft Sci.* **2012**, *2*, 96–107.
27. Romano, A.; Passiante, G.; Elia, V. A Model of Connectivity for Regional Development in the Learning Economy. In Proceedings of the European Regional Science Association (40th) European Monetary Union and Regional Policy, Barcelona, Spain, 29 August–1 September 2000.
28. Gulati, R. Network location and learning: the influence of network resources and firm capabilities on alliance formation. *Strat. Manag. J.* **1999**, *20*, 397–420. [CrossRef]
29. Zahra, S.A.; George, G. Absorptive Capacity: A Review, Reconceptualization, and Extension. *Acad. Manag. Rev.* **2002**, *27*, 185–203. [CrossRef]
30. Uzzi, B. Social Structure and Competition in Interfirm Networks: The Paradox of Embeddedness. *Adm. Sci. Q.* **1997**, *42*, 35–67. [CrossRef]
31. Dore, R. Goodwill and the Spirit of Market Capitalism. *Br. J. Sociol.* **1983**, *34*, 459–482. [CrossRef]
32. Romo, F.P.; Schwartz, M. The Structural Embeddedness of Business Decisions: The Migration of Manufacturing Plants in New York State, 1960 to 1985. *Am. Sociol. Rev.* **1995**, *60*, 874–907. [CrossRef]
33. Lee, T.W.; Mitchell, T.R.; Sablynski, C.J.; Burton, J.P.; Holtom, B.C. The Effects of Job Embeddedness on Organizational Citizenship, Job Performance, Volitional Absences, and Voluntary Turnover. *Acad. Manag. J.* **2004**, *47*, 711–722.
34. Zak, P.J.; Knack, S. Trust and Growth. *Econ. J.* **2001**, *111*, 295–321. [CrossRef]
35. Zeng, J.; Liu, T.; Feiock, R.; Li, F. The impacts of China’s provincial energy policies on major air pollutants: A spatial econometric analysis. *Energy Policy* **2019**, *132*, 392–403. [CrossRef]
36. Schilling, M.A.; Phelps, C.C. Interfirm Collaboration Networks: The Impact of Large-Scale Network Structure on Firm Innovation. *Manag. Sci.* **2007**, *53*, 1113–1126. [CrossRef]
37. Burt, R.S. *Structural Holes: The Social Structure of Competition*; Harvard University Press: Cambridge, MA, USA, 1992.
38. Watts, D.J. Networks, Dynamics, and the Small-World Phenomenon. *Am. J. Sociol.* **1999**, *105*, 493–527. [CrossRef]
|
The International George Papatheodorou Symposium
Patras, September 17–18, 1999
Proceedings
FAST ION CONDUCTING GLASSES
C.P. Varsamis, E.I. Kamitsos and G.D. Chryssikos
Theoretical and Physical Chemistry Institute,
National Hellenic Research Foundation,
48 Vass. Constantinou Ave., Athens 116 35, Greece
AgI-doped fast ion conducting borate glasses $x\text{AgI}-(1-x)[\text{Ag}_2\text{O-nB}_2\text{O}_3]$ ($n=2$, 0.5) in bulk and thin film forms were studied by infrared spectroscopy. The short-range order structure of the network was found to be directly affected by AgI, and this was accounted for by isomerisation and disproportionation reactions between triangular and tetrahedral borate species for the diborate ($n=2$) and pyroborate ($n=0.5$) families, respectively. This effect is reflected in the decrease of the glass transition and fictive temperature upon AgI addition. Analysis of the transmission spectra of thin films for diborate glasses showed that optical and thermal history effects lead to spectral differences between thin films and bulk samples. Far-infrared analysis for pyroborate glasses revealed two distinct environments for silver ions formed by oxygen atoms and iodide anions. At high AgI contents the silver-iodide sites are probably growing into AgI-like clusters. In the diborate family the presence of a range of oxide, iodide and mixed O/I environments for silver ions is supported by the far-infrared data. Thus, the nature of sites exploited by silver ions in AgI-containing borate glasses depends on both $\text{Ag}_2\text{O}$ and AgI content.
Introduction
Fast ion conducting (FIC) glasses are challenging materials due to their high ionic conductivity and the possibility for applications in electrochemical devices [1-3]. AgI-doped borate glasses $x\text{AgI}-(1-x)[\text{Ag}_2\text{O-nB}_2\text{O}_3]$ have been widely studied as model FIC systems. These glasses are stable, exhibit high ionic conductivity and can be prepared easily in a wide glass-forming region [4,5]. Despite the numerous studies devoted to such glasses, their structure and ion conduction mechanism are still a subject of debate [6-15].
It has been suggested that the short-range order (SRO) of the borate network changes in a systematic way upon increasing the amount of AgI while keeping the $\text{Ag}_2\text{O}/\text{B}_2\text{O}_3$ ratio constant [5,6], while other studies indicate that AgI addition does not induce modifications of the SRO of the glass [7-9]. The state of AgI in the borate network and its effect on the ion transport process are also matters of controversy. It has been proposed that AgI forms a pseudophase within the glass matrix, and that silver ions migrate along pathways established by iodide ions [1,10,11]. On the contrary, other studies suggest that AgI is dispersed in interstices of the host matrix [9,12,13], and that the conductivity enhancement is due to the expansion of the glass network caused by AgI-doping [14,15].
The purpose of this paper is to investigate both the network structure and the silver ions hosting sites by infrared (IR) reflectance and transmittance spectroscopy of two families of $x\text{AgI}-(1-x)[\text{Ag}_2\text{O-nB}_2\text{O}_3]$ glasses; with $n=2$ (diborate) and $n=0.5$ (pyroborate). Analysis of the mid-IR spectra reveals details of the SRO of the borate network, while the far-IR profiles yield information on the nature of sites hosting silver cations. The simultaneous analysis of transmittance and reflectance spectra allows comparison of the results of the two different techniques, in relation with the findings of previous works reported in the literature [6,8].
Experimental
Glasses were prepared using stoichiometric amounts of reagent grade AgI, AgNO$_3$ and B$_2$O$_3$, mixed in Pt crucibles and heated at 650°C until the NO$_x$ gas had dissipated. Then, the mixture was melted at 850-950°C for 25-30 min, depending on composition. Bulk samples for specular reflectance measurements were obtained by splat quenching techniques in the composition range $0 \leq x \leq 0.65$ for the diborate, and $0.4 \leq x \leq 0.6$ for the pyroborate family. From the same batch used to prepare the bulk samples, thin films were obtained in the diborate family only for $0.1 \leq x \leq 0.60$ by employing the procedure described by Liu and Angell [16].
Infrared spectra were measured on a Fourier-transform vacuum spectrometer (Bruker 113v) in the range 25-5000 cm$^{-1}$. Specular reflectance spectra were measured in an 11° off-normal mode and transmittance spectra were recorded with the plane of the film placed normal to the incident beam.
Results and discussion
Infrared spectra of bulk samples
The specular reflectance spectra of bulk samples were analyzed by the Kramers-Kronig transformation to calculate the absorption coefficient spectra, $\alpha(\nu)$. The $\alpha(\nu)$ spectra of $x\text{AgI}-(1-x)[\text{Ag}_2\text{O-2B}_2\text{O}_3]$ glasses reported in Fig. 1 are dominated in the mid-IR region by strong absorption envelopes at ca. 1350 and 1000 cm$^{-1}$. These features are attributed to the asymmetric stretching mode of borate triangles B$\mathcal{O}_3$ and B$\mathcal{O}_2\mathcal{O}^-$ (1350 cm$^{-1}$) and tetrahedral units, B$\mathcal{O}_4^-$ (1000 cm$^{-1}$), respectively ($\mathcal{O}=$oxygen atom bridging two boron atoms) [6,17]. Though the AgI content varies in a rather broad range, $0 \leq x \leq 0.65$, such spectra have quite similar bandshapes, suggesting that AgI addition does not cause the formation of new structural units in the borate network. However, it is noted that AgI addition induces a change in the relative intensities of the envelopes at ca. 1350 and 1000 cm$^{-1}$ in favor of the latter, suggesting the occurrence of AgI-induced structural rearrangements mostly related to changes in the relative population of triangular and tetrahedral borate units. The short-range order (SRO) of the binary $\text{Ag}_2\text{O-2B}_2\text{O}_3$ glass contains B$\mathcal{O}_3$ and B$\mathcal{O}_2\mathcal{O}^-$ triangular units and B$\mathcal{O}_4^-$ tetrahedra [18]. To quantify the network structure of AgI-doped diborate glasses the fractions $X_3$, $X_2$ and $X_4$ of the local structural units B$\mathcal{O}_3$, B$\mathcal{O}_2\mathcal{O}^-$ and B$\mathcal{O}_4^-$, respectively, were calculated. For this purpose we employed the recently introduced model [19,20] that involves the deconvolution of absorption profiles (see inset of Fig. 1) and the assignment of component bands to borate polyhedra. In Fig. 2 are reported the calculated fractions $X_2$ and $X_4$ as a function of AgI content, with $X_3$ being composition independent ($X_3=0.5$). As $x$ increases from 0 to 0.65, $X_4$ varies from 0.34 to 0.39, while $X_2$ from ca. 0.16 to 0.11 and these changes are considerably larger than the estimated experimental error (ca. 3%). The result for the $\text{Ag}_2\text{O-2B}_2\text{O}_3$ glass ($X_4=0.34$) is in excellent agreement with the corresponding value obtained from NMR spectroscopic data ($N_4=0.35$) [21].
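For concreteness, a schematic numerical version of this reflectance analysis is sketched below; the principal-value integral is handled crudely by skipping the singular grid point, the 11° off-normal geometry is approximated as normal incidence, and the Lorentz-oscillator self-check uses made-up parameters, so this is a sketch of the method rather than the authors' actual pipeline.

```python
import numpy as np

def kk_absorption(nu, R):
    """Schematic Kramers-Kronig phase retrieval: reflectance R(nu) -> alpha(nu)."""
    lnr = 0.5 * np.log(R)                     # ln|r|, with |r| = sqrt(R)
    theta = np.empty_like(nu)
    for i, nui in enumerate(nu):              # principal-value integral on the grid
        mask = np.arange(nu.size) != i        # crude handling of the singularity
        theta[i] = (-2.0 * nui / np.pi) * np.trapz(
            lnr[mask] / (nu[mask] ** 2 - nui ** 2), nu[mask])
    rho = np.sqrt(R) * np.exp(1j * theta)     # complex amplitude reflectivity
    N = (1.0 + rho) / (1.0 - rho)             # normal-incidence inversion to n + ik
    return 4.0 * np.pi * nu * N.imag          # alpha(nu) in cm^-1 if nu is in cm^-1

# Self-check against a single Lorentz oscillator (illustrative parameters only).
nu = np.linspace(5.0, 4000.0, 4000)
eps = 2.25 + 1.5 * 1000.0**2 / (1000.0**2 - nu**2 - 40j * nu)
N_true = np.sqrt(eps)
R = np.abs((N_true - 1) / (N_true + 1)) ** 2
alpha = kk_absorption(nu, R)   # ~ 4*pi*nu*Im(N_true) away from the grid edges
```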
These results demonstrate the effect of AgI on the SRO of the glass, and this can be understood in terms of the chemical equilibrium between the isomeric B$\mathcal{O}_2\mathcal{O}^-$ and B$\mathcal{O}_4^-$ units:
$$\text{B}\mathcal{O}_4^- \rightleftharpoons \text{B}\mathcal{O}_2\mathcal{O}^- \quad (1)$$
The fractions reported in Fig. 2 show that AgI addition shifts the equilibrium to the left. This SRO change can be quantified by estimating the quasi-equilibrium constant, $K_{eq}$, at constant pressure for reaction (1):
$$K_{eq} = X_2 / X_4 \quad (2)$$
It was found that $K_{eq}$ and the corresponding glass transition temperature, $T_g$, of diborate glasses show the same dependence on AgI content [20]. Such an interrelation between $T_g$ and SRO allowed us to estimate, from the slope of the ln$K_{eq}$ vs. 1/$T_g$ plot, the enthalpy change associated with reaction (1), $\Delta H=32 \pm 5$ kJ/mol of boron. This result is in good agreement with reported data for the reaction B$\mathcal{O}_4^- \rightleftharpoons$ B$\mathcal{O}_3 + \text{NBO}$ in sodium borate melts ($\Delta H=35 \pm 12$ kJ/mol of boron, NBO = non-bridging oxygen) [22].
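A minimal numerical sketch of this van't Hoff-type estimate follows; the $K_{eq}$ and $T_g$ arrays are illustrative placeholders chosen only to reproduce the order of magnitude quoted above, not the measured values.

```python
import numpy as np

R_GAS = 8.314e-3                                      # kJ mol^-1 K^-1
Keq = np.array([0.47, 0.39, 0.32, 0.26, 0.21])        # X2/X4, illustrative values
Tg = np.array([560.0, 545.0, 530.0, 515.0, 500.0])    # K, illustrative values

# ln K = -dH/(R*T) + const, so the slope of ln K vs 1/T gives -dH/R.
slope, intercept = np.polyfit(1.0 / Tg, np.log(Keq), 1)
delta_H = -R_GAS * slope
print(f"Estimated enthalpy change: {delta_H:.0f} kJ/mol of boron")
```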
The corresponding $\alpha(\nu)$ spectra of pyroborate glasses, $x\text{AgI}-(1-x)[\text{Ag}_2\text{O-0.5B}_2\text{O}_3]$, were found to be dominated by strong envelopes at ca. 955, 1030, 1245 and 1315 cm$^{-1}$ [19]. These features manifest the presence of borate tetrahedra, B$\mathcal{O}_4^-$ (955 and 1030 cm$^{-1}$), orthoborate triangles, BO$_3^{3-}$ (1245 cm$^{-1}$), and pyroborate units, B$_2$O$_5^{4-}$ (1315 cm$^{-1}$) [17]. Though in pyroborate glasses the fractions cannot be determined unambiguously [20], it was found that AgI affects the relative intensity of mid-IR bands. In particular, all glasses present strong bands due to B$\mathcal{O}_4^-$ tetrahedra, despite the fact that the fraction of B$\mathcal{O}_4^-$ units is negligible in binary pyroborate glasses [17]. Thus, the effect of AgI in the pyroborate family can be related to the co-existence of B$_2$O$_5^{4-}$, BO$_3^{3-}$ and B$\mathcal{O}_4^-$ species, according to the disproportionation reaction:

$$\text{B}_2\text{O}_5^{4-} \rightleftharpoons \text{BO}_3^{3-} + \text{B}\mathcal{O}_4^- \quad (3)$$
It appears that the presence of AgI results in the stabilization of pyroborate species, even though the corresponding silver pyroborate crystal or glass ($\text{Ag}_2\text{O-0.5B}_2\text{O}_3$) cannot, to the best of our knowledge, be synthesized.
**Infrared spectra of thin films**
Transmittance spectra of thin films in the diborate family are reported in Fig.3 and show the same absorption features as the corresponding bulk samples (Fig.1). In addition, these spectra exhibit interference fringes above ca. 1500 cm$^{-1}$ where the films become quasi-transparent, which extend also to lower frequencies and overlap with the absorption peaks of the glass network. Hence, a careful analysis of the transmittance spectral data requires taking into account the interference pattern.
Transmittance spectra, $T(\nu)$, were simulated by using a rigorous expression which depends on the real, $n(\nu)$, and imaginary, $k(\nu)$, parts of the complex refractive index and on the thickness, $d$, of the film [23]. As discussed in detail elsewhere [20], the $n(\nu)$ and $k(\nu)$ functions obtained by the Kramers-Kronig transformation of the bulk sample reflectivity can be employed to fit the experimental $T(\nu)$ spectra, with the film thickness as the only variable parameter. Typical results of the simulation are shown in Fig. 3 for comparison with the experimental spectra. It turns out that an appropriate analysis of optical measurements on thin films leads to similar conclusions for the SRO structure. It should be mentioned, however, that the crucial parameter in the case of thin film analysis is the film thickness. In fact, the measured spectrum of a finite film is a convolution of the spectrum of a semi-infinite sample and the constructive interference due to multiple reflections between the two interfaces of the film. Such interference introduces an oscillatory term with period and amplitude dependent on thickness, which can alter the shape of the spectrum in a rather complicated way controlled by film thickness [20]. So, a quantitative comparison between the results obtained on bulk and thin film samples can be made only when the latter are carefully analyzed to eliminate the dependence on the film thickness.
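As an illustration of the kind of expression involved, the sketch below evaluates the standard multiple-reflection (Airy) transmittance of a free-standing absorbing film at normal incidence; it is a stand-in for the rigorous formula of Ref. [23], with illustrative optical constants.

```python
import numpy as np

def film_transmittance(nu, n, k, d_cm):
    """Normal-incidence transmittance of a free-standing film (Airy formula)."""
    N = n + 1j * k                           # complex refractive index n + ik
    r = (1.0 - N) / (1.0 + N)                # vacuum -> film amplitude reflection
    beta = 2.0 * np.pi * nu * N * d_cm       # complex phase across the film
    t = (1.0 - r**2) * np.exp(1j * beta) / (1.0 - r**2 * np.exp(2j * beta))
    return np.abs(t) ** 2

# Illustrative fringes in the quasi-transparent region for a ~2 um film;
# in a fit to measured T(nu), d_cm would be the only free parameter.
nu = np.linspace(1500.0, 5000.0, 1000)       # wavenumber grid, cm^-1
T = film_transmittance(nu, n=1.6, k=1e-3, d_cm=2.0e-4)
```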
In addition, it is underlined that samples in bulk and thin film forms of the same composition may have different fictive temperatures due to their different thermal histories. This directly affects equilibrium (1) and consequently the SRO structure of the glass. While bulk samples were obtained by splat quenching the melt immediately after removal from the furnace, thin films were blown from a melt drop that was cooled to obtain an appropriate viscosity. It follows that thin films may have lower fictive temperatures compared to bulk samples, and larger $X_4$ according to equilibrium (1). This may also explain the observed small differences between experimental and simulated spectra in Fig. 3.
**Far-IR spectra of xAgI-(1-x)[Ag$_2$O-nB$_2$O$_3$] glasses**
In Fig.4 are shown the expanded far-IR absorption coefficient profiles of xAgI-(1-x)[Ag$_2$O-nB$_2$O$_3$] glasses. These are dominated by a broad asymmetric profile due to the vibration of Ag cations in their hosting sites [6,20]. The nature of cationic sites can be studied by deconvoluting the far-IR spectra into Gaussian component bands. Typical deconvolution results by using the minimum number of components are included in Fig.4. The binary silver diborate glass (x=0) was fitted by three component bands at ca. 52 ($\nu_L$), 145 ($\nu_H$) and 272 cm$^{-1}$, as in previous studies [18]. The $\nu_L$ and $\nu_H$ bands are attributed to Ag-O vibrations in oxide environments and the third component is assigned to a deformation mode of the borate network [18,19]. The profiles of AgI-containing diborate glasses were fitted also with three components (Fig.4), with frequencies between 45-50 cm$^{-1}$ ($\nu_L$), 128-146 cm$^{-1}$ ($\nu_H$) and 266-272 cm$^{-1}$. AgI addition results in the relative increase of the intensity of the $\nu_H$ band, and in the progressive shift of its frequency towards lower values (146 cm$^{-1}$ for x=0 to 125 cm$^{-1}$ for x=0.65).
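A minimal sketch of such a Gaussian deconvolution with scipy is given below; the synthetic spectrum and initial guesses are illustrative, with band centers placed near the component frequencies quoted above for the binary diborate glass.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(nu, amp, center, width):
    return amp * np.exp(-((nu - center) / width) ** 2)

def three_bands(nu, *p):
    # Sum of three Gaussian components, p = (A, c, w) for each band.
    return sum(gaussian(nu, *p[3*i:3*i+3]) for i in range(3))

# Synthetic far-IR profile with bands near ca. 52, 145 and 272 cm^-1.
nu = np.linspace(20.0, 400.0, 400)
rng = np.random.default_rng(0)
alpha = three_bands(nu, 1.0, 52.0, 28.0, 1.2, 145.0, 48.0, 0.5, 272.0, 55.0)
alpha = alpha + 0.01 * rng.normal(size=nu.size)

p0 = [1.0, 50.0, 30.0, 1.0, 140.0, 50.0, 0.5, 270.0, 60.0]   # initial guesses
popt, _ = curve_fit(three_bands, nu, alpha, p0=p0)
nu_L, nu_H = popt[1], popt[4]    # fitted centers of the two Ag-O bands
```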
The pyroborate far-IR spectra present remarkable differences compared to the diborate ones and were described by four distinct components. Three components at ca. 40, 175 and 260 cm$^{-1}$ are analogous to those found in the diborate family, while the one at ca. 100 cm$^{-1}$ is a new feature. This new component increases in intensity and shifts to higher frequencies upon AgI addition, suggesting that it could be assigned to vibrations of Ag cations in primarily iodide environment [6,20]. This assignment is further supported by the fact that crystalline AgI has a strong infrared absorption at ca. 110 cm$^{-1}$, attributed to the Ag-I stretching mode, $\nu_{Ag-I}$, in a tetrahedral environment of I$^-$ anions [24]. The increase of $\nu_{Ag-I}$ frequency from 96 cm$^{-1}$ (x=0.4) to 107 cm$^{-1}$ (x=0.6) may indicate a progressive organization and growth of silver iodide sites into a separate AgI-like pseudophase though its size cannot be estimated from the present infrared data. However, a recent field emission scanning electron microscopy study of orthoborate glasses 0.75AgI-0.25[Ag$_2$O-0.33B$_2$O$_3$] revealed the presence of AgI-rich amorphous particles with diameter 40-60 nm [25].
In view of the above results, the downshift of $\nu_H$ upon AgI addition in diborate glasses can be explained in two ways. First, the band with frequency $\nu_H$ is assumed to be a convolution of two bands, at ca. 146 cm$^{-1}$ (oxide band) and 110 cm$^{-1}$ (iodide band). AgI addition enhances the intensity of the latter band and the peak maximum of the convoluted band shifts naturally towards lower frequency values. A second explanation is based on a progressive substitution of oxygen by iodide ions in the coordination sphere of Ag cations as x increases. This would lead to the formation of mixed oxyiodide sites, and the combined effect of the increasing reduced mass and the decreasing force constant would be a net decrease of $\nu_H$ with AgI addition [20]. Both models, i.e. (a) formation of separate Ag-oxide and Ag-iodide sites, and (b) formation of mixed Ag-O/I environments, predict the observed trend of $\nu_H$ with AgI addition. It appears that for the diborate family an intermediate situation is perhaps more realistic for the nature of sites occupied by silver cations.
Concluding remarks
Superionic borate glasses $x\text{AgI}-(1-x)[\text{Ag}_2\text{O-nB}_2\text{O}_3]$ in bulk and thin film forms were studied by infrared spectroscopy to investigate the short-range order (SRO) of the network and the nature of sites occupied by silver ions.
The network structure of diborate glasses ($n=2$) was found to consist of B$\mathcal{O}_2\mathcal{O}^-$ and B$\mathcal{O}_3$ triangles and B$\mathcal{O}_4^-$ tetrahedra. The calculated fractions of B$\mathcal{O}_2\mathcal{O}^-$ and B$\mathcal{O}_4^-$ units, $X_2$ and $X_4$, depend on AgI content and this was explained in terms of the equilibrium $\text{B}\mathcal{O}_4^- \leftrightarrow \text{B}\mathcal{O}_2\mathcal{O}^-$. This direct influence of AgI on the SRO of the glass was correlated with the glass transition and fictive temperature at which the supercooled liquid is frozen into the glassy state. Similarly, pyroborate glasses ($n=0.5$) contain orthoborate triangles, BO$_3^{3-}$, pyroborate dimers, B$_2$O$_5^{4-}$, and B$\mathcal{O}_4^-$ tetrahedra. AgI addition was found to favor the reaction $\text{B}_2\text{O}_5^{4-} \leftrightarrow \text{BO}_3^{3-} + \text{B}\mathcal{O}_4^-$, and to lead to the stabilization of pyroborate species in the glassy state.
The study of the transmittance spectra of thin films in the diborate composition showed that a direct comparison between bulk samples and thin films should be avoided. This is due to the fact that the shape of the transmittance spectra of films depends on both film thickness and differences in thermal history between bulk samples and films.
The far-IR analysis of pyroborate glasses revealed the presence of distinct oxide and iodide environments for silver ions, without excluding the presence of silver sites with mixed O/I coordination. It was also found that an AgI-like pseudophase might develop at high AgI contents. In the diborate family the situation is rather complicated and a realistic scenario involves the existence of a range of sites for silver cations, i.e. oxide, iodide and mixed oxyiodide sites. Thus, the nature of silver ionic sites appears to depend on both $\text{Ag}_2\text{O}/\text{B}_2\text{O}_3$ ratio, i.e. the degree of modification of the borate network, and AgI content.
References
[1] T. Minami, *J. Non-Cryst. Solids* **73**, 273 (1985).
[2] C.A. Angell, *Ann. Rev. Phys. Chem.* **43**, 693 (1992).
[3] M.D. Ingram, *Current Opinion in Solid State and Materials Science* **2**, 399 (1997).
[4] A. Magistris, G. Chiodelli, A. Schiraldi, *Electrochim. Acta* **23**, 585 (1978).
[5] T. Minami, Y. Ikeda, T. Tanaka, *J. Non-Cryst. Solids* **52**, 159 (1982).
[6] E.I. Kamitsos, J.A. Kapoutsis, G.D. Chryssikos, J.M. Hutchinson, A.J. Pappin, M.D. Ingram, J.A. Duffy, *Phys. Chem. Glasses* **36**, 141 (1995).
[7] G. Chiodelli, A. Magistris, M. Villa, J.L. Bjorkstam, *J. Non-Cryst. Solids* **51**, 143 (1982).
[8] J.J. Hudgens, S.W. Martin, *Phys. Rev. B* **53**, 5348 (1996).
[9] J. Swenson, L. Borjesson, R.L. McGreevy, W.S. Howells, *Phys. Rev. B* **55**, 11236 (1997).
[10] G. Carini, M. Cutroni, M. Frederico, G. Galli, G. Tripodo, *Phys. Rev. B* **30**, 7212 (1984).
[11] C. Rousselot, J.P. Malugani, R. Mercier, M. Tachez, P. Chieux, A.P. Pappin, M.D. Ingram, *Solid State Ionics* **78**, 211 (1995).
[12] G. Licheri, A. Musinu, G. Paschina, G. Piccaluga, G. Pinna, A. Magistris, *J. Chem. Phys.* **85**, 500 (1986).
[13] F. Rocca, G. Dalba, P. Fornasini, A. Tomasi, *Solid State Ionics* **53-56**, 1253 (1992).
[14] J. Swenson, L. Borjesson, *Phys. Rev. Lett.* **77**, 3569 (1996).
[15] J. Swenson, L. Borjesson, *J. Non-Cryst. Solids* **232-234**, 658 (1998).
[16] C. Liu, C.A. Angell, *J. Chem. Phys.* **93**, 7378 (1990).
[17] E.I. Kamitsos, A.P. Patsis, G.D. Chryssikos, *J. Non-Cryst. Solids* **152**, 246 (1993).
[18] J.A. Kapoutsis, E.I. Kamitsos, G.D. Chryssikos, in *Borate Glasses Crystals and Melts*, Eds. A.C. Wright, S.A. Feller and A.C. Hannon, Soc. Glass Technology (1997) p.p. 303-312.
[19] C.P. Varsamis, E.I. Kamitsos, G.D. Chryssikos, J.A. Kapoutsis, in *Proc. 18th Inter. Congress on Glass*, Eds. M.K. Choudham, N.T. Huff and C.H. Drummond (Am. Ceram. Soc. 1998) p.p. 39-44.
[20] C.P. Varsamis, E.I. Kamitsos, G.D. Chryssikos, *Phys. Rev. B* in press.
[21] K.S. Kim, P.J. Bray, *J. Non-Cryst. Solids* **111**, 67 (1989).
[22] S. Sen, Z. Xu, J.F. Stebbins, *J. Non-Cryst. Solids* **226**, 29 (1998).
[23] S. Cunsolo, P. Dore, C.P. Varsamis, *Appl. Optics* **31**, 4554 (1992).
[24] G. Burns, F.H. Dacol, M.W. Shafer, *Phys. Rev. B* **16**, 1416 (1977).
[25] M. Tatsumisago, N. Itakura, T. Minami, *J. Non-Cryst. Solids* **232-234**, 267 (1998).
|
Multiple Unfoldings of Orbifold Singularities:
Engineering Geometric Analogies to Unification
Jacob L. Bourjaily*
Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544
(Dated: 2nd April 2007)
Katz and Vafa [1] showed how charged matter can arise geometrically by the deformation of ADE-type orbifold singularities in type IIA, M-theory, and F-theory compactifications. In this paper we use those same basic ingredients, used there to geometrically engineer specific matter representations, here to deform the compactification manifold itself in a way which naturally complements many features of unified model building. We realize this idea explicitly by deforming a manifold engineered to give rise to an $SU_5$ grand unified model into one giving rise to the Standard Model. In this framework, the relative local positions of the singularities giving rise to Standard Model fields are specified in terms of the values of a small number of complex structure moduli which deform the original manifold, greatly reducing the arbitrariness of their relative positions.
I. INTRODUCTION
One of the ways in which a gauge theory with massless charged matter can arise in type IIA, M-theory, and F-theory is known as geometrical engineering. In this framework, gauge theory at low energy arises from co-dimension four singular surfaces in the compactification manifold [2] and charged matter arises as isolated points (curves in F-theory) on these surfaces over which the singularity is enhanced. Katz and Vafa [1] constructed explicit examples of local geometry which would give rise to different representations of various gauge groups. Their work was presented explicitly in the language of type IIA or F-theory, but the general results have been shown to apply much more broadly to M-theory as well [3, 4, 5, 6, 7].
The picture of matter and gauge theory arising from pure geometry via singular structures has been used very fruitfully in much of the progress of M-theory phenomenology. In [8] Witten engineered an interesting phenomenological model in M-theory which could possibly solve the Higgs doublet-triplet splitting problem; this model was explored in great detail together with Friedmann in [9]. There, the explicit topology of the ADE-singular surface and the relative locations of all the isolated conical singularities was motivated by phenomenology—the description of the geometry of the singularities themselves was taken for granted.
Unlike model building with D-branes, for example, geometrical engineering as it has been understood provides little information about the number, type, and relative locations of the many different singularities needed for any phenomenological model. This information must either come \textit{a posteriori} from phenomenological success or via duality to a concrete string model. But recent successes in M-theory model building (for example, [10, 11]) motivate a new look at how to describe the relative structure of singularities—at least locally—within the framework of geometrical engineering itself.
In this paper, we reduce the apparent arbitrariness of the number and relative positions of the singularities required by low-energy phenomenology by showing how they can be obtained by deforming a smaller number of singularities in a more unified model. In section II we review the ingredients of geometrical engineering as described in [1]. The basic framework is then interpreted in a novel way in section III to relate manifolds with matter singularities to those with more or less symmetry. The idea is used explicitly to deform an $SU_5$ grand unified model into the Standard Model.
To be clear, as in [1] our results apply only strictly to $\mathcal{N} = 2$ models from type IIA compactifications or $\mathcal{N} = 1$ models from F-theory compactifications\footnote{We essentially describe non-compact Calabi-Yau threefolds which are $K3$-fibrations over $\mathbb{C}^1$. If the $\mathbb{C}^1$-base is fibred over $\mathbb{CP}^1$ as an $\mathcal{O}(-2)$ bundle, for example, then the total space will be a Calabi-Yau four-fold upon which F-theory can be compactified, giving rise to an $\mathcal{N} = 1$ theory.}, but we suspect that this framework has an M-theory analogue in the spirit of [7].
II. GEOMETRICAL ENGINEERING
In the framework of geometrical engineering the compactification manifold is described as a fibration of (singular) $K3$ surfaces over a base space of appropriate dimension. The collection of point-like (co-dimension four) singularities of the $K3$ fibres would then be a co-dimension four surface in the compactified manifold, giving rise to gauge theory of type corresponding to the singularities on each $K3$ fibre. Table I lists polynomials in $\mathbb{C}^3$ whose solutions can be (locally) taken to be the fibres for each corresponding gauge group.
One of the strengths of this framework is its generality: the local geometry is specified in terms of
TABLE I: Hypersurfaces in $\mathbb{C}^3$ giving rise to the desired orbifold singularities.
| Gauge group | Polynomial |
|-------------|------------|
| $SU_n$ ($\equiv A_{n-1}$) | $xy = z^n$ |
| $SO_{2n}$ ($\equiv D_n$) | $x^2 + y^2 z = z^{n-1}$ |
| $E_6$ | $x^5 = y^4 + z^4$ |
| $E_7$ | $x^2 + y^3 = yz^3$ |
| $E_8$ | $x^2 + y^6 = z^5$ |
the $K3$ fibres, so that the description applies equally well to compactifications in type IIa, M-theory, and F-theory—the difference being the dimension of the space over which the surfaces in Table I are fibred.
To obtain massless charged matter, however, additional structure is necessary. Specifically, at isolated points (in type IIa or M-theory) on the co-dimension four singular surface, the type of singularity of the $K3$ fibres must be enhanced by one rank. Mathematically, this requires that one can describe how the various polynomials in Table I can be deformed into each other; and the possible two-dimensional deformations have been classified [12].
For example, to describe the embedding of a massless 5 of $SU_5$ in type IIa, you would need to start with a $K3$-fibred Calabi-Yau where each of the fibres are of the type giving rise to $SU_5$ gauge theory. From Table I we see that these four-dimensional fibres are locally the set of solutions to the equation
$$xy = z^5,$$
(1)
in $\mathbb{C}^3$. Now, to obtain matter in the 5 representation, there would need to be an isolated point somewhere on the two-dimensional base space where the fibre is enhanced to $SU_6$ [1]. A description of the local geometry can be given by
$$xy = (z + 5t)(z - t)^5,$$
(2)
where $t$ is a complex coordinate on the base over which the $K3$'s are fibred. Notice that when $t = 0$ the equation describes precisely the fibre which would have given rise to $SU_6$ gauge theory if it were fibred over the entire base manifold. However, because it is the fibre only over the origin in the complex $t$-plane, there is no $SU_6$ gauge theory. Equation (2) is said to describe the ‘resolution’ $SU_6 \rightarrow SU_5$, which is found to give rise to $SU_5$ gauge theory at low energy with a single massless 5 at $t = 0$. This and many other explicit examples of such resolutions and the matter representations obtained are given in [1].
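The structure of equation (2) can be checked symbolically; the short sketch below (in Python with SymPy, purely illustrative) verifies that the $z^5$ coefficient vanishes identically, so the deformation stays within the $A$-series, and that the fibre degenerates to $z^6$ at $t = 0$.

```python
import sympy as sp

z, t = sp.symbols("z t")
fibre = sp.expand((z + 5 * t) * (z - t) ** 5)    # equation (2), fibre over t

assert fibre.coeff(z, 5) == 0        # the roots sum to zero for every t
assert fibre.subs(t, 0) == z**6      # SU_6 fibre only at t = 0
print(sp.factor(fibre))              # (z + 5t)(z - t)^5: SU_5 for t != 0
```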
One subtlety which makes the description above not automatically apply to M-theory constructions, however, is that in equation (2) the complex parameter $t$ is two-dimensional: taken as a coordinate over which the $K3$ surfaces are fibred, it gives rise to a six-dimensional compactification manifold. In M-theory, co-dimension four singularities are three-dimensional and chiral matter would live at isolated points on these three dimensional orbifold singularities. So in M-theory the resolution $SU_6 \rightarrow SU_5$ would need a three-dimensional deformation. Morally, the structure is identical to that described in equation (2), but the parameter $t$ must be upgraded to describe three-dimensional deformations. This can be done in terms of hyper-Kähler quotients. We suspect that all the resolutions described explicitly for type IIa here and in [1] can be upgraded to three-dimensional deformations needed in M-theory, and in many cases these generalizations have already been given [4, 5, 7].
III. ENGINEERING GEOMETRIC ANALOGIES TO UNIFICATION
The main result of this paper is that distinct conical singularities on a surface with some gauge symmetry can be deformed into each other in ways analogous to unification; and conversely, that a description of a single matter field in a unified theory can be ‘unfolded’ into distinct matter fields in a theory of lower gauge symmetry. Because the tools used to perform these unification-like deformations are precisely the same as those used to describe the singularities themselves, some care must be taken to avoid unnecessary confusion.
We will start by reinterpreting the tools used above to engineer charged matter, and then we will use both interpretations simultaneously to construct explicit examples of the geometric analogue to unified model building.
Consider again the resolution $SU_6 \rightarrow SU_5$ described by
$$xy = (z + 5s)(z - s)^5,$$
(3)
where we have replaced $t \mapsto s$ from equation (2) to make an interpretative distinction that will soon become clear. We propose to momentarily discuss pure gauge theory and ignore any description of matter. With this in mind, take a fixed (real) two-dimensional neighborhood over which every point is fibred by the solutions to equation (3) for any fixed value of $s$. Because the fibres are the same everywhere on the manifold, there is no matter: for any $s$ the geometry would give rise to pure gauge theory at low energy. For $s \neq 0$ solutions to equation (3) are $SU_5$ fibres and so the compactification manifold would give rise to pure $SU_5$. However, when $s = 0$ the fibres are all $SU_6$ and so the low-energy theory would be pure $SU_6$. Therefore $s$ is a ‘global’ parameter which deforms the gauge content of the theory, such that for arbitrary values of $s \neq 0$ the theory is pure $SU_5$ and for $s = 0$ it is pure $SU_6$. That this deformation is ‘smooth’ is apparent at least when $s \neq 0$.
An obvious question to ask is how this framework applies when conical singularities are present. We
will show that when the ADE-surface singularity changes because of some complex structure modulus such as $s$ above, the conical singularities giving rise to charged matter (often) behave as one would expect from unified model building intuition. This is best demonstrated with explicit examples.
Suppose that the singular $K3$ surfaces are fibred over a two-dimensional base space with local complex coordinate $t$. And say the four-dimensional fibre over the point $t$ is given by the solutions to
$$xy = (z + 5t)(z - t + 3s)^2(z - t - 2s)^3,$$
(4)
for a given value of $s$, which is now to be interpreted as a complex structure modulus deforming the entire local geometry near $t = 0$. When $s = 0$ the geometry is of course identical to our previous description of $SU_6 \to SU_5$ and so the theory would be $SU_5$ with a single massless $\mathbf{5}$ located at $t = 0$.
Consider now $s$ to be fixed at some non-zero value. The gauge theory is then $SU_3 \times SU_2 \times U_1$; for generic values of $t$, the fibres given by equation (4) have two singular points, at $x = y = z - t + 3s = 0$ and $x = y = z - t - 2s = 0$, and so the union of these points over the base manifold coordinatized by $t$ will be two distinct, two-dimensional singular surfaces: one giving rise to $SU_2$ and the other $SU_3$. These surfaces become coincident as a single $SU_5$ surface when $s = 0$.
Along the complex $t$-plane, there are two isolated points over which the singularities are enhanced: at $t = s/2$ the fibre is visibly $SU_3 \times SU_3$, and at $t = -s/3$ the fibre is $SU_4 \times SU_2$. Therefore the theory has two charged, massless fields, in the $(\mathbf{1}, \mathbf{2})_{-1/2}$ and $(\mathbf{3}, \mathbf{1})_{1/3}$ representations of $SU_3 \times SU_2 \times U_1$ at $t = s/2$ and $t = -s/3$, respectively. Figure 1 indicates the singularity structure as a function of $s$.
Notice how this description parallels unified model building: the $s = 0$ theory of one $\mathbf{5}$ of $SU_5$ deforms smoothly into one with $(\mathbf{3}, \mathbf{1})_{1/3} \oplus (\mathbf{1}, \mathbf{2})_{-1/2}$ of $SU_3 \times SU_2 \times U_1$.
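The quoted enhancement points of equation (4) are easy to verify symbolically; the following illustrative SymPy sketch confirms that the root multiplicities jump to $(3,3)$ at $t = s/2$ and to $(4,2)$ at $t = -s/3$.

```python
import sympy as sp

z, t, s = sp.symbols("z t s")
fibre = (z + 5 * t) * (z - t + 3 * s) ** 2 * (z - t - 2 * s) ** 3   # equation (4)

# At t = s/2 the simple root collides with the double root: multiplicities
# (3, 3), i.e. SU_3 x SU_3 enhancement.
print(sp.factor(fibre.subs(t, s / 2)))
# At t = -s/3 it collides with the triple root instead: multiplicities
# (4, 2), i.e. SU_4 x SU_2 enhancement.
print(sp.factor(fibre.subs(t, -s / 3)))
```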
Similarly, we may ask how a $\mathbf{10}$ of $SU_5$ would deform into distinct singularities supporting Standard Model matter fields. The fibre structure giving rise to a massless $\mathbf{10}$ of $SU_5$ is given as follows. Let $t$ be a local coordinate on the base space over which fibres are given by solutions to
$$x^2 + y^2z + 2yt^5 = \frac{1}{z}\left((z + t^2)^5 - t^{10}\right);$$
(5)
at $t = 0$, equation (5) describes an $SO_{10}$ fibre, while for $t \neq 0$ the fibres are $SU_5$—although in this case the result is harder to read off. This resolution, $SO_{10} \to SU_5$, gives rise to a $\mathbf{10}$ of $SU_5$ [1].
Following the same idea as before, the deformation of this singularity into $SU_3 \times SU_2$ is given by
$$x^2 + y^2z + 2y(t + s)^3(t - s)^2 = \frac{1}{z}\left((z + (t - s)^2)^2(z + (t + s)^2)^2 - (t - s)^4(t + s)^6\right),$$
(6)
where $s$ is again interpreted as a complex structure modulus deforming the geometry near the singularity. Notice as before that $s = 0$ describes an $SU_5$ theory with one massless $\mathbf{10}$ located at $t = 0$. However, for $s \neq 0$ there are again two orbifold singularities corresponding to $SU_3 \times SU_2 \times U_1$ gauge theory. At three distinct points on the complex $t$ plane the rank of the fibre is enhanced: $t = -s$, $t = 0$, and $t = s$ give rise to matter in the $(\mathbf{3}, \mathbf{1})_{-2/3}$, $(\mathbf{3}, \mathbf{2})_{1/6}$, and $(\mathbf{1}, \mathbf{1})_1$ representations of $SU_3 \times SU_2 \times U_1$, respectively. The structure of the deformation achieved by varying $s$ is shown in Figure 2.
Again, our intuition from unified model building is realized naturally in this framework.
IV. DISCUSSION
One of the primary reasons why geometrical engineering had not been more widely used phenomenologically is because the number, type, and relative locations of the singularities giving rise to various matter fields were explicitly *ad hoc*: the inherent local framework prevented relationships between distinct singularities from being discussed. In this paper, we have shown a framework in which these questions can be addressed concretely, systematically reducing the arbitrariness of these models.
Of course, the local nature of geometrical engineering is still inherent in this framework, and continues to prevent us from addressing questions about the global structure such as stability, quantum gravity, and the quantization of seemingly continuous parameters like $s$. However, in the spirit of [13], we think that local engineering is a good step toward realistic string phenomenology, and may perhaps offer new insights.
In this paper we explicitly illustrated the geometric unfolding of the matter content of an $SU_5$ grand unified model into the Standard Model. But the procedure can easily be generalized. It is not difficult to see how this will work for a more unified theory. For example, one can envision how an entire family could unfold out of a single $E_6 \to SO_{10}$ resolution (which starts as a $\mathbf{16}$ of $SO_{10}$), or how all three families of the Standard Model could be unfolded out of a single $E_8 \to SO_{10} \times SU_3$ or $E_8 \to E_6 \times SU_2$ resolution. However, these examples require more sophisticated tools of analysis, and so we have chosen to describe these in a forthcoming work.
V. ACKNOWLEDGEMENTS
This work originated from discussions with Malcolm Perry whose insights drove this work forward in its earliest steps. The author also appreciates helpful discussions, comments, and suggestions from Herman Verlinde, Sergei Gukov, Gordon Kane, Edward Witten, Paul Langacker, Bobby Acharya, Dmitry Malyshev, Matthew Buican, Piyush Kumar, and Konstantin Bobkov.
This research was supported in part by the Michigan Center for Theoretical Physics and a Graduate Research Fellowship from the National Science Foundation.
---
[1] S. Katz and C. Vafa, “Matter from Geometry,” *Nucl. Phys.*, vol. B497, pp. 146–154, 1997, hep-th/9606086.
[2] A. Klebanov, W. Lerche, and P. Mayr, “K3 Fibrations and Heterotic Type II String Duality,” *Phys. Lett.*, vol. B357, pp. 313–322, 1995, hep-th/9506112.
[3] M. Atiyah and E. Witten, “M-theory Dynamics on a Manifold of $G_2$ Holonomy,” *Adv. Theor. Math. Phys.*, vol. 6, pp. 1–106, 2003, hep-th/0107177.
[4] E. Witten, “Anomaly Cancellation on $G_2$ Manifolds,” 2001, hep-th/0108165.
[5] B. Acharya and E. Witten, “Chiral Fermions from Manifolds of $G_2$ Holonomy,” 2001, hep-th/0109152.
[6] B. S. Acharya and S. Gukov, “M-theory and Singularities of Exceptional Holonomy Manifolds,” *Phys. Rept.*, vol. 392, pp. 121–189, 2004, hep-th/0409191.
[7] P. Berglund and A. Brandhuber, “Matter from $G_2$ Manifolds,” *Nucl. Phys.*, vol. B641, pp. 351–375, 2002, hep-th/0205184.
[8] E. Witten, “Deconstruction, $G_2$ Holonomy, and Doublet-Triplet Splitting,” 2001, hep-ph/0201018.
[9] T. Friedmann and E. Witten, “Unification Scale, Proton Decay, and Manifolds of $G(2)$ Holonomy,” *Adv. Theor. Math. Phys.*, vol. 7, pp. 577–617, 2003, hep-th/0211268.
[10] B. Acharya, K. Bobkov, G. Kane, P. Kumar, and D. Vaman, “An M Theory Solution to the Hierarchy Problem,” *Phys. Rev. Lett.*, vol. 97, p. 191601, 2006, hep-th/0606262.
[11] B. S. Acharya, K. Bobkov, G. L. Kane, P. Kumar, and J. Shao, “Explaining the Electroweak Scale and Stabilizing Moduli in M-Theory,” 2007, hep-th/0701034.
[12] S. Katz and D. Morrison, “Gorenstein Threefold Singularities with Small Resolutions via Invariant Theory for Weyl Groups,” *J. Algebraic Geometry*, vol. 1, pp. 449–530, 1992.
[13] H. Verlinde and M. Wijnholt, “Building the Standard Model on a D3-Brane,” *JHEP*, vol. 01, p. 106, 2007, hep-th/0508089.
|
String Sanitization: A Combinatorial Approach
Giulia Bernardini, Huiping Chen, Alessio Conte, Roberto Grossi, Grigorios Loukides, Nadia Pisanti, Solon Pissis, Giovanna Rosone
To cite this version:
Giulia Bernardini, Huiping Chen, Alessio Conte, Roberto Grossi, Grigorios Loukides, et al.. String Sanitization: A Combinatorial Approach. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2020, Würzburg, Germany. pp.627-644, 10.1007/978-3-030-46150-8_37. hal-03085832
HAL Id: hal-03085832
https://hal.inria.fr/hal-03085832
Submitted on 22 Dec 2020
String Sanitization: A Combinatorial Approach
Giulia Bernardini\textsuperscript{1}, Huiping Chen\textsuperscript{2}, Alessio Conte\textsuperscript{3}, Roberto Grossi\textsuperscript{3,4}, Grigorios Loukides\textsuperscript{2} (✉), Nadia Pisanti\textsuperscript{3,4}, Solon P. Pissis\textsuperscript{4,5}, and Giovanna Rosone\textsuperscript{3}
\textsuperscript{1} Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy, email@example.com
\textsuperscript{2} Department of Informatics, King’s College London, London, UK [huiping.chen,grigorios.loukides]@kcl.ac.uk
\textsuperscript{3} Department of Computer Science, University of Pisa, Pisa, Italy [conte,grossi,pisanti]@di.unipi.it, firstname.lastname@example.org
\textsuperscript{4} ERABLE Team, INRIA, Lyon, France
\textsuperscript{5} CWI, Amsterdam, The Netherlands, email@example.com
Abstract. String data are often disseminated to support applications such as location-based service provision or DNA sequence analysis. This dissemination, however, may expose sensitive patterns that model confidential knowledge (\textit{e.g.}, trips to mental health clinics from a string representing a user’s location history). In this paper, we consider the problem of sanitizing a string by concealing the occurrences of sensitive patterns, while maintaining data utility. First, we propose a time-optimal algorithm, TFS-ALGO, to construct the shortest string preserving the order of appearance and the frequency of all non-sensitive patterns. Such a string allows accurately performing tasks based on the sequential nature and pattern frequencies of the string. Second, we propose a time-optimal algorithm, PFS-ALGO, which preserves a partial order of appearance of non-sensitive patterns but produces a much shorter string that can be analyzed more efficiently. The strings produced by either of these algorithms may reveal the location of sensitive patterns. In response, we propose a heuristic, MCSR-ALGO, which replaces letters in these strings with carefully selected letters, so that sensitive patterns are not reinstated and occurrences of spurious patterns are prevented. We implemented our sanitization approach that applies TFS-ALGO, PFS-ALGO and then MCSR-ALGO and experimentally show that it is effective and efficient.
1 Introduction
A large number of applications, in domains ranging from transportation to web analytics and bioinformatics, feature data modeled as \textit{strings}, \textit{i.e.}, sequences of letters over some finite alphabet. For instance, a string may represent the history of visited locations of one or more individuals, with each letter corresponding to a location. Similarly, it may represent the history of search query terms of one or more web users, with letters corresponding to query terms, or a medically important part of the DNA sequence of a patient, with letters corresponding
to DNA bases. Analyzing such strings is key in applications including location-based service provision, product recommendation, and DNA sequence analysis. Therefore, such strings are often disseminated beyond the party that has collected them. For example, location-based service providers often outsource their data to data analytics companies who perform tasks such as similarity evaluation between strings [14], and retailers outsource their data to marketing agencies who perform tasks such as mining frequent patterns from the strings [15].
However, disseminating a string intact may result in the exposure of confidential knowledge, such as trips to mental health clinics in transportation data [20], query terms revealing political beliefs or sexual orientation of individuals in web data [17], or diseases associated with certain parts of DNA data [16]. Thus, it may be necessary to sanitize a string prior to its dissemination, so that confidential knowledge is not exposed. At the same time, it is important to preserve the utility of the sanitized string, so that data protection does not outweigh the benefits of disseminating the string to the party that disseminates or analyzes the string, or to the society at large. For example, a retailer should still be able to obtain actionable knowledge in the form of frequent patterns from the marketing agency who analyzed their outsourced data; and researchers should still be able to perform analyses such as identifying significant patterns in DNA sequences.
**Our Model and Setting.** Motivated by the discussion above, we introduce the following model which we call *Combinatorial String Dissemination* (CSD). In CSD, a party has a string $W$ that it seeks to disseminate, while satisfying a set of *constraints* and a set of desirable *properties*. For instance, the constraints aim to capture privacy requirements and the properties aim to capture data utility considerations (*e.g.*, posed by some other party based on applications). To satisfy both, $W$ must be transformed to a string $X$ by applying a sequence of edit operations. The computational task is to determine this sequence of edit operations so that $X$ satisfies the desirable properties subject to the constraints.
Under the CSD model, we consider a specific setting in which the sanitized string $X$ must satisfy the following constraint **C1**: for an integer $k > 0$, no given length-$k$ substring (also called pattern) modeling confidential knowledge should occur in $X$. We call each such length-$k$ substring a *sensitive pattern*. We aim at finding the shortest possible string $X$ satisfying the following desired properties: (**P1**) the order of appearance of all other length-$k$ substrings (*non-sensitive patterns*) is the same in $W$ and in $X$; and (**P2**) the frequency of these length-$k$ substrings is the same in $W$ and in $X$. The problem of constructing $X$ in this setting is referred to as TFS (Total order, Frequency, Sanitization). Clearly, substrings of arbitrary lengths can be hidden from $X$ by setting $k$ equal to the length of the shortest substring we wish to hide, and then setting, for each of these substrings, any length-$k$ substring as sensitive.
Our setting is motivated by real-world applications involving string dissemination. In these applications, a *data custodian* disseminates the sanitized version $X$ of a string $W$ to a *data recipient*, for the purpose of analysis (*e.g.*, mining). $W$ contains confidential information that the data custodian needs to hide, so that it does not occur in $X$. Such information is specified by the data custodian based on
domain expertise, as in [1, 4, 11, 15]. At the same time, the data recipient specifies **P1** and **P2** that $X$ must satisfy in order to be useful. These properties map directly to common data utility considerations in string analysis. By satisfying **P1**, $X$ allows tasks based on the sequential nature of the string, such as blockwise $q$-gram distance computation [12], to be performed accurately. By satisfying **P2**, $X$ allows computing the frequency of length-$k$ substrings [19] and hence mining frequent length-$k$ substrings with no utility loss. We require that $X$ has minimal length so that it does not contain redundant information. For instance, the string which is constructed by concatenating all non-sensitive length-$k$ substrings in $W$ and separating them with a special letter that does not occur in $W$, satisfies **P1** and **P2** but is not the shortest possible. Such a string $X$ will have a negative impact on the efficiency of any subsequent analysis tasks to be performed on it.
Note, existing works for sequential data sanitization (*e.g.*, [4, 11, 13, 15, 22]) or anonymization (*e.g.*, [2, 5, 7]) cannot be applied to our setting (see Section 7).
**Our Contributions.** We define the TFS problem for string sanitization and a variant of it, referred to as PFS (Partial order, Frequency, Sanitization), which aims at producing an even shorter string $Y$ by relaxing **P1** of TFS. Our algorithms for TFS and PFS construct strings $X$ and $Y$ using a separator letter $\#$, which is not contained in the alphabet of $W$. This prevents occurrences of sensitive patterns in $X$ or $Y$. The algorithms repeat proper substrings of sensitive patterns so that the frequency of non-sensitive patterns overlapping with sensitive ones does not change. For $X$, we give a deterministic construction which may be easily reversible (*i.e.*, it may enable a data recipient to construct $W$ from $X$), because the occurrences of $\#$ reveal the exact location of sensitive patterns. For $Y$, we give a construction which breaks several ties arbitrarily, thus being less easily reversible. We further address the reversibility issue by defining the MCSR (Minimum-Cost Separators Replacement) problem and designing an algorithm for dealing with it. In MCSR, we seek to replace all separators, so that the location of sensitive patterns is not revealed, while preserving data utility. We make the following specific contributions:
1. We design an algorithm for solving the TFS problem in $O(kn)$ time, where $n$ is the length of $W$. In fact we prove that $O(kn)$ time is worst-case optimal by showing that the length of $X$ is in $\Theta(kn)$ in the worst case. The output of the algorithm is a string $X$ consisting of a sequence of substrings over the alphabet of $W$ separated by $\#$ (see Example 1 below). An important feature of our algorithm, which is useful in the efficient construction of $Y$ discussed next, is that it can be implemented to produce an $O(n)$-sized representation of $X$ with respect to $W$ in $O(n)$ time. See Section 3.
*Example 1.* Let $W = aabaaababbbaab$, $k = 4$, and the set of sensitive patterns be \{abaa, baaa, aaab, bbba, bbaa\}. The string $X = aaba\#\text{aababbb}\#\text{baab}$ consists of three substrings over the alphabet \{a, b\} separated by $\#$. Note that no sensitive pattern occurs in $X$, while all non-sensitive substrings of length 4 have the same frequency in $W$ and in $X$ (*e.g.*, aaba appears twice) and appear in the same order in $W$ and in $X$ (*e.g.*, babb precedes abbb). Also, note that any string shorter than $X$ would either create sensitive patterns or change the frequencies (\textit{e.g.}, removing the last letter of $X$ creates a string in which $\texttt{baab}$ no longer appears). \hfill \Box
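These claims can be machine-checked. The following minimal Python sketch (our names, not the paper's code) verifies C1, P1 and P2 for Example 1, with the sensitive patterns given through their occurrence set $S = \{1, 2, 3, 8, 9\}$ in $W$:

```python
# Sanity check for Example 1 (k = 4).
W = "aabaaababbbaab"
X = "aaba#aababbb#baab"
k = 4
S = {1, 2, 3, 8, 9}                       # occurrences of sensitive patterns in W
sensitive = {W[i:i + k] for i in S}       # {"abaa", "baaa", "aaab", "bbba", "bbaa"}

def kmers(T, k):
    """Length-k substrings over the alphabet of W (windows containing '#' are skipped)."""
    return [T[i:i + k] for i in range(len(T) - k + 1) if "#" not in T[i:i + k]]

assert not sensitive & set(kmers(X, k))             # C1: no sensitive pattern occurs in X
non_sensitive = [u for u in kmers(W, k) if u not in sensitive]
assert non_sensitive == kmers(X, k)                 # P1 + P2: same order and frequencies
```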
2. We define the PFS problem relaxing \textbf{P1} of TFS to produce shorter strings that are more efficient to analyze. Instead of a \textit{total order} (\textbf{P1}), we require a \textit{partial order} (\textbf{$\Pi$1}) that preserves the order of appearance only for sequences of successive non-sensitive length-$k$ substrings that overlap by $k-1$ letters. This makes sense because the order of two successive non-sensitive length-$k$ substrings with no length-$(k-1)$ overlap has anyway been “interrupted” (by a sensitive pattern). We exploit this observation to shorten the string further. Specifically, we design an algorithm that solves PFS in the optimal $\mathcal{O}(n + |Y|)$ time, where $|Y|$ is the length of $Y$, using the $\mathcal{O}(n)$-sized representation of $X$. See Section 4.
\textit{Example 2. (Cont’d from Example 1)} Recall that $W = \texttt{aabaaababbbaab}$. A string $Y$ is $\texttt{aaba}\#\texttt{baababbb}$. The order of $\texttt{babb}$ and $\texttt{abbb}$ is preserved in $Y$ since they are successive, non-sensitive, and with an overlap of $k-1=3$ letters. The order of $\texttt{abbb}$ and $\texttt{baab}$, which are successive and non-sensitive, is not preserved since they do not have an overlap of $k-1=3$ letters. \hfill \Box
3. We define the MCSR problem, which seeks to produce a string $Z$, by deleting or replacing all separators in $Y$ with letters from the alphabet of $W$ so that: no sensitive patterns are reinstated in $Z$; occurrences of spurious patterns that may not be mined from $W$ but can be mined from $Z$, for a given support threshold, are prevented; the distortion incurred by the replacements in $Z$ is bounded. The first requirement is to preserve privacy and the next two to preserve data utility. We show that MCSR is NP-hard and propose a heuristic to attack it. See Section 5.
4. We implemented our combinatorial approach for sanitizing a string $W$ (\textit{i.e.}, all aforementioned algorithms implementing the pipeline $W \rightarrow X \rightarrow Y \rightarrow Z$) and show its effectiveness and efficiency on real and synthetic data. See Section 6.
\section{Preliminaries, Problem Statements, and Main Results}
\textbf{Preliminaries.} Let $T = T[0]T[1]\ldots T[n-1]$ be a \textit{string} of length $|T| = n$ over a finite ordered alphabet $\Sigma$ of size $|\Sigma| = \sigma$. By $\Sigma^*$ we denote the set of all strings over $\Sigma$. By $\Sigma^k$ we denote the set of all length-$k$ strings over $\Sigma$. For two positions $i$ and $j$ on $T$, we denote by $T[i..j] = T[i]\ldots T[j]$ the \textit{substring} of $T$ that starts at position $i$ and ends at position $j$ of $T$. By $\varepsilon$ we denote the \textit{empty string} of length 0. A \textit{prefix} of $T$ is a substring of the form $T[0..j]$, and a suffix of $T$ is a substring of the form $T[i..n-1]$. A \textit{proper} prefix (suffix) of a string is not equal to the string itself. By $\text{Freq}_V(U)$ we denote the number of occurrences of string $U$ in string $V$. Given two strings $U$ and $V$ we say that $U$ has a \textit{suffix-prefix overlap} of length $\ell > 0$ with $V$ if and only if the length-$\ell$ suffix of $U$ is equal to the length-$\ell$ prefix of $V$, \textit{i.e.}, $U[|U| - \ell .. |U| - 1] = V[0 .. \ell - 1]$.
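For concreteness, both notions above admit direct implementations; the following Python sketch (the names are ours) makes them explicit:

```python
def freq(V: str, U: str) -> int:
    """Freq_V(U): number of (possibly overlapping) occurrences of U in V."""
    return sum(1 for i in range(len(V) - len(U) + 1) if V[i:i + len(U)] == U)

def has_overlap(U: str, V: str, ell: int) -> bool:
    """True iff U has a suffix-prefix overlap of length ell > 0 with V."""
    return 0 < ell <= min(len(U), len(V)) and U[len(U) - ell:] == V[:ell]
```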
We fix a string $W$ of length $n$ over an alphabet $\Sigma = \{1, \ldots, n^{O(1)}\}$ and an integer $0 < k < n$. We use the terms length-$k$ string and \textit{pattern} interchangeably. An occurrence of a pattern is uniquely represented by its starting position. Let $\mathcal{S}$ be a set of positions over $\{0, \ldots, n-k\}$ with the following closure property: for every $i \in \mathcal{S}$, if there exists $j$ such that $W[j..j+k-1] = W[i..i+k-1]$, then
$j \in S$. That is, if an occurrence of a pattern is in $S$ all its occurrences are in $S$. A substring $W[i..i+k-1]$ of $W$ is called sensitive if and only if $i \in S$. $S$ is thus the set of occurrences of sensitive patterns. The difference set $\mathcal{I} = \{0, \ldots, n-k\} \setminus S$ is the set of occurrences of non-sensitive patterns.
For any string $U$, we denote by $\mathcal{I}_U$ the set of occurrences of non-sensitive length-$k$ strings over $\Sigma$. (We have that $\mathcal{I}_W = \mathcal{I}$.) We call an occurrence $i$ the t-predecessor of another occurrence $j$ in $\mathcal{I}_U$ if and only if $i$ is the largest element in $\mathcal{I}_U$ that is less than $j$. This relation induces a strict total order on the occurrences in $\mathcal{I}_U$. We call $i$ the p-predecessor of $j$ in $\mathcal{I}_U$ if and only if $i$ is the t-predecessor of $j$ in $\mathcal{I}_U$ and $U[i..i+k-1]$ has a suffix-prefix overlap of length $k-1$ with $U[j..j+k-1]$. This relation induces a strict partial order on the occurrences in $\mathcal{I}_U$. We call a subset $\mathcal{J}$ of $\mathcal{I}_U$ a t-chain (resp., p-chain) if for all elements in $\mathcal{J}$ except the minimum one, their t-predecessor (resp., p-predecessor) is also in $\mathcal{J}$. For two strings $U$ and $V$, chains $\mathcal{J}_U$ and $\mathcal{J}_V$ are equivalent, denoted by $\mathcal{J}_U \equiv \mathcal{J}_V$, if and only if $|\mathcal{J}_U| = |\mathcal{J}_V|$ and $U[u..u+k-1] = V[v..v+k-1]$, where $u$ is the $j$th smallest element of $\mathcal{J}_U$ and $v$ is the $j$th smallest of $\mathcal{J}_V$, for all $j \leq |\mathcal{J}_U|$.
**Problem Statements and Main Results.**
**Problem 1 (TFS).** Given $W$, $k$, $S$, and $\mathcal{I}$ construct the shortest string $X$:
C1 $X$ does not contain any sensitive pattern.
P1 $\mathcal{I}_W \equiv \mathcal{I}_X$, i.e., the t-chains $\mathcal{I}_W$ and $\mathcal{I}_X$ are equivalent.
P2 $\text{Freq}_X(U) = \text{Freq}_W(U)$, for all $U \in \Sigma^k \setminus \{W[i..i+k-1] : i \in S\}$.
TFS requires constructing the shortest string $X$ in which all sensitive patterns from $W$ are concealed (C1), while preserving the order (P1) and the frequency (P2) of all non-sensitive patterns. Our first result is the following.
**Theorem 1.** Let $W$ be a string of length $n$ over $\Sigma = \{1, \ldots, n^{O(1)}\}$. Given $k < n$ and $S$, TFS-ALGO solves Problem 1 in $O(kn)$ time, which is worst-case optimal. An $O(n)$-sized representation of $X$ can be built in $O(n)$ time.
P1 implies P2, but P1 is a strong assumption that may result in long output strings that are inefficient to analyze. We thus relax P1 to require that the order of appearance remains the same only for sequences of successive non-sensitive length-$k$ substrings that also overlap by $k-1$ letters (p-chains).
**Problem 2 (PFS).** Given $W$, $k$, $S$, and $\mathcal{I}$ construct a shortest string $Y$:
C1 $Y$ does not contain any sensitive pattern.
Π1 For any p-chain $\mathcal{J}_W$ of $\mathcal{I}_W$, there is a p-chain $\mathcal{J}_Y$ of $\mathcal{I}_Y$ such that $\mathcal{J}_W \equiv \mathcal{J}_Y$.
P2 $\text{Freq}_Y(U) = \text{Freq}_W(U)$, for all $U \in \Sigma^k \setminus \{W[i..i+k-1] : i \in S\}$.
Our second result, which builds on Theorem 1, is the following.
**Theorem 2.** Let $W$ be a string of length $n$ over $\Sigma = \{1, \ldots, n^{O(1)}\}$. Given $k < n$ and $S$, PFS-ALGO solves Problem 2 in the optimal $O(n + |Y|)$ time.
To arrive at Theorems 1 and 2, we use a special letter (separator) $\# \notin \Sigma$ when required. However, the occurrences of $\#$ may reveal the locations of sensitive patterns. We thus seek to delete or replace the occurrences of $\#$ in $Y$ with letters from $\Sigma$. The new string $Z$ should not reinstate any sensitive pattern. Given an integer threshold $\tau > 0$, we call pattern $U \in \Sigma^k$ a $\tau$-ghost in $Z$ if and only if $\text{Freq}_W(U) < \tau$ but $\text{Freq}_Z(U) \geq \tau$. Moreover, we seek to prevent $\tau$-ghost occurrences in $Z$ by also bounding the total weight of the letter choices we make to replace the occurrences of $\#$. This is the MCSR problem. We show that already a restricted version of the MCSR problem, namely, the version when $k = 1$, is NP-hard via the Multiple Choice Knapsack (MCK) problem [18].
**Theorem 3.** The MCSR problem is NP-hard.
Based on this connection, we propose a non-trivial heuristic algorithm to attack the MCSR problem for the general case of an arbitrary $k$.
### 3 TFS-ALGO
We convert string $W$ into a string $X$ over alphabet $\Sigma \cup \{\#\}$, $\# \notin \Sigma$, by reading the letters of $W$, from left to right, and appending them to $X$ while enforcing the following two rules:
**R1:** When the last letter of a sensitive substring $U$ is read from $W$, we append $\#$ to $X$ (essentially replacing this last letter of $U$ with $\#$). Then, once the last letter of the succeeding non-sensitive substring $V$ (in the t-predecessor order) is read, we append the whole of $V$ right after $\#$.
**R2:** When the $k - 1$ letters before $\#$ are the same as the $k - 1$ letters after $\#$, we remove $\#$ and the $k - 1$ succeeding letters (inspect Fig. 1).

**R1** prevents $U$ from occurring in $X$, and **R2** reduces the length of $X$ (i.e., it allows protecting sensitive patterns with fewer extra letters). Both rules leave unchanged the order and frequencies of non-sensitive patterns. It is crucial to observe that applying the idea behind **R2** on more than $k - 1$ letters would decrease the frequency of some pattern, while applying it on fewer than $k - 1$ letters would create new patterns. Thus, **R2** must be applied with exactly $k - 1$ letters.
Let $C$ be an array of size $n$ that stores the occurrences of sensitive and non-sensitive patterns: $C[i] = 1$ if $i \in \mathcal{S}$ and $C[i] = 0$ if $i \in \mathcal{I}$. For technical
reasons we set the last $k - 1$ values in $C$ equal to $C[n - k]$; i.e., $C[n - k + 1] := \ldots := C[n - 1] := C[n - k]$. Note that $C$ is constructible from $\mathcal{S}$ in $\mathcal{O}(n)$ time. Given $C$ and $k < n$, TFS-ALGO efficiently constructs $X$ by implementing **R1** and **R2** concurrently as opposed to implementing **R1** and then **R2** (see the proof of Lemma 1 for details of the workings of TFS-ALGO and Fig. 1 for an example). We next show that string $X$ enjoys several properties.
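A minimal sketch of this construction (the function name is ours):

```python
def build_C(n: int, k: int, S: set[int]) -> list[int]:
    """C[i] = 1 iff i is the occurrence of a sensitive pattern; the last k-1
    entries are copied from C[n-k], as required above. Runs in O(n) time."""
    C = [1 if i in S else 0 for i in range(n)]
    for i in range(n - k + 1, n):
        C[i] = C[n - k]
    return C
```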
**Lemma 1.** Let $W$ be a string of length $n$ over $\Sigma$. Given $k < n$ and array $C$, TFS-ALGO constructs the shortest string $X$ such that the following hold:
1. There exists no $W[i \ldots i + k - 1]$ with $C[i] = 1$ occurring in $X$ (**C1**).
2. $\mathcal{I}_W \equiv \mathcal{I}_X$, i.e., the order of substrings $W[i \ldots i + k - 1]$, for all $i$ such that $C[i] = 0$, is the same in $W$ and in $X$; conversely, the order of all substrings $U \in \Sigma^k$ of $X$ is the same in $X$ and in $W$ (**P1**).
3. $\text{Freq}_X(U) = \text{Freq}_W(U)$, for all $U \in \Sigma^k \setminus \{W[i \ldots i + k - 1] : C[i] = 1\}$ (**P2**).
4. The occurrences of letter $\#$ in $X$ are at most $\lfloor \frac{n-k+1}{2} \rfloor$ and they are at least $k$ positions apart (**P3**).
5. $0 \leq |X| \leq \lceil \frac{n-k+1}{2} \rceil \cdot k + \lfloor \frac{n-k+1}{2} \rfloor$ and these bounds are tight (**P4**).
**Proof.** Proofs of **C1** and **P1-P4** can be found in [3]. We prove here that $X$ has minimal length. Let $X_j$ be the prefix of string $X$ obtained by processing the first $j$ letters of string $W$. Let $j_{\min} = \min\{i | C[i] = 0\} + k$. We will proceed by induction on $j$, claiming that $X_j$ is the shortest string such that **C1** and **P1-P4** hold for $W[0 \ldots j], \forall j_{\min} \leq j \leq |W| - 1$. We call such a string *optimal*.
*Base case:* $j = j_{\min}$. By Lines 3-4 of TFS-ALGO, $X_j$ is equal to the first non-sensitive length-$k$ substring of $W$, and it is clearly the shortest string such that **C1** and **P1-P4** hold for $W[0 \ldots j]$.
*Inductive hypothesis and step:* $X_{j-1}$ is optimal for $j > j_{\min}$. If $C[j - k] = C[j - k + 1] = 0$, $X_j = X_{j-1}W[j]$ and this is clearly optimal. If $C[j - k + 1] = 1$, $X_j = X_{j-1}$, thus still optimal. Finally, if $C[j - k] = 1$ and $C[j - k + 1] = 0$ we have two subcases: if $W[f \ldots f + k - 2] = W[j - k + 1 \ldots j - 1]$ then $X_j = X_{j-1}W[j]$, and once again $X_j$ is evidently optimal. Otherwise, $X_j = X_{j-1}\#W[j - k + 1 \ldots j]$. Suppose by contradiction that there exists a shorter $X'_j$ such that **C1** and **P1-P4** still hold: it must either drop $\#$ or append fewer than $k$ letters after $\#$. If we appended fewer than $k$ letters after $\#$, since TFS-ALGO will never read $W[j]$ again, **P2-P3** would be violated, as an occurrence of $W[j - k + 1 \ldots j]$ would be missed. Without $\#$, the last $k$ letters of $X_{j-1}W[j - k + 1 \ldots j]$ would violate either **C1** or **P1** and **P2** (since we suppose $W[f \ldots f + k - 2] \neq W[j - k + 1 \ldots j - 1]$). Then $X_j$ is optimal. □
**Theorem 1.** Let $W$ be a string of length $n$ over $\Sigma = \{1, \ldots, n^{O(1)}\}$. Given $k < n$ and $\mathcal{S}$, TFS-ALGO solves Problem 1 in $\mathcal{O}(kn)$ time, which is worst-case optimal. An $\mathcal{O}(n)$-sized representation of $X$ can be built in $\mathcal{O}(n)$ time.
**Proof.** For the first part inspect TFS-ALGO. Lines 2-4 can be realized in $\mathcal{O}(n)$ time. The *while* loop in Line 5 is executed no more than $n$ times, and every
TFS-ALGO($W \in \Sigma^n$, $C, k, \# \notin \Sigma$)
1 $X \leftarrow \varepsilon$; $j \leftarrow |W|$; $\ell \leftarrow 0$;
2 $j \leftarrow \min\{i|C[i] = 0\}$; /* $j$ is the leftmost pos of a non-sens. pattern */
3 if $j + k - 1 < |W|$ then /* Append the first non-sens. pattern to $X$ */
4 $X[0..k - 1] \leftarrow W[j..j + k - 1]$; $j \leftarrow j + k$; $\ell \leftarrow \ell + k$;
5 while $j < |W|$ do /* Examine two consecutive patterns */
6 $p \leftarrow j - k$; $c \leftarrow p + 1$;
7 if $C[p] = C[c] = 0$ then /* If both are non-sens., append the last letter of the leftmost one to $X$ */
8 $X[\ell] \leftarrow W[j]$; $\ell \leftarrow \ell + 1$; $j \leftarrow j + 1$;
9 if $C[p] = 0 \land C[c] = 1$ then /* If the rightmost is sens., mark it and advance $j$ */
10 $f \leftarrow c$; $j \leftarrow j + 1$;
11 if $C[p] = C[c] = 1$ then $j \leftarrow j + 1$; /* If both are sens., advance $j$ */
12 if $C[p] = 1 \land C[c] = 0$ then /* If the leftmost is sens. and the rightmost is not */
13 if $W[c..c + k - 2] = W[f..f + k - 2]$ then /* If the last marked sens. pattern and the current non-sens. overlap by $k - 1$, append the last letter of the latter to $X$ */
14 $X[\ell] \leftarrow W[j]$; $\ell \leftarrow \ell + 1$; $j \leftarrow j + 1$;
15 else /* Else append $\#$ followed by the whole current non-sens. pattern to $X$ */
16 $X[\ell] \leftarrow \#$; $\ell \leftarrow \ell + 1$;
17 $X[\ell.. \ell + k - 1] \leftarrow W[j - k + 1..j]$; $\ell \leftarrow \ell + k$; $j \leftarrow j + 1$;
18 report $X$
operation inside the loop takes $O(1)$ time except for Line 13 and Line 17 which take $O(k)$ time. Correctness and optimality follow directly from Lemma 1 (P4).
For the second part, we assume that $X$ is represented by $W$ and a sequence of pointers $[i, j]$ to $W$ interleaved (if necessary) by occurrences of $\#$. In Line 17, we can use an interval $[i, j]$ to represent the length-$k$ substring of $W$ added to $X$. In all other lines (Lines 4, 8 and 14) we can use $[i, i]$ as one letter is added to $X$ per one letter of $W$. By Lemma 1 we can have at most $\lfloor \frac{n-k+1}{2} \rfloor$ occurrences of letter $\#$. The check at Line 13 can be implemented in constant time after linear-time pre-processing of $W$ for longest common extension queries [9]. All other operations take in total linear time in $n$. Thus there exists an $O(n)$-sized representation of $X$ and it is constructible in $O(n)$ time.
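For illustration, here is a direct Python transcription of the pseudocode above (a sketch, not the authors' implementation; the substring comparison implementing Line 13 costs $O(k)$ here, whereas constant time per comparison is achieved via the LCE queries mentioned above):

```python
def tfs_algo(W: str, C: list[int], k: int, sep: str = "#") -> str:
    """A transcription of TFS-ALGO; C is the sensitivity array of size |W|."""
    f = 0                                            # last marked sensitive occurrence
    j = min(i for i in range(len(C)) if C[i] == 0)   # leftmost non-sensitive occurrence
    if j + k - 1 >= len(W):                          # no complete non-sensitive pattern
        return ""
    X = [W[j:j + k]]                                 # append the first non-sensitive pattern
    j += k
    while j < len(W):                                # examine two consecutive occurrences
        p, c = j - k, j - k + 1
        if C[p] == 0 and C[c] == 0:                  # both non-sensitive
            X.append(W[j]); j += 1
        elif C[p] == 0 and C[c] == 1:                # rightmost is sensitive: mark it
            f = c; j += 1
        elif C[p] == 1 and C[c] == 1:                # both sensitive: advance
            j += 1
        elif W[c:c + k - 1] == W[f:f + k - 1]:       # (k-1)-overlap with marked pattern (R2)
            X.append(W[j]); j += 1
        else:                                        # no overlap: separator + pattern (R1)
            X.append(sep); X.append(W[j - k + 1:j + 1]); j += 1
    return "".join(X)

# With build_C from the earlier sketch and Example 1's data:
# tfs_algo("aabaaababbbaab", build_C(14, 4, {1, 2, 3, 8, 9}), 4) == "aaba#aababbb#baab"
```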
4 PFS-ALGO
Lemma 1 tells us that $X$ is the shortest string satisfying constraint C1 and properties P1-P4. If we were to drop P1 and employ the partial order Π1 (see Problem 2), the length of $X = X_1\#\ldots\#X_N$ would not always be minimal: if a
permutation of the strings $X_1, \ldots, X_N$ contains pairs $X_i, X_j$ with a suffix-prefix overlap of length $\ell = k - 1$, we may further apply R2, obtaining a shorter string while still satisfying Π1.
We propose PFS-ALGO to find such a permutation efficiently constructing a shorter string $Y$ from $W$. The crux of our algorithm is an efficient method to solve a variant of the classic NP-complete Shortest Common Superstring (SCS) problem [10]. Specifically our algorithm: (I) Computes string $X$ using Theorem 1. (II) Constructs a collection $B'$ of strings, each of two symbols (two identifiers) and in a one-to-one correspondence with the elements of $B = \{X_1, \ldots, X_N\}$: the first (resp., second) symbol of the $i$th element of $B'$ is a unique identifier of the string corresponding to the $\ell$-length prefix (resp., suffix) of the $i$th element of $B$. (III) Computes a shortest string containing every element in $B'$ as a distinct substring. (IV) Constructs $Y$ by mapping back each element to its distinct substring in $B$. If there are multiple possible shortest strings, one is selected arbitrarily.
Example 3 (Illustration of the workings of PFS-ALGO). Let $\ell = k - 1 = 3$ and
$$X = aabbc\#bccaab\#bbca\#aaabac\#aabcbbc.$$
The collection $B$ is aabbc, bccaab, bbca, aaabac, aabcbbc, and the collection $B'$ is 24, 62, 45, 13, 24 (id of prefix · id of suffix). A shortest string containing all elements of $B'$ as distinct substrings is: 13 · 24 · 6245 (obtained by permuting the elements of $B'$ as 13, 24, 62, 24, 45 and then applying R2 twice). This shortest string is mapped back to the solution $Y = aaabac\#aabbc\#bccaabcbbca$. For example, 13 is mapped back to aaabac. Note, $Y$ contains two occurrences of # and has length 24, while $X$ contains 4 occurrences of # and has length 32. □
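The two-symbol encoding of Example 3 can be reproduced as follows (a sketch; we choose the lexicographic ranks of the distinct length-$\ell$ affixes as identifiers, while the construction only requires the identifiers to be unique):

```python
# Encoding step of Example 3.
B = ["aabbc", "bccaab", "bbca", "aaabac", "aabcbbc"]
ell = 3
affixes = sorted({s[:ell] for s in B} | {s[-ell:] for s in B})
ident = {a: r + 1 for r, a in enumerate(affixes)}           # unique ids 1..6
B_prime = [(ident[s[:ell]], ident[s[-ell:]]) for s in B]
print(B_prime)  # [(2, 4), (6, 2), (4, 5), (1, 3), (2, 4)], i.e. 24, 62, 45, 13, 24
```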
We now present the details of PFS-ALGO. We first introduce the Fixed-Overlap Shortest String with Multiplicities (FO-SSM) problem: Given a collection $B$ of strings $B_1, \ldots, B_{|B|}$ and an integer $\ell$, with $|B_i| > \ell$, for all $1 \leq i \leq |B|$, FO-SSM seeks to find a shortest string containing each element of $B$ as a distinct substring using the following operations on any pair of strings $B_i, B_j$:
1. concat($B_i, B_j$) = $B_i \cdot B_j$;
2. $\ell$-merge($B_i, B_j$) = $B_i[0 \ldots |B_i| - \ell - 1] \cdot B_j$ (see the sketch below).
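Both operations are one-liners; a sketch consistent with the definition above, checked against one of the merges of Example 3 (ids 62 and 24):

```python
def ell_merge(Bi: str, Bj: str, ell: int) -> str:
    """ell-merge: drop the length-ell suffix of Bi and concatenate Bj
    (intended for pairs with a suffix-prefix overlap of length ell)."""
    return Bi[:len(Bi) - ell] + Bj

assert ell_merge("bccaab", "aabcbbc", 3) == "bccaabcbbc"
```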
Any solution to FO-SSM with $\ell := k - 1$ and $B := X_1, \ldots, X_N$ implies a solution to the PFS problem, because $|X_i| > k - 1$ for all $i$ (see Lemma 1, P3).
The FO-SSM problem is a variant of the SCS problem. In the SCS problem, we are given a set of strings and we are asked to compute the shortest common superstring of the elements of this set. The SCS problem is known to be NP-complete, even for binary strings [10]. However, if all strings are of length two, the SCS problem admits a linear-time solution [10]. We exploit this crucial detail to obtain a linear-time solution to the FO-SSM problem in Lemma 3. To arrive at this result, we first adapt the linear-time SCS solution of [10] to our needs (see Lemma 2) and then plug this solution into Lemma 3.
**Lemma 2.** Let \( Q \) be a collection of \( q \) strings, each of length two, over an alphabet \( \Sigma = \{1, \ldots, (2q)^{\mathcal{O}(1)}\} \). We can compute a shortest string containing every element of \( Q \) as a distinct substring in \( \mathcal{O}(q) \) time.
**Proof.** We sort the elements of \( Q \) lexicographically in \( \mathcal{O}(q) \) time using radixsort. We also replace every letter in these strings with their *lexicographic rank* from \( \{1, \ldots, 2q\} \) in \( \mathcal{O}(q) \) time using radixsort. In \( \mathcal{O}(q) \) time we construct the de Bruijn multigraph \( G \) of these strings [6]. Within the same time complexity, we find all nodes \( v \) in \( G \) with in-degree, denoted by \( \text{IN}(v) \), smaller than out-degree, denoted by \( \text{OUT}(v) \). We perform the following two steps:
**Step 1:** While there exists a node \( v \) in \( G \) with \( \text{IN}(v) < \text{OUT}(v) \), we start an arbitrary path (with possibly repeated nodes) from \( v \), traverse consecutive edges and delete them. Each time we delete an edge, we update the in- and out-degree of the affected nodes. We stop traversing edges when a node \( v' \) with \( \text{OUT}(v') = 0 \) is reached: whenever \( \text{IN}(v') = \text{OUT}(v') = 0 \), we also delete \( v' \) from \( G \). Then, we add the traversed path \( p = v \ldots v' \) to a set \( P \) of paths. The path can contain the same node \( v \) more than once. If \( G \) is empty we halt. Proceeding this way, there are no two elements \( p_1 \) and \( p_2 \) in \( P \) such that \( p_1 \) starts with \( v \) and \( p_2 \) ends with \( v \); thus this path decomposition is minimal. If \( G \) is not empty at the end, by construction, it consists of only cycles.
**Step 2:** While \( G \) is not empty, we perform the following. If there exists a cycle \( c \) that intersects with any path \( p \) in \( P \) we splice \( c \) with \( p \), update \( p \) with the result of splicing, and delete \( c \) from \( G \). This operation can be efficiently implemented by maintaining an array \( A \) of size \( 2q \) of linked lists over the paths in \( P \): \( A[\alpha] \) stores a list of pointers to all occurrences of letter \( \alpha \) in the elements of \( P \). Thus in constant time per node of \( c \) we check if any such path \( p \) exists in \( P \) and splice the two in this case. If no such path exists in \( P \), we add to \( P \) any of the path-linearizations of the cycle, and delete the cycle from \( G \). After each change to \( P \), we update \( A \) and delete every node \( u \) with \( \text{IN}(u) = \text{OUT}(u) = 0 \) from \( G \).
The correctness of this algorithm follows from the fact that \( P \) is a minimal path decomposition of \( G \). Thus any concatenation of paths in \( P \) represents a shortest string containing all elements in \( Q \) as distinct substrings. \( \square \)
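A Python sketch of this two-step construction (ours; nodes are the integer identifiers, each two-letter string is one edge of the multigraph):

```python
from collections import defaultdict, deque

def two_letter_superstring(Q):
    """Sketch of the Lemma 2 construction: Step 1 extracts a minimal path
    decomposition; Step 2 splices leftover cycles into paths sharing a node.
    The concatenated node sequence spells a shortest output string."""
    out_edges, indeg, outdeg = defaultdict(deque), defaultdict(int), defaultdict(int)
    for a, b in Q:
        out_edges[a].append(b); outdeg[a] += 1; indeg[b] += 1
    nodes = list(set(indeg) | set(outdeg))

    def walk(v):
        # Traverse and consume edges from v until a node with no out-edges.
        path = [v]
        while outdeg[v] > 0:
            u = out_edges[v].popleft()
            outdeg[v] -= 1; indeg[u] -= 1
            path.append(u); v = u
        return path

    paths = []
    for v in nodes:                      # Step 1: start from nodes with IN(v) < OUT(v)
        while indeg[v] < outdeg[v]:
            paths.append(walk(v))
    for v in nodes:                      # Step 2: leftover edges form cycles
        while outdeg[v] > 0:
            cycle = walk(v)              # balanced degrees: the walk returns to v
            for p in paths:
                if v in p:
                    idx = p.index(v)
                    p[idx:idx + 1] = cycle   # splice the cycle into the path
                    break
            else:
                paths.append(cycle)      # no shared node: keep a linearization
    return [x for p in paths for x in p]

# Example 3's B' as input: two_letter_superstring([(2, 4), (6, 2), (4, 5), (1, 3), (2, 4)])
# returns a sequence of optimal length 8, e.g. [1, 3, 2, 4, 5, 6, 2, 4]; the paper's
# 13 · 24 · 6245, i.e. [1, 3, 2, 4, 6, 2, 4, 5], has the same length.
```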
Omitted proofs of Lemmas 3 and 4 can be found in [3].
**Lemma 3.** Let \( B \) be a collection of strings over an alphabet \( \Sigma = \{1, \ldots, ||B||^{\mathcal{O}(1)}\} \). Given an integer \( \ell \), the FO-SSM problem for \( B \) can be solved in \( \mathcal{O}(|B|) \) time.
Thus, PFS-ALGO applies Lemma 3 on \( B := X_1, \ldots, X_N \) with \( \ell := k - 1 \) (recall that \( X_1 \# \ldots \# X_N = X \)). Note that each time the concat operation is performed, it also places the letter \( \# \) in between the two strings.
**Lemma 4.** Let \( W \) be a string of length \( n \) over an alphabet \( \Sigma \). Given \( k < n \) and array \( C \), PFS-ALGO constructs a shortest string \( Y \) with \( C1, \Pi1, \) and \( P2-P4 \).
**Theorem 2.** Let \( W \) be a string of length \( n \) over \( \Sigma = \{1, \ldots, n^{\mathcal{O}(1)}\} \). Given \( k < n \) and \( S \), PFS-ALGO solves Problem 2 in the optimal \( \mathcal{O}(n + |Y|) \) time.
Proof. We compute the \(O(n)\)-sized representation of string \(X\) with respect to \(W\) described in the proof of Theorem 1. This can be done in \(O(n)\) time. If \(X \in \Sigma^*\), then we construct and return \(Y := X\) in time \(O(|Y|)\) from the representation. If \(X \in (\Sigma \cup \{\#\})^*\), implying \(|Y| \leq |X|\), we compute the LCP data structure of string \(W\) in \(O(n)\) time [9]; and implement Lemma 3 in \(O(n)\) time without reading string \(X\) explicitly: we rather rename \(X_1, \ldots, X_N\) to a collection of two-letter strings by employing the LCP information of \(W\) directly. We then construct and report \(Y\) in time \(O(|Y|)\). Correctness follows directly from Lemma 4. \(\square\)
5 The MCSR Problem and MCSR-ALGO
The strings \(X\) and \(Y\), constructed by TFS-ALGO and PFS-ALGO, respectively, may contain the separator \#, which reveals information about the location of the sensitive patterns in \(W\). Specifically, a malicious data recipient can go to the position of a \# in \(X\) and “undo” Rule R1 that has been applied by TFS-ALGO, removing \# and the \(k - 1\) letters after \# from \(X\). The result will be an occurrence of the sensitive pattern. For example, applying this process to the first \# in \(X\) shown in Fig. 1, results in recovering the sensitive pattern \(abab\). A similar attack is possible on the string \(Y\) produced by PFS-ALGO, although it is hampered by the fact that substrings within two consecutive \#s in \(X\) often swap places in \(Y\).
To address this issue, we seek to construct a new string \(Z\), in which \#s are either deleted or replaced by letters from \(\Sigma\). To preserve privacy, we require separator replacements not to reinstate sensitive patterns. To preserve data utility, we favor separator replacements that have a small cost in terms of occurrences of \(\tau\)-ghosts (patterns with frequency less than \(\tau\) in \(W\) and at least \(\tau\) in \(Z\)) and incur a bounded level of distortion in \(Z\), as defined below. This is the MCSR problem, a restricted version of which is presented in Problem 3. The restricted version is referred to as MCSR\(_{k=1}\) and differs from MCSR in that it uses \(k = 1\) for the pattern length instead of an arbitrary value \(k > 0\). MCSR\(_{k=1}\) is presented next for simplicity and because it is used in the proof of Lemma 5 (see [3] for the proof). Lemma 5 implies Theorem 3.
**Problem 3 (MCSR\(_{k=1}\)).** Given a string \(Y\) over an alphabet \(\Sigma \cup \{\#\}\) with \(\delta > 0\) occurrences of letter \#, and parameters \(\tau\) and \(\theta\), construct a new string \(Z\) by substituting the \(\delta\) occurrences of \# in \(Y\) with letters from \(\Sigma\), such that:
(I) $\sum_{i \,:\, Y[i] = \#,\; \mathrm{Freq}_Y(Z[i]) < \tau,\; \mathrm{Freq}_Z(Z[i]) \geq \tau} \mathrm{Ghost}(i, Z[i])$ is minimum, and

(II) $\sum_{i \,:\, Y[i] = \#} \mathrm{Sub}(i, Z[i]) \leq \theta$.
The cost of \(\tau\)-ghosts is captured by a function \(Ghost\). This function assigns a cost to an occurrence of a \(\tau\)-ghost, which is caused by a separator replacement at position \(i\), and is specified based on domain knowledge. For example, with a cost equal to 1 for each gained occurrence of each \(\tau\)-ghost, we penalize more heavily a \(\tau\)-ghost with frequency much below \(\tau\) in \(Y\) and the penalty increases with the number of gained occurrences. Moreover, we may want to penalize positions
towards the end of a temporally ordered string, to avoid spurious patterns that would be deemed important in applications based on time-decaying models [8].
The replacement distortion is captured by a function $Sub$ which assigns a weight to a letter that could replace a $\#$ and is specified based on domain knowledge. The maximum allowable replacement distortion is $\theta$. Small weights favor the replacement of separators with desirable letters (e.g., letters that reinstate non-sensitive frequent patterns) and letters that reinstate sensitive patterns are assigned a weight larger than $\theta$ that prohibits them from replacing a $\#$. Similarly, weights larger than $\theta$ are assigned to letters which would lead to implausible patterns [13] if they replaced $\#$.
**Lemma 5.** The $MCSR_{k=1}$ problem is NP-hard.
**Theorem 3.** The MCSR problem is NP-hard.
**MCSR-ALGO.** Our MCSR-ALGO is a non-trivial heuristic that exploits the connection of the MCSR and MCK [18] problems and works by:
(I) Constructing the set of all candidate $\tau$-ghost patterns (i.e., length-$k$ strings over $\Sigma$ with frequency below $\tau$ in $Y$ that can have frequency at least $\tau$ in $Z$).
(II) Creating an instance of MCK from an instance of MCSR. For this, we map the $i$th occurrence of $\#$ to a class $C_i$ in MCK and each possible replacement of the occurrence with a letter $j$ to a different item in $C_i$. Specifically, we consider all possible replacements with letters in $\Sigma$ and also a replacement with the empty string, which models deleting (instead of replacing) the $i$th occurrence of $\#$. In addition, we set the costs and weights that are input to MCK as follows. The cost for replacing the $i$th occurrence of $\#$ with the letter $j$ is set to the sum of the Ghost function for all candidate $\tau$-ghost patterns when the $i$th occurrence of $\#$ is replaced by $j$. That is, we make the worst-case assumption that the replacement forces all candidate $\tau$-ghosts to become $\tau$-ghosts in $Z$. The weight for replacing the $i$th occurrence of $\#$ with letter $j$ is set to $Sub(i, j)$.
(III) Solving the instance of MCK and translating the solution back to a (possibly suboptimal) solution of the MCSR problem. For this, we replace the $i$th occurrence of $\#$ with the letter corresponding to the element chosen by the MCK algorithm from class $C_i$, and similarly for each other occurrence of $\#$. If the instance has no solution (i.e., no possible replacement can hide the sensitive patterns), MCSR-ALGO reports that $Z$ cannot be constructed and terminates.
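A compact sketch of steps (II) and (III) follows. The functions `ghost_cost` and `sub_weight` are assumed, domain-specific inputs (the worst-case $\tau$-ghost cost and the $Sub$ weight of placing letter $a$ at position $i$), and the pseudo-polynomial DP below merely stands in for the minimal MCK algorithm of [18]:

```python
def mcsr_via_mck(Y, sigma, ghost_cost, sub_weight, theta):
    """Map each '#' of Y to an MCK class whose items are the candidate
    replacement letters ('' models deletion); solve MCK by a DP over the
    weight budget; translate the choices back into the output string Z."""
    classes = [i for i, y in enumerate(Y) if y == "#"]
    dp = {0.0: (0.0, [])}                 # used weight -> (total cost, letters chosen)
    for i in classes:
        ndp = {}
        for w, (cost, picks) in dp.items():
            for a in list(sigma) + [""]:  # one item per candidate letter, plus deletion
                wt, c = sub_weight(i, a), ghost_cost(i, a)
                if w + wt > theta:        # infeasible: exceeds distortion budget
                    continue
                key = w + wt
                if key not in ndp or ndp[key][0] > cost + c:
                    ndp[key] = (cost + c, picks + [a])
        dp = ndp
    if not dp:
        return None                       # no feasible replacement: Z cannot be built
    _, picks = min(dp.values(), key=lambda t: t[0])
    out = list(Y)
    for i, a in zip(classes, picks):
        out[i] = a                        # '' simply deletes the separator
    return "".join(out)
```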
Lemma 6 below states the running time of MCSR-ALGO (see [3] for the proof on an efficient implementation of this algorithm).
**Lemma 6.** MCSR-ALGO runs in $O(|Y| + k\delta\sigma + T(\delta, \sigma))$ time, where $T(\delta, \sigma)$ is the running time of the MCK algorithm for $\delta$ classes with $\sigma + 1$ elements each.
## 6 Experimental Evaluation
We evaluate our approach, referred to as TPM, in terms of data utility and efficiency. Given a string $W$ over $\Sigma$, TPM sanitizes $W$ by applying TFS-ALGO,
PFS-ALGO, and then MCSR-ALGO, which uses the $O(\delta \sigma \theta)$-time algorithm of [18] for solving the MCK instances. The final output is a string $Z$ over $\Sigma$.
**Experimental Setup and Data.** We do not compare TPM against existing methods, because they are not alternatives to our approach (see Section 7). Instead, we compare against a greedy baseline referred to as BA.
BA initializes its output string $Z_{BA}$ to $W$ and then considers each sensitive pattern $R$ in $Z_{BA}$, from left to right. For each $R$, it replaces the letter $r$ of $R$ that has the largest frequency in $Z_{BA}$ with another letter $r'$ that is not contained in $R$ and has the smallest frequency in $Z_{BA}$, breaking all ties arbitrarily. If no such $r'$ exists, $r$ is replaced by # to ensure that a solution is produced (even if it may reveal the location of a sensitive pattern). Each replacement removes the occurrence of $R$ and aims to prevent $\tau$-ghost occurrences by selecting an $r'$ that will not substantially increase the frequency of patterns overlapping with $R$.
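A sketch of BA under the description above (ours; tie-breaking and the treatment of letters whose frequency has dropped to zero are arbitrary choices):

```python
from collections import Counter

def ba(W: str, sensitive: set[str], k: int) -> str:
    """Greedy baseline: scan sensitive occurrences left to right; replace the
    most frequent letter of each occurrence by the least frequent letter not
    contained in it, falling back to '#' when no such letter exists."""
    Z = list(W)
    for i in range(len(Z) - k + 1):
        R = "".join(Z[i:i + k])
        if R in sensitive:
            counts = Counter(Z)
            pos = max(range(i, i + k), key=lambda p: counts[Z[p]])  # most frequent letter of R
            cands = [a for a in counts if a not in R and a != "#"]
            Z[pos] = min(cands, key=lambda a: counts[a]) if cands else "#"
    return "".join(Z)
```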
We considered the following publicly available datasets used in [1,11,13,15]: Oldenburg (OLD), Trucks (TRU), MSNBC (MSN), the complete genome of *Escherichia coli* (DNA), and synthetic data (uniformly random strings, the largest of which is referred to as SYN). See Table 1 for the characteristics of these datasets and the parameter values used in experiments, unless stated otherwise.
| Dataset | Data domain | Length $n$ | Alphabet size $|\Sigma|$ | # sensitive patterns | # sensitive positions $|S|$ | Pattern length $k$ |
|---------|-------------|------------|--------------------------|----------------------|-----------------------------|--------------------|
| OLD | Movement | 85,563 | 100 | [30, 240] (60) | [600, 6103] | [3, 7] (4) |
| TRU | Transportation | 5,763 | 100 | [30, 120] (10) | [324, 2410] | [2, 5] (4) |
| MSN | Web | 4,698,764 | 17 | [30, 120] (60) | [6030, 320480] | [3, 8] (4) |
| DNA | Genomic | 4,641,652 | 4 | [25, 500] (100) | [163, 3488] | [5, 15] (13) |
| SYN | Synthetic | 20,000,000 | 10 | [10, 1000] (1000) | [10724, 20171] | [3, 6] (6) |

Table 1: Characteristics of datasets and parameter values used in the experiments (ranges in brackets; default values in parentheses).
The sensitive patterns were selected randomly among the frequent length-$k$ substrings at minimum support $\tau$ following [11,13,15]. We used the fairly low values $\tau = 10$, $\tau = 20$, $\tau = 200$, and $\tau = 20$ for TRU, OLD, MSN, and DNA, respectively, to have a wider selection of sensitive patterns. We also used a uniform cost of 1 for every occurrence of each $\tau$-ghost, a weight of 1 (resp., $\infty$) for each letter replacement that does not (resp., does) create a sensitive pattern, and we further set $\theta = \delta$. This setup treats all candidate $\tau$-ghost patterns and all candidate letters for replacement uniformly, to facilitate a fair comparison with BA which cannot distinguish between $\tau$-ghost candidates or favor specific letters.
To capture the utility of sanitized data, we used the *(frequency) distortion* measure $\sum_U (\text{Freq}_W(U) - \text{Freq}_Z(U))^2$, where $U \in \Sigma^k$ is a non-sensitive pattern. The distortion measure quantifies changes in the frequency of non-sensitive patterns with low values suggesting that $Z$ remains useful for tasks based on pattern frequency (*e.g.*, identifying motifs corresponding to functional or conserved DNA [19]). We also measured the number of $\tau$-ghost and $\tau$-lost patterns in $Z$.
following [11, 13, 15], where a pattern $U$ is $\tau$-lost in $Z$ if and only if $\text{Freq}_W(U) \geq \tau$ but $\text{Freq}_Z(U) < \tau$. That is, $\tau$-lost patterns model knowledge that can no longer be mined from $Z$ but could be mined from $W$, whereas $\tau$-ghost patterns model knowledge that can be mined from $Z$ but not from $W$. A small number of $\tau$-lost/ghost patterns suggests that frequent pattern mining can be accurately performed on $Z$ [11, 13, 15]. Unlike BA, by design TPM does not incur any $\tau$-lost pattern, as TFS-ALGO and PFS-ALGO preserve the frequencies of non-sensitive patterns, and MCSR-ALGO can only increase pattern frequencies.
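Both measures are straightforward to compute; a minimal sketch:

```python
from collections import Counter

def utility_measures(W: str, Z: str, k: int, tau: int, sensitive: set[str]):
    """Frequency distortion and tau-lost / tau-ghost counts of Section 6."""
    fw = Counter(W[i:i + k] for i in range(len(W) - k + 1))
    fz = Counter(Z[i:i + k] for i in range(len(Z) - k + 1))
    universe = (set(fw) | set(fz)) - sensitive          # non-sensitive patterns
    distortion = sum((fw[u] - fz[u]) ** 2 for u in universe)
    lost = sum(1 for u in universe if fw[u] >= tau > fz[u])
    ghost = sum(1 for u in universe if fw[u] < tau <= fz[u])
    return distortion, lost, ghost
```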
All experiments ran on an Intel Xeon E5-2640 at 2.66GHz with 16GB RAM. Our source code, written in C++, is available at https://bitbucket.org/stringsanitization. The results have been averaged over 10 runs.
Fig. 2: Distortion vs. number of sensitive patterns and their total number $|S|$ of occurrences in $W$ (first two lines on the $X$ axis).

Fig. 3: Distortion vs. length of sensitive patterns $k$ (and $|S|$).
**Data Utility.** We first demonstrate that TPM incurs very low distortion, which implies high utility for tasks based on the frequency of patterns (e.g., [19]). Fig. 2 shows that, for varying number of sensitive patterns, TPM incurred on average 18.4 (and up to 95) times lower distortion than BA over all experiments. Also, Fig. 2 shows that TPM remains effective even in challenging settings, with many
sensitive patterns (e.g., the last point in Fig. 2b where about 42% of the positions in $W$ are sensitive). Fig. 3 shows that, for varying $k$, TPM caused on average 7.6 (and up to 14) times lower distortion than BA over all experiments.
Next, we demonstrate that TPM permits accurate frequent pattern mining: Fig. 4 shows that TPM led to no $\tau$-lost or $\tau$-ghost patterns for the TRU and MSN datasets. This implies no utility loss for mining frequent length-$k$ substrings with threshold $\tau$. In all other cases, the number of $\tau$-ghosts was on average 6 (and up to 12) times smaller than the total number of $\tau$-lost and $\tau$-ghost patterns for BA. BA performed poorly (e.g., up to 44% of frequent patterns became $\tau$-lost for TRU and 27% for DNA). Fig. 5 shows that, for varying $k$, TPM led to on average 5.8 (and up to 19) times fewer $\tau$-lost/ghost patterns than BA. BA performed poorly (e.g., up to 98% of frequent patterns became $\tau$-lost for DNA).
Fig. 4: Number of $\tau$-lost and $\tau$-ghost patterns vs. number of sensitive patterns (and $|S|$).

Fig. 5: Number of $\tau$-lost and $\tau$-ghost patterns vs. length of sensitive patterns $k$ (and $|S|$).
We also demonstrate that PFS-ALGO reduces the length of the output string $X$ of TFS-ALGO substantially, creating a string $Y$ that contains less
redundant information and allows for more efficient analysis. Fig. 6a shows the length of $X$ and of $Y$ and their difference for $k = 5$. $Y$ was much shorter than $X$ and its length decreased with the number of sensitive patterns, since more substrings had a suffix-prefix overlap of length $k - 1 = 4$ and were removed (see Section 4). Interestingly, the length of $Y$ was close to that of $W$ (the string before sanitization). A larger $k$ led to less substantial length reduction as shown in Fig. 6b (though still a few thousand letters were removed), since it is less likely for long substrings of sensitive patterns to have an overlap and be removed.
Fig. 6: Length of $X$ and $Y$ (output of TFS-ALGO and PFS-ALGO, resp.) for varying: (a) number of sensitive patterns (and $|S|$) on DNA; (b) length of sensitive patterns $k$ (and $|S|$) on DNA. On the top of each pair of bars we plot $|X| - |Y|$. Runtime on synthetic data for varying: (c) length $n$ of string (substrings of SYN); (d) length $k$ of sensitive patterns (SYN). Note that $|Y| = |Z|$.
**Efficiency.** We finally measured the runtime of TPM using prefixes of the synthetic string SYN whose length $n$ is 20 million letters. Fig. 6c (resp., Fig. 6d) shows that TPM scaled linearly with $n$ (resp., $k$), as predicted by our analysis in Section 5 (TPM takes $\mathcal{O}(n + |Y| + k\delta\sigma + \delta\sigma\theta) = \mathcal{O}(kn + k\delta\sigma + \delta\sigma\theta)$ time, since the algorithm of [18] was used for MCK instances). In addition, TPM is efficient, with a runtime similar to that of BA and less than 40 seconds for SYN.
## 7 Related Work
Data sanitization (\textit{a.k.a.} knowledge hiding) aims at concealing patterns modeling confidential knowledge by limiting their frequency, so that they are not easily mined from the data. Existing methods are applied to: (I) a \textit{collection} of set-valued data (transactions) [21] or spatiotemporal data (trajectories) [1]; (II) a \textit{collection} of sequences [11,13]; or (III) a \textit{single} sequence [4,15,22]. Yet, none of these methods follows our CSD setting: Methods in category I are not applicable to string data, and those in categories II and III do not have guarantees on privacy-related constraints [22] or on utility-related properties [11,13,4,15]. Specifically, unlike our approach, [22] cannot guarantee that all sensitive patterns are concealed (constraint \textbf{C1}), while [11,13,4,15] do not guarantee the satisfaction of utility properties (\textit{e.g.}, \textbf{P1} and \textbf{P2}).
Anonymization aims to prevent the disclosure of individuals’ identity and/or information that individuals are not willing to be associated with [2]. Anonymization works (e.g., [2,5,7]) are thus not alternatives to our work (see [3] for details).
8 Conclusion
In this paper, we introduced the Combinatorial String Dissemination model. The focus of this model is on guaranteeing privacy-utility trade-offs (e.g., C1 vs. Π1 and P2). We defined a problem (TFS) which seeks to produce the shortest string that preserves the order of appearance and the frequency of all non-sensitive patterns; and a variant (PFS) that preserves a partial order and the frequency of the non-sensitive patterns but produces a shorter string. We developed two time-optimal algorithms, TFS-ALGO and PFS-ALGO, for the problem and its variant, respectively. We also developed MCSR-ALGO, a heuristic that prevents the disclosure of the location of sensitive patterns from the outputs of TFS-ALGO and PFS-ALGO. Our experiments show that sanitizing a string by TFS-ALGO, PFS-ALGO and then MCSR-ALGO is effective and efficient.
Acknowledgments. HC is supported by a CSC scholarship. GR and NP are partially supported by MIUR-SIR project CMACBioSeq grant n. RBSI146R5L. We acknowledge the use of the Rosalind HPC cluster hosted by King’s College London.
References
1. Abul, O., Bonchi, F., Giannotti, F.: Hiding sequential and spatiotemporal patterns. TKDE 22(12), 1709–1723 (2010)
2. Aggarwal, C.C., Yu, P.S.: A framework for condensation-based anonymization of string data. DMKD 16(3), 251–275 (2008)
3. Bernardini, G., Chen, H., Conte, A., Grossi, R., Loukides, G., Pisanti, N., Pissis, S.P., Rosone, G.: String sanitization: A combinatorial approach. CoRR abs/1906.11030 (2019)
4. Bonomi, L., Fan, L., Jin, H.: An information-theoretic approach to individual sequential data sanitization. In: WSDM. pp. 337–346 (2016)
5. Bonomi, L., Xiong, L.: A two-phase algorithm for mining sequential patterns with differential privacy. In: CIKM. pp. 269–278 (2013)
6. Cazaux, B., Lecroq, T., Rivals, E.: Linking indexing data structures to de Bruijn graphs: Construction and update. J. Comput. Syst. Sci. (2016)
7. Chen, R., Acs, G., Castelluccia, C.: Differentially private sequential data publication via variable-length n-grams. In: CCS. pp. 638–649 (2012)
8. Cormode, G., Korn, F., Tirthapura, S.: Exponentially decayed aggregates on data streams. In: ICDE. pp. 1379–1381 (2008)
9. Crochemore, M., Hancart, C., Lecroq, T.: Algorithms on strings. Cambridge University Press (2007)
10. Gallant, J., Maier, D., Storer, J.A.: On finding minimal length superstrings. J. Comput. Syst. Sci. 20(1), 50–58 (1980)
11. Gkoulalas-Divanis, A., Loukides, G.: Revisiting sequential pattern hiding to enhance utility. In: KDD. pp. 1316–1324 (2011)
12. Grossi, R., Iliopoulos, C.S., Mercas, R., Pisanti, N., Pissis, S.P., Retha, A., Vayani, F.: Circular sequence comparison: algorithms and applications. AMB 11, 12 (2016)
13. Gwadera, R., Gkoulalas-Divanis, A., Loukides, G.: Permutation-based sequential pattern hiding. In: ICDM. pp. 241–250 (2013)
14. Liu, A., Zheng, K., Li, L., Liu, G., Zhao, L., Zhou, X.: Efficient secure similarity computation on encrypted trajectory data. In: ICDE. pp. 66–77 (2015)
15. Loukides, G., Gwadera, R.: Optimal event sequence sanitization. In: SDM. pp. 775–783 (2015)
16. Malin, B., Sweeney, L.: Determining the identifiability of DNA database entries. In: AMIA. pp. 537–541 (2000)
17. Narayanan, A., Shmatikov, V.: Robust de-anonymization of large sparse datasets. In: S&P. pp. 111–125 (2008)
18. Pisinger, D.: A minimal algorithm for the multiple-choice knapsack problem. Eur J Oper Res 83(2), 394–410 (1995)
19. Pissis, S.P.: MoTeX-II: structured MoTif eXtraction from large-scale datasets. BMC Bioinformatics 15, 235 (2014)
20. Theodorakopoulos, G., Shokri, R., Troncoso, C., Hubaux, J., Boudec, J.L.: Prolonging the hide-and-seek game: Optimal trajectory privacy for location-based services. In: WPES. pp. 73–82 (2014)
21. Verykios, V.S., Elmagarmid, A.K., Bertino, E., Saygin, Y., Dasseni, E.: Association rule hiding. TKDE 16(4), 434–447 (2004)
22. Wang, D., He, Y., Rundensteiner, E., Naughton, J.F.: Utility-maximizing event stream suppression. In: SIGMOD. pp. 589–600 (2013)
April 19, 2016
Office of the High Commissioner for Human Rights
Geneva, Switzerland
Dear Office of the High Commissioner for Human Rights:
The United States appreciates the opportunity to provide the Office of the High Commissioner for Human Rights with a brief overview of our ongoing work on respecting, protecting, and promoting human rights while preventing and countering violent extremism. Please find attached the U.S. response to the OHCHR questionnaire on best practices for PVE/CVE.
As President Obama said at the White House Summit on Countering Violent Extremism on February 19, 2015:
“When people are oppressed, and human rights are denied -- particularly along sectarian lines or ethnic lines -- when dissent is silenced, it feeds violent extremism. It creates an environment that is ripe for terrorists to exploit. When peaceful, democratic change is impossible, it feeds into the terrorist propaganda that violence is the only answer available.
And so we must recognize that lasting stability and real security require democracy. That means free elections where people can choose their own future, and independent judiciaries that uphold the rule of law, and police and security forces that respect human rights, and free speech and freedom for civil society groups. And it means freedom of religion -- because when people are free to practice their faith as they choose, it helps hold diverse societies together.”
Sincerely,
Keith M. Harper, Ambassador
U.S. Representative to the UN Human Rights Council
Attachment –
U.S. National Strategy to Empower Local Partners to Prevent Violent Extremism and its Strategic Implementation Plan (SIP)
The U.S. National Strategy to Empower Local Partners to Prevent Violent Extremism and its Strategic Implementation Plan (SIP):
In August 2011, the United States released its National Strategy to Empower Local Partners to Prevent Violent Extremism and its Strategic Implementation Plan. Respect for the rule of law and for human rights and fundamental freedoms, including freedom of peaceful assembly, freedom of religion, and freedom of expression, including for members of the media, is central to our National Strategy for Empowering Local Partners and the execution of the SIP.
https://www.whitehouse.gov/sites/default/files/empowering_local_partners.pdf
https://www.whitehouse.gov/sites/default/files/sip-final.pdf
The guiding principles include the following:
- We must do everything in our power to protect the people from violent extremism while protecting everyone’s human rights and fundamental freedoms.
- We must build partnerships and provide support to communities based on mutual trust, respect, and understanding.
- We must use a wide range of good governance programs—including those that promote immigrant integration and civic engagement, protect civil rights, and provide social services—that may help prevent radicalization that leads to violence.
- We must support local capabilities and programs to address problems of national concern.
- Government officials and the public should not stigmatize or blame communities because of the actions of a handful of individuals.
- Strong religious beliefs should never be confused with violent extremism. Though we will not tolerate illegal activities, opposition to government policy is neither illegal nor unpatriotic and does not make someone a violent extremist.
These principles not only reflect our values, but many are supported by empirical research on the drivers of violent extremism. For example, in analytical work conducted within the U.S. Department of State we have observed the following trends on human rights issues:
- State-sponsored violence and abuse (as measured by the Political Terror Index) is highly correlated with the emergence of new violent extremist organizations (VEOs). A review of terrorism data since 1995 found that countries with above-average levels of state-sponsored violence double their risk of a violent extremist group emerging. Countries with the highest levels of state-sponsored violence quadruple their risk of a violent extremist group emerging.
- Low levels of voice and accountability – a measurement of political rights and civil liberties collected by Freedom House – are significant predictors of increased levels of state-sponsored violence and abuse, which is associated with both the emergence and expansion of violent extremism.
- Perceptions of government discrimination against ethnic or religious groups may be associated with violent extremist behavior (analysis from surveys administered by Afrobarometer). This is supported by a number of studies that indicate that perceptions of injustice and the belief that one’s religion or identity is under threat can drive violent extremism.
**Additional Guidance and Good Practices:**
The United States has worked closely with international partners and expert bodies such as the Global Counterterrorism Forum (GCTF) and the International Institute for Justice and the Rule of Law (IIJ) in Malta to develop non-binding good practices for preventing and countering terrorism that are grounded in respect for international obligations and commitments and for the rule of law.
Examples of such good practices can be found in the following guidance documents. We have highlighted a number of specific practices we believe are particularly applicable to promoting and protecting human rights and preventing and countering violent extremism:
**GCTF Abu Dhabi Memorandum on Good Practices for Education and Countering Violent Extremism**
- **Good Practice 7:** *Increase and expand on curricula that emphasize civic education, civic responsibility and human values.*
§ “Civic education provides youth with a framework for a collective civic identity and therefore fosters tolerance and the willingness to negotiate and compromise. To be most effective, civic education and its related values [should] be relevant to the local context and culture. It is also important to consider how best to highlight the value of civic education in light of a greater demand for math, science, engineering, and medicine rather than social sciences and humanities.”
**GCTF Abu Dhabi Plan of Action for Education and Countering Violent Extremism**
- United States (NGO-World Organization for Resource Development and Education (WORDE)): Cultural Competency Training:
§ “This program educates policymakers and law enforcement officials about Muslim communities in the United States to enhance shared values and raise awareness of cultural and customary sensitivities to Muslim communities.”
- United States (Foundation-The Sanneh Foundation) Summer Camp:
§ “Originally aimed at youth in Minneapolis, Minnesota, this sports camp has reached immigrant youth from the Somali community and helped to develop opportunities for leadership, teamwork, collaboration, and social inclusion in areas where immigrants have difficulty assimilating into U.S. culture and have sometimes resorted to acts of violent extremism.”
**GCTF Ankara Memorandum on Good Practices for a Multi-Sectoral Approach to Countering Violent Extremism**
- **Good Practice 7:** States, in cooperation with both governmental and nongovernmental actors, are encouraged to consider comprehensive action in preventing and countering violent extremism. Although the role of the government is crucial, a strategy that involves a “whole-of-society” approach in addition to a “whole-of-government” one can be effective.
§ “Effectively addressing the conditions conducive to violent extremism requires a broader range of actors than security agencies. Different governmental agencies are responsible for ensuring respect for human rights and fundamental freedoms, creating new job opportunities, sustaining community stability, regulating migration flow, and increasing the level of resilience to radicalization and recruitment into violent extremist groups. States and their structures would benefit from establishing or intensifying information work with the public in the interest of more effectively explaining the effort undertaken by state authorities to counter violent extremism, as well as all detrimental consequences related to violent extremism. Government-initiated efforts, however, may not be sufficient for a successful CVE program. A range of actors, including civil society, (e.g., international and local partners, NGOs, religious organizations, universities, and communities) might be encouraged to take part in these efforts and this could be addressed within the appropriate legal and/or policy framework. States might benefit from positive voices emanating from different groups in any given community, in order to counter obstacles a CVE program might face in the implementation process.”
- **Good Practice 11:** States can help civil society in CVE activities.
§ “Many civil society groups function in different fields (e.g., human rights, social services, cultural activities) and often might not be aware that these efforts also contribute to countering violent extremism. They might not be aware of the fact that they can play a vital role in CVE. They may also lack sufficient resources. In other respects, there may be robust NGOs that may not possess CVE-specific expertise. State actors can support civil society to increase their awareness and capacity in CVE.”
- **Good Practice 12:** States should promote tolerance and facilitate dialogue in society to build communities which appreciate their differences and understand each other.
§ “It is important to identify the ways to stimulate inter-cultural, inter-religious, and inter-ethnic dialogue. An exchange of views might enable one to understand how others see the world. Creating dialogue channels serves as a first step for communities to get to know one another. Once different communities start to socialize, they might acknowledge the fact that there are communalities that they can use as a common ground for further dialogue. States might also work to promote democratic values, human rights,
pluralism, and freedom through education and outreach programs. Religious communities can work together to promote tolerance and to stem support for violent extremism. As a part of their efforts, they might create exchange programs of young theologians and might offer meetings for students to promote inter-religious dialogue and tolerance. Educational projects to raise awareness of different forms of prejudice and hostility might be implemented to prevent intolerance and discrimination.”
**GCTF Doha Plan of Action for Community-Oriented Policing in a Countering Violent Extremism Context**
**GCTF Good Practices on Community Engagement and Community-Oriented Policing as Tools to Counter Violent Extremism**
- **Good Practice 10:** Tailor community engagement and community-oriented policing trainings to address the issues and dynamics of the local community and to instill awareness of potential indicators and behaviors.
§ “To maintain the trust and respect integral to community engagement and community-oriented policing, practitioners should be trained properly on the parameters of engagement and how it relates to the local contexts where they are engaging. For example training manuals on community-oriented policing as well as smaller “pocket guides” aimed at informing front line officers on potential behaviors and indicators to raise awareness of violent extremist threats versus behavioral norms could be distributed to local police. Furthermore, front line law enforcement should be trained on community cultural, societal, and religious behavior and be able to distinguish it from potential criminal and violent extremist indicators and behaviors. Training methods and materials should be continually updated and revised to keep up with the evolution of threats and with conclusions/good practices developed by members of GCTF and other relevant entities.”
**GCTF/OSCE Good Practices on Women and Countering Violent Extremism**
- **Introduction:** “It is also important to recognize the larger framework of human rights in which this discussion takes place. Practical integration of women and girls into all aspects of CVE programming can only occur in the context of broader guarantees of the human rights of women and girls in particular; these include addressing the causes of gender inequality such as the subordination of women and discrimination on the basis of sex, gender, age, and other factors. The promotion and protection of women’s rights and gender equality needs to underlie CVE programs and strategies. The human rights of women and girls, as with all human rights, should be promoted and protected at all times and not just as a means for CVE.”
- **Good Practice 4:** Protect the human rights of women and girls, including their equality, non-discrimination, and equal participation, and ensure that CVE efforts do not stereotype or instrumentalize women and girls.
§ “The promotion and protection of women’s human rights are integral to efforts to include women and girls and mainstream gender in CVE. Women’s human rights concerns often underlie the incentives for, as well as the difficulties in, their engagement in CVE. For example, the victimization of women and girls by terrorists may motivate them to participate in CVE, but gender-based discrimination and stereotyping can hinder their full and equal engagement. These barriers need to be addressed to enable women and girls to safely and productively contribute to CVE efforts. This must happen in a nuanced way, as there is significant variation in women’s rights and gender equity. In certain environments, women and girls risk being instrumentalized and their rights compromised for counterterrorism and CVE objectives. The use, real or perceived, of government relationships with women and girls for security purposes (e.g., for gathering intelligence) can generate distrust and become counterproductive to CVE. A too obvious association of women and girls’ human rights with a CVE agenda can also further expose women and girls as targets for violent extremism.”
- **Good Practice 5:** *Prevent and address the direct and indirect impacts of violent extremism and terrorism on women and girls.*
§ “Violent extremist and terrorist groups often target women and girls for gender-based violence, including abductions, forced marriages, sexual violence, forced pregnancies, attacks on women human rights defenders and leaders, attacks on girls’ access to education, and restrictions on their freedom of movement. Preventing these attacks, providing protection for women and girls who are most at risk, rejecting societal acceptance, prosecuting perpetrators, and developing assistance including livelihood opportunities for women survivors are essential. These efforts will not only provide critical improvements in human security but also foster social cohesion and resilience in communities affected by violent extremism and terrorist violence. Addressing these impacts also enables women and girls to safely and productively engage in CVE activities.”
- **Good Practice 7:** *Include gender-sensitive monitoring and evaluation in CVE policy and programs to enhance effectiveness.*
§ “The effectiveness of all CVE efforts will be enhanced by integrating a gender perspective and including women and girls in monitoring and evaluation mechanisms. Sex-disaggregated data can provide a nuanced picture of the outputs and differential impact of CVE activities, to evaluate positive gains in areas such as skills, awareness, capacity, social cohesion, and resilience, and also to ensure that CVE does not contribute to an increase in human rights violations, such as gender-based violence by all parties. Sex-specific indicators and baseline information should
therefore be incorporated in the assessment of both general CVE initiatives and those that specifically seek to advance women and girls’ roles in CVE. Quantitative indicators should be used for instance to track the proportion of men and women among target groups of CVE activities, as well as the numbers of women and girls recruited into violent extremism. Qualitative monitoring, such as through polls, interviews, community roundtables, and focus groups, before, during, and after a given CVE initiative, should explicitly include women and girls. Women and women’s groups should be included in the independent evaluation of all CVE efforts, particularly those that seek to advance the roles of women and girls in CVE. Monitoring and evaluation of CVE programs focused on women and girls should also take into account the particular context and operational constraints in engaging with them. Developing and sustaining engagement with women and girls to counter violent extremism will likely need to be a long-term process; realistic metrics should be devised to measure effectiveness at each stage of that process. This might include improved performance management systems and evaluations, which could include gender-sensitive indicators and sex-disaggregated data.”
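One hedged way to picture the quantitative indicator described above, the proportion of men and women among target groups of CVE activities, is sketched below; the record format and field names are invented for illustration and are not drawn from the GCTF document:

```python
# Minimal sketch of a sex-disaggregated participation indicator.
# The record structure and field names below are invented for illustration.
from collections import Counter

participants = [
    {"activity": "community_dialogue", "sex": "F"},
    {"activity": "community_dialogue", "sex": "M"},
    {"activity": "community_dialogue", "sex": "F"},
    {"activity": "youth_training", "sex": "M"},
]

def sex_disaggregated_shares(records):
    """Proportion of each sex among participants, per activity."""
    by_activity = {}
    for r in records:
        by_activity.setdefault(r["activity"], Counter())[r["sex"]] += 1
    return {
        activity: {sex: n / sum(counts.values()) for sex, n in counts.items()}
        for activity, counts in by_activity.items()
    }

print(sex_disaggregated_shares(participants))
# {'community_dialogue': {'F': 0.67, 'M': 0.33}, 'youth_training': {'M': 1.0}}  (rounded)
```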
- **Good Practice 15:** *Engage and empower women in civil society and civil society actors working in the field of women’s and human rights, especially women’s organizations, as critical CVE stakeholders.*
§ “Engaging women in civil society and civil society actors who are working to advance women’s rights, particularly women’s organizations, is vital for CVE efforts to be effective and sustainable. These actors already contribute in many ways to CVE through activities which build resilience to violence and intolerance, though these activities are not necessarily oriented towards CVE or described as such. These activities include conflict prevention, peacebuilding, economic growth, security sector reform, and the promotion of human rights and the rule of law, other activities to advance the women, peace, and security agenda, as well as service provision, particularly in areas where the government lacks presence. These activities should be encouraged to continue. Women in civil society and civil society actors working with women are well positioned to act as a bridge to women in local communities, having better access to reach, empower, and build the capacity of women and girls for CVE, especially those that may be isolated or marginalized. CVE efforts should empower these actors to use this access. Other important CVE contributions these stakeholders should be encouraged to make include: (1) participating in community-oriented policing initiatives; (2) developing and disseminating alternative and inclusive-narratives; (3) advising on the design and
implementation of CVE activities; and (4) supporting monitoring and evaluating the effectiveness of CVE efforts. Coordination and clear modalities of engagement are essential to engage and collaborate effectively on CVE with women in civil society and civil society actors working with women, in order to ensure their safety, independence, and credibility.”
- **Good Practice 18:** *Engage girls and young women through education and within informal and formal educational environments to counter violent extremism.*
§ “Education can be utilized in a myriad of ways to build resilience and reduce recruitment and radicalization to violent extremism and terrorism in all its forms and manifestations. This requires promoting access to and protecting formal and informal religious and secular educational institutions as a safe space for all, including girls and young women. Education enhances the capacity of young women and girls to help build resilience among their peers, their families and their communities. Educational materials and activities for girls and young women centered on, inter alia, civic education and responsibility, community engagement, tolerance, interfaith dialogue, and human rights can be particularly important.”
- **Good Practice 22:** *Highlight women victims of violent extremism and terrorism, including as part of CVE efforts.*
§ “As with all victims of violent extremism and terrorism, women and girls should be highlighted to emphasize their equal human rights, counter their dehumanization, and promote solidarity with them. Establishing platforms that amplify the voices of women victims of violent extremism and terrorism can also contribute to effective CVE. Victims should also be offered ongoing support and assistance to deal with the emotional complications that can arise from public discussion of the terrorist event. Improving media coverage of willing women victims of terrorism is key to these efforts, as is highlighting women victims more broadly. The capacity of media should be built for gender-sensitive reporting that recognizes the particular impacts of terrorism on women and girls and also respects their privacy and agency and ability to heal from physical and emotional trauma.”
**GCTF Rome Memorandum on Good Practices for Rehabilitation and Reintegration of Violent Extremist Offenders**
- **Good Practice 2:** *Good prison standards and practices can offer an appropriate starting point for building an effective, safe and smoothly operating rehabilitation program.*
§ “Counter-extremism and rehabilitation programs have the best chance of succeeding when they are nested in a safe, secure, adequately resourced, and well operated custodial setting where the human rights of prisoners are respected. It is important that
there is a clear legal basis and procedural framework for detention which complies with human rights and international law obligations and clearly delineates the institutions and agencies involved, as well as their respective roles, responsibilities and powers in this area. Prison officials must respect judicial decisions regarding incarceration, and ensure that inmates are not subject to extra-judicial punishment. The UN’s Standard Minimum Rules for the Treatment of Prisoners (1957) is a good starting point. As stated in the Rabat Memorandum, “the principles and philosophy” espoused in the UN standards provide a “useful and flexible guide that countries should use when deciding what conditions of confinement are appropriate for prisoners.” Some countries face problems of prison overcrowding, lack of resources, and deficient services. In developing effective responses, it is important to try to address these types of problems. Good management also improves the safety of facility staff and other prisoners. Properly managing terrorists and other high risk criminals reduces the opportunities for escape, conspiratorial misconduct, and inappropriate or dangerous external communications. Improving the prison environment also can help ensure that prisons do not become incubators of radicalization. Interactions with prison staff who are engaging in humane and positive behavior towards the inmates can create cognitive dissonance and openings for changes in thinking and behavior.”
**IIJ Prison Management Recommendations to Counter and Address Prison Radicalization**
- **Introduction:** “In addition, it is important to note that there is already substantial professional experience and expertise, as well as many documents and handbooks regarding overall prison standards and operations, including the newly adopted and updated United Nations Standard Minimum Rules for the Treatment of Prisoners (now known as the Mandela Rules). The Mandela Rules provide a good framework for countries to utilize in reviewing the operations of their prisons. A core underlying principle found in these rules is the idea that all prison-based interventions and policies must respect international norms, treaties, and conventions regarding good governance, human rights and due process.”
- **Introduction:** “A well-managed prison is understood to mean a prison that functions based on the principles of good governance and adherence to human rights standards.”
Sources:
- [https://www.thegctf.org/home](https://www.thegctf.org/home)
- [https://theijj.org/1020-2/](https://theijj.org/1020-2/)
**Counterterrorism Laws and Their Implementation:**
Since the emergence of Al Qaeda as a global threat, the passage of domestic counterterrorism legislation has become a common international practice, complemented by a growing number of international instruments and resolutions. Unfortunately, this legislation’s consistency with international human rights obligations and commitments varies dramatically, with many countries passing laws that increase state power at the expense of respect for civil and political rights. Some components of current CT laws that can contribute to this tension include: overly broad definitions of terrorism and terrorist attacks; overly broad limitations and restrictions on the freedoms of expression and peaceful assembly; overly broad police powers; and incommunicado detention.
When developing counterterrorism legislation, States must take all measures in compliance with their obligations under international law, including international human rights obligations such as the rights to freedom of expression, peaceful assembly, and association. It is important that each law be tightly drawn to limit the opportunity for abuse and to ensure that States respect human rights.
**The Importance of Freedom of Expression:**
Respect for freedom of expression is a foundation of democratic society. Efforts to unduly restrict expression can quickly become a tool to silence critics and oppress members of minorities. Overly broad criminalization of incitement can create a risk of abuse because it can be used to criminalize the free expression of ideas and opinions that are not preferred by the government. For example, some governments use sweeping and vaguely-worded anti-terrorism laws to justify crackdowns on peaceful political expression.
Respect for freedom of expression means accepting the expression of opinions that may be offensive to some or unpopular. Curbing political expression can add to a sense of frustration and estrangement, which may create an environment that is conducive to the spread of terrorism. Civil society is an important stakeholder in countering terrorism and violent extremism, and its participation can help increase the accountability of States.
Promoting media freedom is an important element of CVE. Promoting access to diverse media, substantive reporting with integrity, and unimpeded communications networks has positive effects on CVE efforts, including the following:
- Exposure to diverse editorial viewpoints builds community resilience by bolstering critical thinking skills, as viewers engage with competing claims.
- Integrity of reported information builds a context in which news information can be more credible. This can help reduce the attractiveness and efficacy of violent extremist claims, especially in cases where inaccurate information is inspiring people to become foreign fighters.
- A media environment where reporting is credible can also enable successful countermessaging; countermessaging efforts will not be effective if they are not considered credible.
Promoting media freedom contributes to the space for peaceful political and religious expression; peaceful expression in these areas can mitigate perceived grievances that violent extremists exploit to recruit fighters.
In sum, the United States believes that security cannot be framed as a zero-sum tradeoff with respect for human rights. A comprehensive CVE approach recognizes this as a false dichotomy and highlights the importance of good governance and respect for human rights in preventing and countering violent extremism.
CVE itself is fundamentally positive and proactive; it empowers new states and actors, emphasizes preventive action, and advances our collective security while championing universal values and underscoring the importance of protecting and promoting human rights.
---
SAN FRANCISCO
MUNICIPAL TRANSPORTATION AGENCY
BOARD OF DIRECTORS
AND PARKING AUTHORITY COMMISSION
MINUTES
Tuesday, August 21, 2012
Room 400, City Hall
1 Dr. Carlton B. Goodlett Place
REGULAR MEETING AND CLOSED SESSION
1 P.M.
BOARD OF DIRECTORS
Tom Nolan, Chairman
Cheryl Brinkman, Vice Chairman
Leona Bridges
Malcolm Heinicke
Jerry Lee
Joel Ramos
Cristina Rubke
Edward Reiskin
DIRECTOR OF TRANSPORTATION
Roberta Boomer
BOARD SECRETARY
ORDER OF BUSINESS
1. Call to Order
Chairman Nolan called the meeting to order at 1:04 p.m.
2. Roll Call
Present: Leona Bridges
Cheryl Brinkman
Malcolm Heinicke
Jerry Lee
Tom Nolan
Joél Ramos
Cristina Rubke
3. Announcement of prohibition of sound producing devices during the meeting.
Chairman Nolan announced that the ringing of and use of cell phones, pagers and similar sound-producing electronic devices are prohibited at the meeting. He advised that any person responsible for the ringing or use of a cell phone, pager, or other similar sound-producing electronic devices might be removed from the meeting. He also advised that cell phones that are set on “vibrate” cause microphone interference and requested that they be placed in the “off” position.
4. Approval of Minutes
On motion to approve the minutes of the July 17, 2012 Regular Meeting: unanimously approved.
5. Communications
Chairman Nolan stated that today’s meeting would be adjourned in memory of Gigi Pabros.
Board Secretary Boomer announced that Item 10.1A, settlement of the Kleinschmidt claim had been removed from the agenda.
6. Introduction of New or Unfinished Business by Board Members
None.
7. Director’s Report (For discussion only)
- Special Recognition Awards
- Growing bicycling and safe, civil streets
- E-line launch
- Update on the Central Subway project
- Ongoing Activities
Director Reiskin presented a special recognition award to Rudy Uribe, transit operator, Green Division, Transit Services.
Director Reiskin discussed the America’s Cup and the launch of the E-line, the Central Subway project, and recent ads on vehicles.
Discussion of growing bicycling and safe, civil streets was continued to the September 18 meeting.
Director Heinicke requested a report regarding the speed of light rail vehicles in the Twin Peaks Tunnel.
PUBLIC COMMENT:
Herbert Weiner discussed the recent ad on Muni vehicles and wondered about the review process prior to posting. The SFMTA has to get money legitimately.
Peter Witt discussed the lack of taxi stands at major events and how taxicab drivers are being ticketed for trying to serve the public.
Christopher Fulkerson discussed free speech. When government creates a sign, the information in the sign is understood to be approved by government, which is not okay.
8. Citizens’ Advisory Council Report
No report.
9. Public Comment
Tara Housman discussed the America’s Cup, the SFMTA’s disdain of the Taxi Advisory Council (TAC) and the enforcement of meters on Independence Day.
Brad Newsham discussed the recent resignation of TAC members. They tried to improve the industry but were treated horribly.
Brian Rosen stated that the wait list was left off the agenda. People on the wait list are not being taken care of. He should be given a chance to get his medallion in the same way that others have.
Howard Meehan discussed the medallion wait list and what it’s like to drive a cab. Cab drivers are being thrown under the bus.
Ed Healy stated that the SFMTA has ignored the cab industry’s plan and they had been promised money but the SFMTA is getting three times more money. Their work has been thrown into the trash.
Herbert Weiner discussed late or missing buses, the need to adhere to schedules and vehicle switchbacks.
Christopher Fulkerson discussed process, expressed appreciation for promoting Ms. Hayashi, and read a quote. Taxi town hall meetings are a sham.
Ron Wolter presented a letter he sent to Ms. Hayashi about taxi stands at the Caltrain Station and asked that the issue of taxi flow and taxi stands be addressed. The industry has no representation.
Peter Witt stated that he has been ostracized by the SFMTA and the former Taxi Commission. His suggestions regarding implementing a congestion pricing system and increasing taxi supply have been ignored. He also discussed how data from his taxi survey is usable yet was deleted. (Mr. Witt provided a written statement that is attached to these minutes.)
Corey Lamb discussed the need for taxi stands at large events such as the recent celebration for the Golden Gate Bridge. There was no accessible place to drop off passengers.
Barry Taranto expressed concern about the resignation of TAC members and suggested that the Board discuss it. He requested information regarding Ms. Hayashi’s status. He stated that the taxi items should be put off and the Board should attend town hall meetings.
David Barlow wondered if the Board had published a plan to improve taxi service. The industry needs to know what the plan is because all they’ve seen is a new scheme month after month with no improvement. He also discussed the need to control other vehicles that are trying to perform taxi service.
Iza Pardinas expressed disbelief that the taxi medallion wait list would be done away with. Medallions should be kept for drivers who deserve them. The “little people” should get medallions before companies do.
Keith Dennis discussed the lack of bus shelters at Fillmore and Eddy streets. People are freezing because there’s no shelter. People also need electronic signs to tell them when a bus is coming.
Emil Lawrence stated that if the SFMTA thinks medallions are their assets then cab drivers are the SFMTA’s slaves. Cab drivers get bills that aren’t covered by insurance when they have been in an accident. No other vehicle associated with transportation gets parking tickets. The TAC has been kept in the dark and nobody consults with the guys who work in the industry.
Mark Gruberg stated that the industry has moved from a participatory process to a dictatorial process. The message from the TAC resignations is about as loud a message as could be. The taxi industry has unprecedented competition due to lack of state regulation which is a threat to their survival and to the SFMTA’s regulatory scheme. People on the list have waited for 14-16 years but they’re now being pushed out without explanation.
Charles Rathbone expressed appreciation for an award given to Luxor Cab from readers of SF Magazine who voted Luxor as the best of the Bay Area.
Bill Mounsey read his letter of resignation from the TAC. The SFMTA Board hasn’t taken their recommendations seriously. TAC members have over 100 years of experience which is not respected.
Robert Cessano stated that the SFMTA has taken over $20 million from the industry but it can’t run buses. The SFMTA doesn’t care about enforcement or taxis because it is getting money from drivers.
Barry Korengold discussed his resignation from TAC. After two years of work, before presenting recommendations, the SFMTA puts out a plan that doesn’t consider what the TAC has talked about and that disregards drivers who have put their lives into the industry. The SFMTA needs to listen to the concerns and not just look at medallions as a revenue source.
Tariq Mehmood stated that his income has dropped by 50%. Taxi stands are packed with cabs waiting for rides. Uber cab is taking money from his pocket. The SFMTA is not working for taxi drivers and needs to find ways to create more business rather than cut income.
Benjamin Valis expressed disappointment. At a critical juncture there is a fleet of pirate taxis that charge double or triple the authorized rate, and rather than come up with a way to handle it, the SFMTA has come up with a way to make money from medallion sales. The SFMTA is out of touch.
THE FOLLOWING MATTERS BEFORE THE SAN FRANCISCO MUNICIPAL TRANSPORTATION AGENCY BOARD OF DIRECTORS ARE RECOMMENDED FOR ACTION AS STATED BY THE SFMTA DIRECTOR OF TRANSPORTATION OR CITY ATTORNEY WHERE APPLICABLE. EXPLANATORY DOCUMENTS FOR ALL CALENDAR ITEMS ARE AVAILABLE FOR REVIEW AT 1 SOUTH VAN NESS AVE. 7th FLOOR.
CONSENT CALENDAR
10. All matters listed hereunder constitute a Consent Calendar, are considered to be routine by the San Francisco Municipal Transportation Agency Board of Directors and will be acted upon by a single vote. There will be no separate discussion of these items unless a member of the Board of Directors or the public so requests, in which event the matter shall be removed from the Consent Calendar and considered as a separate item.
(10.1) Requesting the Controller to allot funds and to draw warrants against such funds as are available or will be available in payment of the following claims against the SFMTA:
A. Henry Kleinschmidt vs. CCSF, Superior Ct. #CGC9493996 filed on 10/30/09 for $1,296.80
B. State Farm Ins. vs. CCSF, Superior Ct. #CGC11515730 filed on 7/5/11 for $1,500
C. Mohamed Hosny vs. CCSF, Superior Ct. #CGC10504806 filed in 10/10 for $10,000
D. Rodney Green vs. CCSF, Superior Ct. #CGC10504321 filed on 10/1/10 for $10,500
E. Raul Navarro vs. CCSF, Superior Ct. #CGC10499627 filed on 5/14/10 for $15,000
F. Gregor MacLennan vs. CCSF, Superior Ct. #CGC12519755 filed on 4/4/12 for $75,000
Item 10.1 A was removed from the agenda.
RESOLUTION 12-104
(10.2) Approving the following traffic modifications:
A. REVOKE – FULL TIME BUS STOP and ESTABLISH – PART-TIME BUS STOP, 6 AM TO 8 PM, DAILY – 33rd Avenue, west side, from Geary Boulevard to 100 feet southerly.
B. ESTABLISH – RESIDENTIAL PERMIT PARKING AREA T, 2-HOUR PARKING, 8 AM TO 6 PM, MONDAY THROUGH FRIDAY and RESCIND – RESIDENTIAL PERMIT PARKING AREA T, 2-HOUR PARKING, 8 AM TO 3 PM, MONDAY THROUGH FRIDAY – Balceta Avenue, 0-99 Block, both sides, between Woodside and Laguna Honda Boulevard.
C. RESCIND – 2-HOUR PARKING, 7 AM TO 6 PM, MONDAY THROUGH SATURDAY and ESTABLISH – RED ZONE – Stevenson Street, north side, west of 7th Street, from 90 feet to 122 feet easterly of the westerly terminus.
D. ESTABLISH – TOW-AWAY, NO PARKING, 1 AM TO 5 AM, EVERY DAY – Patterson Street, both sides, between Flower Street and Oakdale Avenue.
E. ESTABLISH – STOP SIGN – Stopping Aptos Avenue at Darien Way.
F. ESTABLISH – NO PARKING VEHICLES OVER SIX FEET HIGH – North Point Street, south side, between Leavenworth Street and Columbus Street.
G. ESTABLISH – NO PARKING, 6 PM TO 11 PM, SUNDAY, MONDAY, WEDNESDAY AND FRIDAY – Berry Street, north side, from 26 feet to 102 feet west of 4th Street.
H. ESTABLISH – TOW-AWAY NO STOPPING ANYTIME – Clarence Place, east side, from 120 feet north of Townsend Street to the northern end of Clarence Place; and Clarence Place, west side, from 140 feet north of Townsend Street to the northern end of Clarence Place.
I. ESTABLISH – TOW-AWAY NO STOPPING, 10 PM TO 6 AM, EVERY DAY – Iowa Street, both sides, between 23rd Street and 25th Street.
J. ESTABLISH – RESIDENTIAL PERMIT PARKING AREA A, 1-HOUR PARKING, 8 AM TO 9 PM, MONDAY THROUGH FRIDAY – Sansome Street, 1300 block, between Greenwich and Filbert Streets, west side; Greenwich Street, 200 block, between Sansome Street and Greenwich Steps, both sides; and Filbert Street, 200 block, between Sansome Street and Filbert Steps, both sides.
K. RESCIND – RESIDENTIAL PERMIT PARKING AREA A, 2-HOUR PARKING, 8 AM TO 9 PM, MONDAY THROUGH FRIDAY – Sansome Street, 1300 block, between Greenwich and Filbert Streets, west side; Greenwich Street, 200 block, between Sansome Street and Greenwich Steps, both sides; and Filbert Street, 200 block, between Sansome Street and Filbert Steps, both sides.
L. ESTABLISH – MEDIAN ISLANDS – Bryant Street at 26th Street, south side; and Bryant Street at Cesar Chavez Street, north side.
M. ESTABLISH – PERPENDICULAR PARKING – Bryant St., east side, from 95 feet to 165 feet north of 25th St.; and Bryant St., west side, from 125 feet to 275 feet south of 24th St.
N. RESCIND – LEFT LANE MUST TURN LEFT – Bryant Street at Cesar Chavez Street, southbound approach.
O. ESTABLISH – NO PARKING ANYTIME and ESTABLISH – SIDEWALK WIDENING – Broadway, north side, from Kearny Street to 45 feet easterly – No Parking.
P. ESTABLISH – NO PARKING ANYTIME and ESTABLISH – SIDEWALK WIDENING – Broadway, south side, from Kearny Street to 65 feet easterly – No Parking.
Q. ESTABLISH – NO PARKING ANYTIME and ESTABLISH – SIDEWALK WIDENING – Broadway, north side, from Montgomery Street to 42 feet westerly – No Parking.
R. ESTABLISH – NO PARKING ANYTIME and ESTABLISH – SIDEWALK WIDENING – Broadway, south side, from Montgomery Street to 69 feet westerly – No Parking.
(Explanatory documents include a staff report and resolution.)
Items 10.2 L, M and N were severed at the request of a member of the public.
PUBLIC COMMENT on Item 10.2 L, M and N:
Robert Thomson expressed support for the items.
RESOLUTION 12-105
On motion to approve Item 10.2 L, M and N:
ADOPTED: AYES – Bridges, Brinkman, Heinicke, Lee, Nolan, Ramos and Rubke
(10.3) Approving a revised inter-agency transfer discount fare with the Water Emergency Transportation Authority, which operates the Alameda/Oakland and Alameda Harbor Bay ferry services, and providing for the transfer discount fare to Clipper® card customers only.
(Explanatory documents include a staff report, resolution and analysis.)
RESOLUTION 12-106
(10.4) Approving the award by the Department of Public Works of a contract to Rodan Builders for the Second Level Deck Repair project at the Fifth & Mission Garage in an amount not to exceed $950,722.20. (Explanatory documents include a staff report, resolution and summary.)
RESOLUTION 12-107
(10.5) Authorizing the Director of Transportation to execute Contract No. 1262, Bernal Substation Upgrade, with Balfour Beatty Rail in an amount not to exceed $3,546,880 and for a term of 365 calendar days. (Explanatory documents include a staff report, resolution and financial plan.)
RESOLUTION 12-108
(10.6) Adopting CEQA findings and authorizing the Director of Transportation to execute an Industrial Lease with a First Right of Negotiation to Purchase with Prologis, for 2650 Bayshore Boulevard, Daly City, for towed car operations, and for other facilities, storage and uses, with an initial 20-year term, plus two five-year extension options. (Explanatory documents include a staff report, resolution, summary and lease.)
RESOLUTION 12-109
No public comment.
On motion to approve the Consent Calendar (Item 10.1A removed and Item 10.2 L, M and N severed):
ADOPTED: AYES -- Bridges, Brinkman, Heinicke, Lee, Nolan, Ramos and Rubke
REGULAR CALENDAR
SPECIAL ORDER - 2:00 PM
11. Amending Transportation Code Division II, Sections 1102 and 1116 to transition the Taxi Medallion Sales Pilot Program into a long-term medallion transfer policy. (Explanatory documents include a staff report, resolution and amendment.)
Ed Reiskin, Director of Transportation, presented the report.
Director Heinicke proposed amendments to the staff proposal that would increase the amount of money that would go to the medallion holder from $150,000 to two-thirds of the sales price, subject to a cap of $200,000, and would lower the transfer fee from 30% to 20%.
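As a purely illustrative sketch of the amendment’s arithmetic (assuming the 20% transfer fee applies to the full sale price, which the minutes do not specify, and using a hypothetical sale price), the split would work as follows:

```python
# Illustrative sketch of the amended medallion transfer arithmetic.
# Assumptions not stated in the minutes: the 20% transfer fee is computed
# on the full sale price, and the $250,000 example price is hypothetical.

SELLER_SHARE = 2 / 3          # seller receives two-thirds of the sale price...
SELLER_CAP = 200_000          # ...up to a cap of $200,000
TRANSFER_FEE_RATE = 0.20      # transfer fee lowered from 30% to 20%

def medallion_split(sale_price):
    seller_proceeds = min(SELLER_SHARE * sale_price, SELLER_CAP)
    transfer_fee = TRANSFER_FEE_RATE * sale_price
    return seller_proceeds, transfer_fee

proceeds, fee = medallion_split(250_000)
print(round(proceeds, 2), round(fee, 2))  # 166666.67 50000.0
```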
Chris Sweiss, Chairman, Taxi Advisory Council presented the work and recommendations of the Council.
PUBLIC COMMENT:
Speakers expressing support for the item: Steven Stapp, Tim Lapp, Robert Cesano, Charles Rathbone, Carl Macmurdo, Jim Gillespie and John Lazar.
Speakers expressing opposition to the item: Tara Housman, Richard Hybels, Brad Newsham, Bill Mounsey, Mark Gruberg, Gerald Ganley, Ron Wolter, Tom Diesso, Barry Taranto, Tariq Mehmood, Ed Healy, Corey Lamb, David Hathaway, Dan Hinds, Hansu Kim, Benjamin Valis, Barry Korengold, Keith Raskin, Emil Lawrence, Chris Sweiss, John Han and Christopher Fulkerson.
Chairman Nolan requested that staff quickly return to the Board with proposals regarding the medallion wait list and the Driver’s Fund.
On motion to amend the resolution to increase the amount of money that would go to the medallion seller from $150,000 to two-thirds of the sales price subject to a cap of $200,000:
ADOPTED: AYES – Bridges, Brinkman, Heinicke, Lee, Nolan, Ramos and Rubke
On motion to amend the resolution to reduce the transfer fee from 30% to 20%:
ADOPTED: AYES – Bridges, Brinkman, Heinicke, Lee, Nolan, Ramos and Rubke
RESOLUTION 12-110
On motion to approve the item as amended:
ADOPTED: AYES – Bridges, Brinkman, Heinicke, Lee, Nolan, Ramos and Rubke
12. Amending Sections 1102, 1103, 1105, 1106, 1108, 1113, 1114 and 1117-1123 of Article 1100 of Division II of the San Francisco Transportation Code to: (1) add definitions of necessary terms and amend the definitions of A-Card Seniority and Single Operator Part-time Permit; (2) add requirements for renewal of Color Scheme permits; (3) render a permit application inactive if it is not completed within 60 days; (4) require all Gas and Gates Medallion vehicles to change shifts on Color Scheme property; (5) authorize the Director of Transportation to impose a moratorium on the issuance of Color Scheme Permits or Dispatch Service Permits; (6) prohibit retaliation for exercise of rights provided by Article 1100; (7) delete requirement that taxis taken out of service be returned to service in 30 days or be permanently replaced; (8) to make each day of the unauthorized use of a spare vehicle by a Color Scheme a separate offense; (9) require Color Schemes to report vehicle-related insurance claims received or filed to the SFMTA and to ensure that Gas and Gates Medallions are not violating laws limiting the length of a commercial
driver's shift; (10) require Color Schemes to provide certain notices to the Paratransit Broker regarding In-Taxi Equipment; (11) prohibit certain practices by Drivers in connection with accepting payment by Paratransit Debit Card; (12) prohibit Drivers from tampering with required taxi equipment; (13) require that taxi security cameras be manufactured after 2006; (14) require Color Schemes to provide security camera data to the SFMTA and the SFPD; (15) make minor changes to the procedures for hearings on decisions to grant or deny permits; (16) renumber and amend Sections 1118–1123; (17) clarify procedures for hearings on Citations issued to Permit Holders and members of the public; (18) clarify the SFMTA's procedures for providing public notice to the taxi industry; and (19) make Color Scheme Permit Holders responsible for ensuring that all Gate Fees charged for use of vehicles affiliated with the Color Scheme are within the Gate Fee cap. (Explanatory documents include a staff report, resolution and amendment.)
Items 12 and 13 were called together.
Director Reiskin requested an amendment regarding the receipt of a regular monthly report from each Color Scheme that lists each accident that occurred.
PUBLIC COMMENT:
Speakers expressing opposition: Charles Rathbone, Barry Taranto, Jim Gillespie, Corey Lamb, John Lazar, Barry Korengold and Emil Lawrence.
Speakers expressing support: Mark Gruberg, Hansu Kim, Tone Lee, Ed Healy and Benjamin Valis.
Speakers expressing neither support nor opposition: Bill Mounsey and John Han
On motion to amend the resolution, Section 1106(o)(1), to add: “By the fifth day of each month, each Color Scheme must file a report with the SFMTA listing each accident that occurred during the previous month involving any Taxi or Ramp Taxi affiliated with the Color Scheme and resulting in property damage or bodily injury.”
ADOPTED: AYES – Bridges, Brinkman, Heinicke, Lee, Nolan, Ramos and Rubke
RESOLUTION 12-111
On motion to approve the item as amended:
ADOPTED: AYES – Bridges, Brinkman, Heinicke, Lee, Nolan, Ramos and Rubke
13. Amending Transportation Code Division II Article 300, Section 310 establishing fines for new violations of the Transportation Code regarding motor vehicle for hire administrative fines. (Explanatory documents include a staff report, resolution and amendment.)
RESOLUTION 12-112
On motion to approve:
ADOPTED: AYES – Bridges, Brinkman, Heinicke, Lee, Nolan, Ramos and Rubke
14. Discussion and vote pursuant to Administrative Code Section 67.10(d) as to whether to conduct a closed session.
On motion to invoke the attorney-client privilege: unanimously approved.
RECESS REGULAR MEETING AND CONVENE CLOSED SESSION
CLOSED SESSION
1. Call to Order
Chairman Nolan called the closed session to order at 4:53 p.m.
2. Roll Call
Present: Leona Bridges
Cheryl Brinkman
Malcolm Heinicke
Jerry Lee
Tom Nolan
Joél Ramos
Cristina Rubke
Also present: Ed Reiskin, Director of Transportation
Roberta Boomer, Board Secretary
Julia Friedlander, Deputy City Attorney
Mariam Morley, Deputy City Attorney
3. Pursuant to Government Code Sections 54956.9 (b), and Administrative Code Section 67.10 (b) (2), the Municipal Transportation Agency Board of Directors will meet in closed session to discuss attorney-client matters in the following case(s):
CONFERENCE WITH LEGAL COUNSEL
Anticipated Litigation:
_X_ As defendant or _X_ As plaintiff
ADJOURN CLOSED SESSION AND RECONVENE OPEN SESSION - The closed session was adjourned at 5:10 p.m.
15. Announcement of Closed Session.
Chairman Nolan announced that the SFMTA Board of Directors met in closed session to discuss anticipated litigation with the City Attorney. The Board of Directors took no action.
16. Motion to disclose or not disclose the information discussed in closed session.
On motion to not disclose the information discussed: unanimously approved.
ADJOURN - The meeting was adjourned at 5:11 p.m. in memory of Gigi Pabros.
A tape of the meeting is on file in the office of the Secretary to the San Francisco Municipal Transportation Agency Board of Directors.
Roberta Boomer
Board Secretary
The Ethics Commission of the City and County of San Francisco has asked us to remind individuals and entities that those who influence or attempt to influence local legislative or administrative action may be required by the San Francisco Lobbyist Ordinance [S.F. Campaign and Governmental Conduct Code section 2.100 et seq.] to register and report lobbying activity. For more information about the Lobbyist Ordinance, please contact the Ethics Commission at 415.581.2300; fax: 415.581.2317; 25 Van Ness Avenue, Suite 220, SF, CA 94102-6027; or the web site: sfgov.org/ethics.
To the SFMTA, to be included in the minutes of the SFMTA's regular meeting for Aug. 21, 2012.
From Peter Witt
Item #9 (Public Comment).
S.F. native... 24-year Yellow cabbie..... DISENFRANCHISED since Newsom became Mayor.
I've been blackballed by the TXC and MTA's Directors and staff.
For the record... the two ideas I've suggested always get deleted........
1.) Implement a "Congestion Pricing System" to:
a.) Slow taxi demand.
b.) Increase taxi supply..... which will,
2.) Incentivize a new taxi-sharing program that would....
a.) Lower fares at peak times.
b.) Boost taxi driver revenue.
c.) Increase taxi supply and efficiency.
In the "City of Love".......... WHY NOT??
Dispatch is another story!
15 years of unprecedented "Customer Outreach" data deleted from the records.
Public information withheld from the P.C. & N. process for political purposes.
DATA that otherwise would give full transparency and accountability also puts SFMTA taxi services 15 years behind the times.
Usable DATA... comparable, even to the untrained......... Executive Director.
Not great PR!!!!!!
Not great taxi services!!!!
---
tissue, electrical stimulation, nutritional deprivation, environmental manipulation, etc., and the effects of these on different aspects of brain development and function were included.
The demonstrations included dissection and separation of different regions of the brain, separation of cell types, isolation of synaptosomal and myelin membranes, culturing of different cell types of nervous tissue, analysis of lipids and incorporation of the labelled precursor into brain lipids, analysis of fatty acids, assay of neurotransmitter levels and of enzymes involved in their metabolism, use of stereotaxis for implantation of cannulae and electrodes, studies on self-stimulation, drug effects on the behaviour of rats, and behavioural techniques including those used for assessment of neuromotor development, motor coordination, learning performance, etc.
A round-table discussion was arranged on "Methodology of subcellular fractionation". Specialists discussed techniques used in the preparation of monoclonal antibodies, immunohistochemistry, voltage and patch clamping, ligand-binding studies, and electrophysiology, using slides, video, etc.
It is proposed to bring out a textbook on "Introduction to Neuroscience" which can be used to teach this course. Those interested in this book can contact Prof. L. J. Parekh.
Shaila Telang and L. J. Parekh
Biochemistry Department,
Baroda University,
Baroda 390 002.
NEWS
ANTIBIOTICS: THE RESISTANCE PROBLEM
ABSTRACT
The increasing frequency of acquired resistance to antibiotics is a worldwide health problem which demands international attention. However, the rapidity with which new resistant strains are appearing, and the fact that existing resistant strains are becoming more prevalent, highlight the need for more information about the current situation and for action to control it. To meet the need for national and international surveillance of antibiotic resistance, the World Health Organization (WHO) recommends that health authorities be informed of the best and most cost-effective ways of using antibiotics and pass this information on to all health professionals.
The discovery of antibiotics was one of the major events in the history of public health. Antibiotics saved millions of lives and shortened the duration of illness for hundreds of millions more. However, the dramatic nature of their effects encouraged an explosive increase in their use in both human and veterinary medicine. All this contributed to the growing problem of bacterial resistance.
First warnings
The first clinically serious consequence of antibiotic resistance was the widespread dissemination in hospitals, in the 1950s, of strains of *Staphylococcus aureus* that were resistant to penicillin. These strains had developed the ability to form an antibiotic-destroying enzyme, penicillinase (beta-lactamase), and they subsequently acquired resistance to several other chemically unrelated antibiotics. From the early 1950s onwards, these so-called "multiple-antibiotic-resistant" staphylococci became endemically established in many hospitals throughout the world.
Recently the situation has worsened. Surveillance data presented to the World Health Organization (WHO) indicate that the serious consequences of antibiotic resistance are no longer confined to hospitals but are increasing in the general population.
The prevalence was even greater in developing countries than in industrialized ones. Resistance to such easily available antibiotics as ampicillin, tetracycline, chloramphenicol and sulfonamides has made its appearance. Patients in developing countries are now in a situation where only the low-cost antibiotics are available to them, yet these are becoming progressively
less effective. It is also clear that the import of expensive "new" antibiotics from developed countries, even if economically feasible, would cause only a temporary improvement in the situation.
**Consequences of widespread resistance**
Antibiotic resistance limits the effectiveness of antibiotics against bacteria. These bacteria may be resistant to the antibiotic from the start, or they may acquire resistance from another organism in the patient during treatment.
Resistant bacteria may actually grow faster in the presence of antibiotics, because the drug suppresses the susceptible organisms that would otherwise compete with them. This process (known as "superinfection") can have important clinical consequences, especially in hospital patients, many of whom have an increased susceptibility to infection by organisms that seldom invade healthy persons. These organisms are frequently responsible for respiratory or septicemic complications that may be a greater hazard than the infection for which the antibiotic treatment was originally given.
The use of wide-spectrum antibiotics, particularly when not essential, has produced resistance in a wide variety of bacteria, thus seriously limiting the possibilities of controlling infection by any of the existing antibiotics.
**Pressures favouring excessive antibiotic use**
When antibiotics are available on the open market, the attitudes of the patient and family are decisive. The desire to do the best for the patient in a situation of fear and anxiety, coupled with public ignorance about the efficacy of antibiotics in particular diseases, leads to unnecessary, and even damaging, treatment. In other cases, the dosage is inappropriate: either insufficient or not taken for long enough. Poor choice of antibiotics often results from the multiplicity of names under which they are marketed, the promotion in developing countries of antibiotics that are obsolete or inappropriate, and misleading advertising.
Unless physicians have a good knowledge of the management of microbial infections, they may be tempted to give unnecessary treatment, in an effort to do the best for the patient.
**Irrational use**
Moreover, recent surveys in North America and Britain indicate that about one-quarter of all patients receive one or more courses of antibiotics while in hospital. In a Canadian survey in 1976, only 41% of all courses of antibiotics that were prescribed could be considered as "rational", and 22% were "questionable".
In hospitals, the most frequent form of therapeutic misuse is to give unnecessary courses of antibiotics. This is true in general practice as well; in one survey in the USA, nearly 60% of physicians used antibiotics to treat the common cold, which is a virus infection no antibiotic can combat. The possible benefits of preventing secondary bacterial infection in the patient are more than cancelled out by the danger of promoting resistance to antibiotics.
**Antibiotics in animals**
Antibiotics are regularly used to treat and prevent animal disease and to promote the growth of livestock. Unfortunately, the administration of antibiotics to animals for any purpose leads to an accumulation of resistant bacteria in their gut. The importance of this for humans is that (1) antibiotic-resistant pathogens common to animals may reach humans by cross-infection, and (2) antibiotic-resistant, non-pathogenic organisms in the animal may pass to and colonize humans and thus transfer this resistance. Evidence now available confirms that these resistant strains reach man via the food chain.
**Plan of action**
The increasing frequency of acquired resistance to antibiotics is a worldwide health problem which demands international attention. However, the rapidity with which new resistant strains are appearing, and the fact that existing resistant strains are becoming more prevalent, highlight the need for more information about the current situation and for action to control it. To meet the need for national and international surveillance of antibiotic resistance, the World Health Organization (WHO) recommends that health authorities be informed of the best and most cost-effective ways of using antibiotics and pass this information on to all health professionals.
**Veterinarians**
WHO has recommended that no antibiotic of therapeutic value in humans, or showing cross-resistance with such an antibiotic, should be used to promote growth in animals. However, this policy will not work unless the use of such antibiotics to prevent and treat diseases in animals is also restricted.
Since the use of antibiotics is an important means of
treating bacterial diseases in animals, (a) countries should prohibit the therapeutic use in animals of newer antibiotics used to treat serious infections in humans (*e.g.* gentamicin and related aminoglycosides, spectinomycin, rifampicin); (b) chloramphenicol should be reserved for use in humans in order not to destroy its effectiveness for treating typhoid; (c) the routine use of antibiotic prophylaxis in the absence of proven infection should not become a substitute for food hygiene in animal-rearing establishments.
**General practitioners**
The unrestricted sale of antibiotics to the general public encourages their inappropriate use. Legislation making them available only on prescription by designated classes of health professionals is therefore strongly recommended. These professionals should also be kept informed of the best use of these substances.
Information is urgently needed about the pattern of antibiotic use in each country to assess the extent of overuse, misuse (and underuse) in common clinical situations encountered in various countries.
Special mention must be made of the widespread use, particularly in developing countries, of preparations containing two or more antibiotics in fixed ratios. Their spectrum of activity is often so wide that they have undesirable effects on the body, few of them have notable therapeutic advantages, and they are generally costly.
Manufacturers and importers of antibiotics should be required to provide the same information to users in all countries in which their products are sold. This information should always include the generic name of the product, the indications and contraindications for use, and the side-effects.
In conclusion, unless steps are taken to check the misuse of antibiotics, which leads to resistance, one of the best weapons humanity has devised for the protection and restoration of health could be placed in jeopardy. (*WHO FEATURES*, No. 89, October 1984; World Health Organization, Media Service, 1211 Geneva 27, Switzerland.)
---
**EMBRYO TECHNOLOGIES PROMISING**
... Embryo transfer techniques "could replace amniocentesis, the prenatal test for Down's syndrome and some other diseases; an embryo removed from its mother's womb could be tested for as many as 2,000 diseases and safely returned to her. . . . Eventually, with the development of new gene-altering techniques, doctors could remove an embryo from a woman with [a genetically transferred] disease, alter the defective chromosomes, and return the embryo to the mother. These prospects move Lawrence G. Sucsy [Fertility & Genetics Inc., Chicago] to declare that embryo transfers will become commonplace in spite of the moral questions and legal barriers. He predicts: 'The power of motherhood will overcome the flak.'" [(Fern Schumer Chapman in *Fortune*, 17 Sep 84, p. 41–7) (Reproduced with permission from Press Digest, *Current Contents*®, No. 52, December 24, 1984, p. 13. Published by the Institute for Scientific Information®, Philadelphia, PA, USA.)]
**CYCLOSPORINE AS DIABETES TREATMENT STIRS DEBATE**
... "There is growing evidence that [diabetes] is an autoimmune disease, one in which the immune system somehow goes awry and attacks the body's own tissues, in this case the insulin-secreting cells located in the islets of Langerhans of the pancreas. . . . Calvin Stiller, John Dupré and their colleagues [U. Hosp., London, Ontario] have now treated more than 40 patients, all of whom had been diagnosed as having insulin-dependent diabetes . . . with the immunosuppressive drug cyclosporine. The treated patients experienced, Dupré says, 'an unexpectedly high reduction in insulin requirements'. . . . Despite the success of the cyclosporine study, questions have been raised about giving immunosuppressive drugs to insulin-dependent diabetics at this time. Aldo Rossini [U. Massachusetts Medical Sch., Worcester] says, 'I feel that the potential ill effects of the immunosuppression outweigh the disease.' The potential ill effects depend on the particular immunosuppressive regimen used but among the more common and serious are increased susceptibility to infections and to cancer." [(Jean L. Marx in *Science* 225(4668):1381-3, 21 Sep. 84) (Reproduced with permission from Press Digest, *Current Contents®*, No. 50, December 10, 1984, p. 19. Published by the Institute for Scientific Information®, Philadelphia, PA, USA.)]
**OLD AGE MEMORY GOOD IN PARTS**
... "To investigate age differences in everyday memory we constructed a 'practical memory questionnaire,' similar to one devised by Maryanne Martin [U. Oxford]. We asked subjects to rate their memory ability for the kind of things people need to remember in daily life on a 5-point scale from 1 (Very Poor). . . . to 5 (Very Good). The questionnaire included 26 items querying memory for prices, names of people, dates of birthdays, shopping lists, articles in newspapers, TV programmes, conversations, faces, childhood events, addresses, routes, answering letters, and so on. One hundred and fifty-seven people completed the questionnaires—a young group aged 20-39, a middle-aged group aged 40-59, and an elderly group aged over 60. . . . The age groups differed little in their memory for factual and personal information. . . . But for names and numbers the age differences are marked: the elderly group consistently rated their ability as poorer. The decrement with age is most evident for telephone numbers, postcodes, and names of acquaintances." [(Gillian Cohen & Dorothy Faulkner (Open U., UK) in *New Scientist* 104(1425): 49-51, 11 Oct 84) (Reproduced with permission from Press Digest, *Current Contents®*, No. 52, December 24, 1984, p.15. Published by the Institute for Scientific Information®, Philadelphia, PA, USA.)]
**INDUSTRY GROWTH VS. AIR POLLUTION CONTROL**
... "In a major victory for the Reagan administration, the Supreme Court . . . upheld a controversial environmental policy allowing industries to expand in regions with the dirtiest air, even if it results in an increase in pollution. The Environmental Protection Agency [EPA] regulation allows a company to add a polluting operation—a boiler, for example—to a plant so long as the company reduces emissions in some other part of its facility. The plant's net pollution is not allowed to increase by more than what the EPA considers a minimal amount. Challenged by environmental organizations, the policy was struck down by an appeals court panel . . . as a breach of the Clean Air Act's mandate for improved air quality in the country's industrialized sections. However, the Supreme Court said the policy was a proper accommodation between the competing congressional objectives of clean air and industrial growth." [(Fred Barbash in *Washington Post*, 26 June 84, p. A1, A5) (Reproduced with permission from Press Digest, *Current Contents®*, No. 42, October 15, 1984, p. 15. Published by the Institute for Scientific Information®, Philadelphia, PA, USA.)]
---
SURVEYING THE LANDSCAPE AS TECHNOLOGY REVOLUTIONIZES MEDIA COVERAGE OF APPELLATE COURTS
Howard J. Bashman*
Just as technology has tremendously changed the manner in which appellate lawyers, judges, and their staffs perform legal research over the past twenty-five years, moving from books and physical law libraries to online legal research services and access via computers, hand-held cellular devices, and the like, so too has technology revolutionized the manner in which many people access nearly every other type of information, including news coverage.
If I wanted to read an article appearing in *The New York Times* on any day in 1992, and if I was fortunate enough to have access to a computer that was connected to the internet—probably by a slow-speed dialup service over a telephone modem from home—I could eventually manage to download and read the article online. But the far more common way to see what was in *The New York Times* in 1992 was to purchase a copy from a newsstand.

*Howard J. Bashman, an appellate lawyer who operates an appellate litigation boutique based in Willow Grove, Pennsylvania, writes a monthly column on appellate issues for *The Legal Intelligencer*, Philadelphia’s daily newspaper for lawyers. Bashman is widely known for his appellate blog, *How Appealing*, hosted by Breaking Media at http://appellateblog.com/. In more than fifteen years of blogging about appellate-court decisions and related news, Bashman has observed first-hand the seismic shifts in mainstream media coverage of appellate-court rulings that are the focus of this article.
More recently, the internet has become ubiquitous, readily available at high speeds on my computer at work, on my computer at home, on my cell phone, and perhaps even on my television. If I want to see what’s in today’s edition of *The New York Times*, I don’t drive to a drugstore or newsstand or library to purchase or peruse a copy of the print edition of that newspaper. Instead I go to www.nytimes.com and click on the link for “Today’s Paper,” which will bring me directly to a list of every article appearing in multiple editions of today’s newspaper, with full online access to their content.\(^1\)
Now that we can access nearly the full contents of pretty much any newspaper in the world on our cell phones, the number of people who continue to purchase or subscribe to home delivery of an actual print copy of a newspaper continues its steady decline. Indeed, not only has online largely replaced paper when it comes to accessing the contents of newspapers, but popular apps such as Facebook and Twitter tend to be the way in which large numbers of people learn what is happening, instead of directly visiting a newspaper’s web site.
The consequences of the digitalization of news coverage are many. Across the nation over the past decade or so, some major newspapers have closed down (see Denver’s *Rocky Mountain News*\(^2\)) or abandoned print for online-only coverage (see Seattle’s *Post-Intelligencer*\(^3\) and *The Ann Arbor News*\(^4\)). Numerous other newspapers—such as *The Birmingham News* in Alabama,\(^5\) *The Plain Dealer* in Cleveland,\(^6\) *The Harrisburg Patriot–News* here in Pennsylvania,\(^7\) and *The New Orleans Times-Picayune*\(^8\)—have managed to remain alive by decreasing the number of days per week on which they publish print editions. The loss of classified advertising to free online alternatives, such as Craigslist, and the loss of print readers (and with them the traditional advertisers who have flocked to follow digital-edition readers to online options) have posed tremendous financial challenges to the continued existence of even the largest and most successful newspapers in the United States. The newspapers that have managed to survive have made large staff cuts, reducing the number of reporters and editors, and often the most experienced reporters and editors have been at greatest risk of losing their jobs because their salaries and benefits are so much greater (and thus costlier to the employer) than those of the brand-new reporters and editors who replace them.\footnote{See, e.g., Birmingham Reductions, \textit{supra} note 5 (reporting that “more than 100 employees were laid off, roughly a quarter of the total workforce,” when three jointly held Alabama papers substituted a more robust online presence for some of their print editions and that “[t]he majority of the layoffs—about 60—were reporters, editors, photographers, copy editors and others in the newsroom,” while also noting that “the loss of institutional knowledge with the departures of many veteran journalists is a challenge”).}

---

1. *The New York Times* is not alone in offering this sort of comprehensive online experience. *See, e.g.*, Plain Dealer Staff, *Dear Readers: Information about The Plain Dealer’s Delivery Schedule* (May 22, 2013, 7:06 AM CDT), http://www.cleveland.com/metro/index.ssf/2013/05/dear_readers_information_about.html [hereinafter *Plain Dealer Announcement*] (noting, in explanation of *Plain Dealer’s* move to fewer daily issues and enhanced digital offerings, that “[w]ith our updated digital edition, you can read The Plain Dealer anytime, anywhere,” that “[i]t is an exact digital replica of the morning’s paper in the page-by-page format you enjoy, delivered daily to your desktop or mobile device,” that it “is faster, includes enhanced search capabilities, and allows you to quickly scan headlines and section fronts,” and that it “offers . . . breaking news from our reporters as well as real-time information,” and also reminding readers that copies of the paper edition would still be available “at over 2,000 locations every day”).

2. *See, e.g.*, Richard Pérez-Peña, *Rocky Mountain News Fails to Find Buyer and Will Close*, N.Y. TIMES (Feb. 26, 2009), http://www.nytimes.com/2009/02/27/business/media/27paper.htm (reporting that “[t]he Rocky Mountain News would, after nearly 150 years of existence,” publish its last edition the next day, and that the *News*, “founded in 1859 . . . claim[ed] to be not just the oldest newspaper in the state, but the oldest continuously operating business”).

3. *See* Dan Richman & Andrea James, *Seattle P-I to Publish Last Edition Tuesday*, SEATTLE POST-INTELLIGENCER (Mar. 16, 2009), http://www.seattlepi.com/business/article/Seattle-P-I-to-publish-last-edition-Tuesday-1302597.php (reporting that the “146-year old newspaper, Seattle’s oldest business,” would end publication but “would maintain seattlepi.com, making it the nation’s largest daily newspaper to shift to an entirely digital news product”); *see also* Seattlepi.com, seattlepi.com (online-only edition of paper including, as of July 2017, sections focused on local news, national and international news, business, sports, arts and entertainment, lifestyle news, travel, comics, education, and real estate).

4. *See* Stefanie Murray, *Ann Arbor News to Close in July*, MLIVE (Mar. 23, 2009, 9:18 AM EDT), http://www.mlive.com/news/ann-arbor/index.ssf/2009/03/ann_arbor_news_to_close_in_jul.htm (reporting that the *News* had been “the city’s daily newspaper since 1835” and that a new web-based media company, AnnArbor.com, would “publish a print edition twice a week” beginning later in 2009); *see also* MLiveAnnArbor, http://www.mlive.com/ann-arbor/ (online edition of paper offering, as of July 2017, state and local news).

5. *See* Dawn Kent Azok, *Many Birmingham News Staffers Depart as Paper Ceases Daily Publication, Moves to Three Days a Week*, AL.COM (Sept. 28, 2012, 9:17 PM CST), http://blog.al.com/businessnews/2012/09/birmingham_news_staffers_depar.html (reporting that the *News* would cease daily publication “after 125 years” but that it and its related papers in Huntsville and Mobile would be “published on Sundays, Wednesdays and Fridays,” and that “there [would] be a greater emphasis on digital news coverage at their online home, al.com”) [hereinafter Birmingham Reductions]; *see also* AL.com, www.al.com (online edition of all three papers, offering, as of July 2017, primarily state and local news).

6. *See, e.g.*, Plain Dealer Announcement, *supra* note 1 (explaining that the paper was about to reduce home delivery to three days a week, “with larger news sections and expanded local coverage: a Wednesday edition enriched with more food and dining coverage, a Friday edition with Northeast Ohio’s most comprehensive blueprint for entertainment, and a Sunday edition filled with even more arts, travel, opinion, sports and news,” and that “[a] full subscription of three premium days of home delivery will include access to our new digital edition seven days a week plus the Saturday home-delivered bonus edition [that includes sections focused on autos, high-school sports, and Ohio State football]”).

7. *See* Ted Sickler, *New Patriot-News Will Offer a Sunday Experience Three Days a Week*, PennLive (Dec. 22, 2012, 12:57 AM EST), http://www.pennlive.com/midstate/index.ssf/2012/12/pa_media_group_1.html (reporting that starting on January 1, 2013, the paper would be “delivered to subscribers’ homes on Tuesdays, Thursdays and Sundays,” and promising both that “[w]hether you read The Patriot-News in print or online at PennLive.com, you’ll find that the depth and reach of coverage will remain the same,” and that “the company’s digital-first approach will create richer stories online and in print”).

8. David Carr & Christine Haughney, *New Orleans Newspaper Scales Back in Sign of Print Upheaval*, N.Y. TIMES (May 25, 2012), http://www.nytimes.com/2012/05/25/business/technology/new-orleans-newspaper-scales-back-in-sign-of-print-upheaval.html (reporting that the *Times-Picayune* would move to a three-day-a-week schedule, with a Thursday edition featuring “more local news and features” than the current Monday edition).
These and the many other consequences of the digitalization of news coverage of course extend to the manner in which the news media cover appellate-court rulings.
The decline in traditional newspaper coverage of appellate courts, and of civics topics in general, is difficult to measure empirically. It nevertheless seems an unavoidable consequence of declining newspaper readership and of shrinking coverage of local government entities, including state and federal courts, as reporters primarily assigned to cover the business of the courts for general-interest newspapers have disappeared in many areas.\footnote{While recognizing that it is all but impossible to empirically study the extent to which job losses and other cutbacks in the journalism industry have affected journalistic coverage of appellate courts, Christopher J. Davey—a former newspaper journalist who later worked as the Director of Public Information for the Supreme Court of Ohio—collected many relevant and informative details in a must-read article. Christopher J. Davey, \textit{The Future of Online Legal Journalism: The Courts Speak Only Through Their Opinions?} 8 I/S 575, 583–84 (2013) (noting that over 40,000 journalism jobs had disappeared by the middle of 2012 and that insiders had by then observed that “the ranks of professional journalists dedicated to covering the courts and legal affairs have been decimated”), 584 (discussing both the disappearance of beat reporters whose responsibilities included “check[ing] in daily” on state supreme courts and the formation of . . .}
Despite the gloomy economic reality for newspapers and the unlikelihood of any turnaround in that regard, at present a substantial number of experienced journalists who regularly cover the courts continue to ply their trade. Although space limitations prevent me from listing all who deserve to be mentioned here, I must include Bob Egelko of *The San Francisco Chronicle*,\textsuperscript{11} Maura Dolan of *The Los Angeles Times*,\textsuperscript{12} Bill Rankin of *The Atlanta Journal-Constitution*,\textsuperscript{13} Chuck Lindell of *The Austin American-Statesman*,\textsuperscript{14} Kent Faulk of *The Birmingham News*,\textsuperscript{15} and Jim Provance of *The Toledo Blade*.\textsuperscript{16}

---

11. See Bob Egelko, *Courts Reporter*, SFCHRONICLE.COM, http://www.sfchronicle.com/author/bob-egelko/ (noting that Egelko’s beat includes “state and federal courts in California, the Supreme Court and the State Bar,” and that he has covered “the passage of Proposition 13 in 1978, the appointment of Rose Bird to the state Supreme Court and her removal by the voters, the death penalty in California and the battles over gay rights and same-sex marriage”).

12. See Maura Dolan, *Writer*, LATIMES.COM, http://www.latimes.com/la-bio-mauradolan-staff.html (noting that Dolan “covers the California Supreme Court and the U.S. 9th Circuit Court of Appeals,” and collecting recent articles).

13. See Bill Rankin, *Reporter for Enterprise*, AJC.COM, http://www.ajc.com/news/state-regional/bill-rankin/Iw5wIqT8YSqQHL6m14V68I/ (noting that Rankin joined the paper in 1989, and that “[f]or most of his time at the AJC, Bill has covered criminal justice, legal affairs and Georgia and federal courts”).

14. See Chuck Lindell, *State Capitol Reporter*, STATESMAN.COM, http://www.statesman.com/online/contacts/chuck-lindell/wkKwTLAbogmwBOBJvdPaDN/ (noting that Lindell “covers legal issues, appellate courts, politics, the Texas Senate and criminal justice”).
All of these newspaper reporters, and various others like them, regularly cover the rulings of the federal and state appellate courts in the places where their newspapers are located. And even though, thanks to the internet, their news coverage of those courts is now more widely available online than perhaps ever before, declining newspaper circulation and readership have unquestionably curtailed the number of local readers who are consuming that coverage.
Of course, some exceptions exist to the overall trend of declining newspaper coverage of local courts. For example, *The Washington Post* has recently assigned experienced journalist Ann Marimow to cover the D.C. Circuit, the Fourth Circuit, and the state-level appellate courts of Virginia and the District of Columbia.\textsuperscript{17} And Zoe Tillman, who covered appellate courts for *The National Law Journal*—a publication whose contents sit largely behind a subscription-only paywall—recently began covering appellate courts nationwide for BuzzFeed, an online-only news publisher whose content is freely available to all.\textsuperscript{18}
Moreover, thanks to Twitter, news of newly issued federal appellate court decisions is now disseminated earlier than ever, often as promptly as the attorneys in a case are advised of the rulings, and even before the opinions become publicly available on the courts’ own sites.\textsuperscript{19} Journalists can register for alerts from a federal appellate court’s electronic filing system to be notified of new filings, including the court’s filing of particular opinions, just as quickly as the lawyers in a case will be notified. Journalists such as Tillman,\textsuperscript{20} along with Brad Heath of \textit{USA Today}\textsuperscript{21} and Mike Scarcella of \textit{The National Law Journal},\textsuperscript{22} frequently tweet out news and links to new appellate-court rulings before anyone else has done so.

---

15. See Kent Faulk, AL.COM, http://connect.al.com/staff/krfaulk/ (noting that Faulk has been a reporter since 1985 and now covers “the courts beat” for both the *Birmingham News* and the Alabama Media Group).

16. See Contact Us, TOLEDOBLADE.COM, http://www.toledoblade.com/contactus (noting that Provance is the *Blade’s* Columbus Bureau Chief); http://www.toledoblade.com/JimProvance (collecting recent articles).

17. See Ann E. Marimow, Reporter, Washington, D.C., WASHINGTONPOST.COM, https://www.washingtonpost.com/people/ann-e-marimow/?utm_term=.1104909b215f (indicating that Marimow “covers legal affairs in the District and Maryland for the Washington Post,” and collecting her recent stories).

18. See Zoe Tillman, BUZZFEED.COM, https://www.buzzfeed.com/zoetillman (noting that Tillman is a BuzzFeed News reporter based in Washington, D.C., and collecting recent stories).

19. To view just a few examples from the Twitter feed of Mike Scarcella, a reporter for *The National Law Journal*, see https://twitter.com/MikeScarella/status/885505774704832512 (“Twitter account—purportedly belonging to a judge—that follows US attorney is not grounds for recusal: appeals court”), https://twitter.com/MikeScarella/status/883323145788.
At the same time that technology has resulted in a decrease in newspaper coverage of appellate-court rulings, the internet has made access to those rulings and to appellate oral arguments more freely available than ever before. Every federal appellate court and state appellate court makes its published opinions freely accessible online. Nearly all federal appellate courts, and many state appellate courts, also make oral-argument audio freely available online. The Ninth Circuit livestreams its oral arguments on YouTube,\textsuperscript{23} and some state appellate courts also provide live and archived online access to oral-argument recordings.\textsuperscript{24}
Hundreds of thousands—if not millions—of interested people watched online and on television in February 2017 when the Ninth Circuit livestreamed its first oral argument concerning President Trump’s so-called travel ban.\textsuperscript{25} And despite the tremendous interest in the subject matter of the travel-ban oral argument, it may have been one of the least visually interesting oral arguments of all time, with the attorneys participating remotely by telephone and the judges themselves also participating from off-camera locations.\textsuperscript{26}

---

20. See @zoetillman, https://twitter.com/ZoeTillman (describing Tillman as a reporter “covering courts and justice for BuzzFeedNews” and, of course, cataloguing her tweets).

21. See @bradheath, https://twitter.com/bradheath (describing Heath as an investigative reporter for USA Today who covers “law and justice”).

22. See @mikescarcella, https://twitter.com/mikescarcella (describing Scarcella as a senior editor at @lawdotcom, @TheNLJ and @Legal_Times, and describing his beat as “[f]ederal enforcement,” “regulatory,” and “courts”).

23. See \textit{United States Court of Appeals for the Ninth Circuit}, YOUTUBE, https://www.youtube.com/user/9thcirc/videos (including an unusually sophisticated search function: the first hit produced by the distinctly non-legal term “travel ban” is the audio-only recording of the oral argument in \textit{Washington v. Trump}, no. 17-35105 (9th Cir. Feb. 7, 2017) (motion to stay TRO pending appeal)). Readers will recall that this hearing was conducted telephonically, so there was no courtroom video for the Ninth Circuit to upload to its YouTube channel.

24. The National Center for State Courts has compiled a list of links to the online resources of those states that make their appellate court oral arguments available in this manner. \textit{Appellate Procedure—State Links}, NAT’L CTR. FOR STATE COURTS, http://www.ncsc.org/topics/appellate/appellate-procedure/state-links.aspx?cat=Appellate%20Court%20Oral%20Arguments%20Online.
25. \textit{See, e.g.}, Richard Wolf & Alan Gomez, \textit{Trump’s Travel Ban Court Hearing: Five Takeaways}, USA TODAY.COM (Feb. 7, 2017, 9:23 PM EST), https://www.usatoday.com/story/news/politics/2017/02/07/takeaways-president-donald-trump-court-of-appeals-hearing-hearing-travel-ban/97616664/. Wolf and Gomez reported that the hearing
offered a rare opportunity for the general public to hear the federal court system in action, and listeners far and wide were eager to tune in. A court spokesman said 137,000 people from as far away as France, Germany and Japan listened to the live audio stream on the court’s YouTube channel. David Madden said that was “by far” the largest audience for an oral argument since the court began live-streaming oral arguments two years ago. The feed also was carried live on CNN, MSNBC, Fox News and on USA TODAY’s web site.
\textit{Id.} The news media’s making this important oral argument available in real time illustrates the power—and the civic value—of online news at its best. And yet there remains room for measured analysis released hours or days after an important appellate decision, some of those more reflective assessments even appearing in the print editions of newspapers that still produce them. See, e.g., Adam Liptak, \textit{After the Ruling, and the Tweet, What Comes Next?} N.Y. TIMES, Feb. 11, 2017, at A8 (discussing consequences of travel-ban hearing and decision).
Despite the advantages of real-time reporting, experience shows that the pressure to be first to post a story can lead to errors. The best-known example may be the confusion surrounding early online reporting on the Supreme Court’s Obamacare ruling. E.g., Chenda Ngak, \textit{Getting it Wrong: Media Rushes to Report Supreme Court’s Health Care Decision}, CBSNEWS.COM (June 29, 2012, 4:59 PM), http://www.cbsnews.com/news/getting-it-wrong-media-rushes-to-report-on-supreme-courts-health-care-decision/ (showing screen shots of Breaking News banner on CNN.com website announcing that “[t]he Supreme Court has struck down the individual mandate for health care”; reporting that other online news stories—like the Huffington Post’s assertion “that the Supreme Court had ruled the mandate unconstitutional”—conveyed “the opposite of the court’s opinion”; also reporting that “NPR and Time re-tweeted CNN’s [erroneous] report”; and noting that “[w]ith all of the confusion created by mixed reports, a handful of politicians tweeted out the misinterpreted Supreme Court ruling, later rushing to delete”).
26. See United States Court of Appeals for the Ninth Circuit, \textit{17-35105 State of Washington, et al. v. Donald J. Trump et al.}, YOUTUBE, https://www.youtube.com/watch?v=RPOFowWqFGU (providing oral-argument audio). The motions panel to which the matter was assigned did not sit together: Judge Canby was in his chambers in Phoenix, Judge Clifton was in his chambers in Honolulu, and Judge Friedland was in her chambers in San Jose, California, when they heard the argument by telephone. Similarly, Mr. Flentje, Special Counsel to the Assistant Attorney General, argued for the federal government from Washington, D.C., while Mr. Purcell, Solicitor General of the State of Washington, argued from Olympia, Washington, and his co-counsel from the office of the Minnesota Attorney General (who was on the line, but did not argue) was in St. Paul, Minnesota. Email from Susan V. Gelmis, Chief Deputy Clerk for Operations, U.S. Ct. of App. for the 9th Cir., to Howard J. Bashman, Author, and Nancy Bellhouse May, Editor, J. App. Prac. & Process, \textit{Physical Locations of Participants in Travel-Ban Hearing} (Aug. 11, 2017, 6:49 PM CDT) (copy on file with author). Ms. Gelmis can be heard at the start of the oral-argument recording describing the applicable procedures to counsel. She spent the hearing on the telephone while seated at the courtroom deputy’s desk in Courtroom 1 of the James R. Browning Courthouse in San Francisco. \textit{Id.}

Appellate judges and lawyers with the talent and time to write opinions and briefs in language accessible to people who are not lawyers are thus providing a wonderful service to those members of the general public who take advantage of their technology-facilitated ability to access those briefs and opinions online. Although the writing styles of many judges and lawyers remain afflicted by legalese, one can hope that over time more and more appellate opinions and briefs will be written so that they can be easily understood by intelligent members of the general public.\textsuperscript{27}

To summarize, technological innovations have severely undermined the economic model of the newspaper industry because fewer readers subscribe to and read actual newspapers, preferring to get their news online. As a result, many newspapers have significantly reduced their staffs and cut back on publication schedules, and some have even gone out of business entirely. These changes have had a negative impact on even the most renowned and seemingly strongest newspapers, financially speaking, such as \textit{The New York Times}, \textit{The Washington Post}, \textit{The Los Angeles Times}, and \textit{The Wall Street Journal}.\textsuperscript{28}

Although it appears beyond dispute that traditional newspaper coverage of federal and state appellate-court rulings has decreased as a result of the financial setbacks that the newspaper industry has suffered, those same technological innovations have resulted in faster real-time coverage of appellate-court rulings and faster and easier access to them, in addition to providing direct access to the oral arguments that may have preceded those rulings.

As someone who appreciates high-quality newspaper coverage of appellate-court rulings, I am perhaps more prone than younger lawyers to reminisce about the good old days. Yet, at the same time, I also very much appreciate the speed at which news of appellate-court rulings can be widely disseminated today via Twitter and the ease of accessing oral arguments and appellate rulings in digital form.

When it comes to online coverage of appellate-court rulings, *BuzzFeed* is not the only new entrant in the arena. *Ars Technica* frequently covers rulings involving legal issues touching on technology.\textsuperscript{29} *The Hollywood Reporter* operates a blog titled “THR, Esq.” that covers appellate court rulings affecting the entertainment industry.\textsuperscript{30} Any number of blogs closely cover appellate court rulings that touch on issues of national security.\textsuperscript{31} Professor Hasen’s \textit{Election Law Blog} thoroughly covers cases within its realm.\textsuperscript{32} And various other blogs, including my own \textit{How Appealing},\textsuperscript{33} regularly cover interesting state and federal appellate-court rulings.

---

27. \textit{Cf.} Price Marshall, \textit{Tribute to the Honorable Richard S. Arnold}, 1 J. App. Prac. & Process 199, 203 (1999) (noting that Judge Arnold was mindful of this aspect of the appellate judge’s role, always writing with such care that his opinions were “understandable to the losing party—not only to his lawyer, but also to the litigant himself”).

28. On the other hand, markers of financial health like subscription numbers, profits, and share prices at those very papers have increased dramatically in the first half of 2017. \textit{See, e.g.,} Michael Barthel, \textit{Despite Subscription Surges for Largest U.S. Newspapers, Circulation and Revenue Fall for Industry Overall}, PEWRESEARCH.ORG—FACTTANK (June 1, 2017), http://www.pewresearch.org/fact-tank/2017/06/01/circulation-and-revenue-fall-for-newspaper-industry/ (reporting that “[y]early financial statements show that The New York Times added more than 500,000 digital subscriptions in 2016—a 47% year-over-year rise” and that “[t]he Wall Street Journal added more than 150,000 digital subscriptions, a 23% rise, according to audited statements produced by Dow Jones”; noting as well that Tronc, the parent company of both the \textit{Los Angeles Times} and the \textit{Chicago Tribune}, “saw an 8% decline in advertising revenue and a 4% decline in total revenue, though circulation revenue increased 5%,” and also pointing out that although the \textit{Washington Post} is privately held and does not release financial data, its leaders expect 2017 to be its “third straight year of double-digit revenue growth”); James B. Stewart, \textit{Washington Post, Breaking News, Is Also Breaking New Ground}, NYTIMES.COM (May 19, 2017), https://www.nytimes.com/2017/05/19/business/washington-post-digital-news.html (reporting that the \textit{Post} “has said that it was profitable last year—and not through cost-cutting”; that in May 2017, it “trailed only CNN and the New York Times” among news organizations in unique users and digital page views; that its “digital ad revenue is . . . in excess of $100 million”; and that it expects 2017 to be its “third straight year of double-digit revenue growth”); Natalia Wojcik, \textit{Trump Has Been “Rocket Fuel” for NYT Digital Subscriptions, CEO Says}, CNBC.COM—MARKET INSIDER (May 3, 2017, 4:41 PM EDT), http://www.cnbc.com/2017/05/03/shares-of-new-york-times-surge-after-subscriber-growth.html (reporting of the \textit{New York Times} that “quarterly circulation revenue increased by 11.2 percent,” that “[p]aid subscriptions for digital-only surpassed 2.2 million, up 62.2 percent year over year,” and that “shares . . . were up 21 percent year to date, and have risen about 29 percent over the past 12 months”).

29. See \textit{ArsTechnica–Policy}, https://arstechnica.com/tech-policy (including legal news among topics covered from tech-policy perspective and maintaining archive that includes site’s tech-related legal coverage).

30. See \textit{The Hollywood Reporter–THR Esq.}, http://www.hollywoodreporter.com/blogs/thr-esq (updating Hollywood-related legal coverage frequently and maintaining archive of past coverage).
Even old-line newspapers have entered the law-blog arena. Bill Rankin writes a blog concerning legal issues on AJC.com, the web site of \textit{The Atlanta Journal-Constitution}.\textsuperscript{34} The \textit{Proof and Hearsay} blog of the \textit{Milwaukee Journal Sentinel} offers coverage of appellate court rulings from time to time.\textsuperscript{35} Bob Egelko of \textit{The San Francisco Chronicle} likewise blogs about legal issues on that publication’s web site.\textsuperscript{36} Until recently, WSJ.com operated its own \textit{Law Blog} supplementing \textit{The Wall Street Journal}’s already very strong coverage of federal and state courts.\textsuperscript{37}
\textsuperscript{31} See, e.g., \textit{Just Security}, https://www.justsecurity.org/aboutus (indicating that blog is “based at the Center for Human Rights and Global Justice at New York University School of Law” and characterizing it as “an online forum for the rigorous analysis of U.S. national security law and policy”); \textit{Lawfare}, http://www.lawfareblog.com/ (indicating that blog is published in cooperation with Brookings and characterizing it as focused on “hard national security issues”); \textit{Take Care}, https://takecareblog.com/ (characterizing blog’s mission as “[e]nsuring the President ‘shall take Care that the Laws be faithfully executed’” and indicating that it provides “a platform for incisive legal analysis of a wide range of issues”).
\textsuperscript{32} See \textit{Election Law Blog}, http://electionlawblog.org/ (indicating that blog covers “[t]he law of politics and the politics of law: election law, campaign finance, legislation, voting rights, legislation, redistricting, and the Supreme Court nomination process”).
\textsuperscript{33} See \textit{How Appealing}, http://appellateblog.com/ (describing blog—in light of its May 6, 2002, founding date—as “[t]he Web’s first blog devoted to appellate litigation”). That founding date can be confirmed in the blog’s archive at http://howappealing.abovethelaw.com/2002_05_01_appellateblog_archive.html (scroll down to see archived initial post).
\textsuperscript{34} See \textit{AJC—Bill Rankin’s Legal Brief}, http://legal.blog.ajc.com/.
\textsuperscript{35} See \textit{Journal Sentinel—Proof and Hearsay}, https://www.jsonline.com/blog/proofandhearsay/ (indicating that blog covers “crime, courts and legal issues in Milwaukee and throughout Wisconsin”).
\textsuperscript{36} See \textit{SFGate—Politics Blog, Author: Bob Egelko}, http://blog.sfgate.com/politics/author/begelko/ (collecting Egelko posts); \textit{SFGate—Blog}, http://blog.sfgate.com/stew/author/begelko/ (same).
\textsuperscript{37} See \textit{The Wall Street Journal—Blogs, Law Blog}, https://blogs.wsj.com/law/ (featuring headlined announcement of “The WSJ Law Blog: 2006–2017,” and suggesting at paywall that full article to which headline refers will reveal \textit{Law Blog}’s demise); \textit{see also} Debra Cassens Weiss, \textit{Wall Street Journal’s Law Blog Shuts Down}, ABAJOURNAL.COM (JULY 5, 2017, 11:18 AM CDT), http://www.abajournal.com/news/article/wall_street_journals_law_blog_shuts_down (reporting that “[t]he Wall Street Journal Law Blog has shut down after more than a decade . . . [m]ore than 20,000 posts and five lead writers,” and acknowledging that \textit{Law Blog} had often been “a source for articles appearing at ABAJournal.com”).
For all of these reasons, I am reluctant to opine on whether news coverage of federal and state appellate courts’ rulings is now better or worse than it was fifteen or even twenty-five years ago. The coverage is certainly different, often relying more heavily on wire services such as The Associated Press, Reuters, and Bloomberg News. At the same time, new providers such as Courthouse News Service and Law360.com (whose content is largely behind paywalls), new online outlets associated with longstanding legal-information sources such as the *ABA Journal*’s online edition,\textsuperscript{38} and various regional law-focused publications, such as *The Indiana Lawyer*,\textsuperscript{39} have arisen to fill parts of the gap.

The only thing that is certain about the future is continued change. No doubt many of the concerns raised in this article will seem quaint—if not trivial—ten or fifteen years from now. But the topic will remain of utmost importance. The appellate judiciary depends on the faith and trust of the general public to carry out its important role as an impartial arbiter of public and private disputes.\textsuperscript{40} Whether the general public will continue to have the same amount of faith and trust in the judiciary as has historically been the case, or will have more or less confidence in our legal institutions now that the manner in which news about the judiciary is collected and disseminated has changed so much, remains to be seen.\textsuperscript{41}

---

38. See *ABA Journal*, http://www.abajournal.com/ (offering news and information in categories that include “featured news,” “latest headlines,” “in-depth reporting,” “topics” in “legal technology,” and subscriber access to the digital edition of the print magazine).

39. See *The Indiana Lawyer*, http://www.theindianalawyer.com/ (offering constantly updated legal reporting pertaining to Indiana in categories of “latest news,” “in depth,” and “trial reports,” and also offering access to the contents of the print edition of the paper).

40. See Davey, *supra* note 10, at 589–94 (acknowledging decline in news media, discussing relevant research into decline’s effect on public knowledge about and perceptions of courts and judges, and urging courts to make their processes more transparent and judges to take a more active role in educating the public about the role of the courts).

41. See, e.g., Stephen G. Breyer, *Reflections on the Role of the Appellate Courts: A View from the Supreme Court*, 8 J. App. Prac. & Process 91, 94, 98 (2006) (pointing out that the Supreme Court’s long history has given it “tremendous prestige,” asserting that “the result of that prestige is that people who would otherwise be in the streets fighting one another submit their disputes to the rule of law,” characterizing this acquiescence to the rule of law as “important for the stability of the United States,” and warning that “[p]ersistent attacks pose a problem because although the courts will weather thoughtful criticism of specific judicial opinions, courts cannot survive a constant deluge of negative comments intended to undermine popular support for the entire judiciary”).
To end on what I hope is an encouraging note, if electronic media continues to be the manner in which the vast majority of information is communicated in the future, as all signs suggest it will be, then we should be heartened by the fact that coverage of appellate courts and appellate-court rulings has made the transition from print-only publications to electronic media. Let us hope that electronic media’s coverage of appellate matters will continue to expand and grow so that digital journalists’ knowledge of the appellate courts will come to rival the expertise that existed across the nation back when local and regional newspapers—and not just large national publications—could assign experienced journalists to cover the work of the appellate courts.
---
HYPERSONIC, TURBULENT VISCOUS INTERACTION
PAST AN EXPANSION CORNER
The members of the committee approve the master’s thesis of Behzad Bigdeli.
Frank K. Lu
Supervising Professor
Dale A. Anderson
Donald R. Wilson
HYPERSONIC, TURBULENT VISCOUS INTERACTION
PAST AN EXPANSION CORNER
by
BEHZAD BIGDELI
Presented to the Faculty of the Graduate School of
The University of Texas at Arlington in Partial Fulfillment
of the Requirements
for the Degree of
MASTER OF SCIENCE IN AEROSPACE ENGINEERING
THE UNIVERSITY OF TEXAS AT ARLINGTON
December 1992
ACKNOWLEDGEMENTS
I would like to thank Dr. Frank Lu for his guidance and effort which made this task possible. I thank Dr. Anderson and Dr. Wilson, members of my supervising committee, for sharing their knowledge and assisting me in the thesis process.
I thank my parents Mr. Mostafa Bigdeli and Mrs. Alamtaj Sakhaee for their support. A special thanks to my mother-in-law, Mary Jo Tyson, whose financial support is greatly appreciated.
Finally, I thank my wife, Melissa, for her patience, support and encouragement.
November 20, 1992
ABSTRACT

A study was made of turbulent viscous interaction on a flat plate at zero incidence, a compression corner and an expansion corner at hypersonic speeds. By assuming a pressure law, the boundary layer properties of the flow were obtained by simultaneous solution of a displacement thickness relationship and a coupling equation relating the effects of incidence and displacement thickness to the effective body shape. The tangent-wedge rule was employed to predict the pressure distribution over the flat plate and the compression corner, while a new pressure-law approximation for the expansion corner was proposed. The effect of wall temperature ratio on the displacement thickness ratio was studied, and a new representative value of $n$ in the power law assumption was obtained. The method was extended to turbulent supersonic flows past expansion corners. The results were compared with experimental data for both the hypersonic and supersonic cases.
TABLE OF CONTENTS

ACKNOWLEDGEMENTS
ABSTRACT
LIST OF ILLUSTRATIONS
NOMENCLATURE
1. INTRODUCTION
2. VISCOUS INTERACTION
   2.1 Problem in General
   2.2 Bodies With Sharp Leading Edges
      2.2.1 Displacement Thickness
      2.2.2 Pressure Laws
      2.2.3 Effective Body Shape
3. VISCOUS INTERACTION REGIONS AND PARAMETERS
   3.1 Strong Viscous Interaction Region
   3.2 Weak Viscous Interaction Region
   3.3 Strong and Weak Viscous Interaction Parameters
4. APPLICATION AND DISCUSSION
   4.1 Flat Plate at Zero Incidence
      4.1.1 Strong Solution
      4.1.2 Weak Solution
      4.1.3 Matching of Strong and Weak Solutions
   4.2 Compression Corner
   4.3 Expansion Corner
      4.3.1 Method of Solution
      4.3.2 Displacement Thickness for Expansion Corner
      4.3.3 Pressure Law for Expansion Corner
      4.3.4 Results and Discussion for Expansion Corner
5. CONCLUSIONS AND RECOMMENDATIONS
   5.1 Conclusions
   5.2 Recommendations
APPENDIX
REFERENCES
LIST OF ILLUSTRATIONS

Figure 1. Surface Features of a Blunt Nosed Body
Figure 2. Displacement Thickness Ratio, $n = 8.5$
Figure 3. Flat Plate Viscous Interaction Regions
Figure 4. Flat Plate Pressure Distribution, $M = 9$, $\alpha = 0$
Figure 5. Compression Corner Geometry
Figure 6. Compression Corner Pressure Distribution, $M = 9.22$, $\alpha = 15^\circ$
Figure 7. Expansion Corner Geometry
Figure 8. Expansion Corner Pressure Distribution, $M = 8$, $\alpha = 2.5^\circ$
Figure 9. Boundary Layer Properties, $M = 8$, $\alpha = 2.5^\circ$
Figure 10. Rate of Change of Boundary Layer Properties, $M = 8$, $\alpha = 2.5^\circ$
Figure 11. Expansion Corner Pressure Distribution, $M = 8$, $\alpha = 4.25^\circ$
Figure 12. Expansion Corner Pressure Distribution, $M = 7.35$
Figure 13. Expansion Corner Pressure Distribution, $M = 3$ and $4$, $\alpha = 15^\circ$
NOMENCLATURE

| Symbol | Description |
|--------|-------------|
| $A$ | $\gamma$, constant |
| $a, m$ | constants in $P = a x^m$ |
| $B$ | $\frac{\gamma(\gamma + 1)}{2}$, constant |
| $b, s$ | constants in $P = b \exp(sx)$ |
| $C_\infty$ | constant in linear viscosity-temperature law |
| $K$ | $M_\infty \frac{dy_e}{dx}$, flow parameter |
| $L$ | length of the flat plate to the hinge line, cm |
| $l$ | scale variable |
| $M$ | Mach number |
| $n$ | power law variable |
| $P$ | pressure ratio |
| $Pr$ | Prandtl number |
| $Re$ | unit Reynolds number |
| $r$ | recovery factor |
| $T$ | temperature, K |
| $u$ | velocity, m/s |
| $X$ | normalized form of the physical coordinate $x$ |
| $x$ | distance from the leading edge along the plate, cm |
| $Y$ | Howarth’s transformation variable |
| $y$ | distance normal to the surface, cm |
| $y_e$ | effective body shape |
| $y_w$ | geometric body shape |
| $\alpha$ | corner angle, degree |
| $\gamma$ | ratio of specific heats |
| $\Delta$ | transformed boundary layer thickness |
| $\delta$ | boundary layer thickness, cm |
| $\delta^*$ | displacement thickness, cm |
| $\eta$ | $(5.75 - 1.625 \frac{T_w}{T_o})$, empirical variable |
| $\kappa$ | $(6 - 1.3 \frac{T_w}{T_o})$, empirical variable |
| $\mu$ | absolute viscosity coefficient |
| $\xi$ | dummy variable of integration |
| $\rho$ | density |
| $\bar{\chi}_s$ | strong viscous interaction parameter |
| $\bar{\chi}_w$ | weak viscous interaction parameter |
**Subscripts**
- $b$: bluntness
- $e$: condition at edge of boundary layer
- $I$: inviscid
- $o$: stagnation condition
- $r$: recovery condition
- $t$: turbulent
- $w$: wall condition
- $\infty$: freestream condition
1. INTRODUCTION
In recent years, increasing interest in the development of flight vehicles at extremely high velocities has motivated a large number of research studies. At hypersonic speeds, where the flight velocity is far greater than the speed of sound, the characteristics of the flow can be drastically different from those at supersonic speeds. These hypersonic features have brought a set of new fluid dynamic problems into prominence, such as rarefied-gas effects, dissociation and viscous interaction.
Viscous interaction was first identified by Becker [1], in view of pressure measurements near the leading edge of a wedge. He found that the actual pressure along the wedge is considerably higher than the inviscid pressure obtained from oblique shock theory. Viscous interaction, or pressure interaction [2], can be viewed as the mutual interaction between the external inviscid flow field and the boundary layer around a body. On a flat plate, for example, the boundary layer grows as $M_\infty^2/\sqrt{Re_x}$ [3]. Therefore, relatively speaking, in subsonic and most supersonic flows, where the Reynolds number is much larger than the square of the Mach number, the effect of boundary layer growth in changing the effective body shape and the actual pressure distribution is very small and can generally be ignored. In hypersonic flight, viscous interaction becomes significant. This is due to the fact that hypersonic flow contains a large amount of kinetic energy, some of which will be dissipated once the flow is slowed
by viscous effects within the boundary layer. The kinetic energy is converted into the internal energy of the gas, a process called viscous dissipation. The combined effects of compression and energy dissipation impart a considerable temperature rise. This temperature rise causes the coefficient of viscosity to increase. Also, from the equation of state it can be seen that the density decreases with increasing temperature, since the pressure is approximately constant in the direction normal to the surface. The reduced density requires a larger boundary layer thickness to maintain the same mass flow through the boundary layer. Therefore, the boundary layer grows rapidly. This thick boundary layer severely distorts the external inviscid flow field, which in turn modifies the boundary layer growth. The prediction of such mutual interactions is therefore necessary to correctly design a shape which satisfies certain performance requirements. For certain flows, this task is accomplished by including the effect of the boundary layer displacement thickness in the effective body shape and calculating the corresponding pressure distribution.
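A schematic way to see this thickening mechanism is the following rough scaling argument (an illustrative aside, not a formal result of the present analysis): with the pressure approximately constant across the layer, the perfect-gas equation of state ties the density to the temperature, and a fixed mass flow through the layer then forces the thickness to grow with temperature,

\[
p = \rho R T \approx \text{const} \;\Rightarrow\; \rho \propto \frac{1}{T}, \qquad \dot{m} \approx \rho_e u_e \,\delta \approx \text{const} \;\Rightarrow\; \delta \propto \frac{1}{\rho_e} \propto T_e .
\]

Viscous heating thus lowers the density near the wall and thickens the layer, closing the feedback loop described above.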
There are several examples of viscous interaction which are of interest in both laminar and turbulent boundary layers. Some of those that have been studied analytically include hypersonic flow near the leading edge of a sharp flat plate, unseparated shock boundary layer interaction in turbulent flow, turbulent interaction on curved surfaces and the turbulent flow at a wedge compression or expansion corner. For any shape with a sharp edge, a rapid change of boundary layer growth occurs in the regions of strong favorable or adverse pressure gradient whether the flow is laminar or turbulent. The strong pressure gradient is usually caused by changes of the surface slope, for example, at the hinge line of a corner.
Hayes and Probstein [4] gave a comprehensive account of the viscous interaction problem in their book. Cheng et al. [5] studied laminar hypersonic viscous interaction and were able to demonstrate the mutual effects of surface incidence and boundary layer growth. They adopted Lees’ [6] similarity theory for hypersonic boundary layers on flat plates, and they used the Busemann [7] centrifugal correction to the Newtonian theory. They found parameters governing the flow over any shape with a sharp edge; these parameters reflect the effects of boundary layer growth, incidence, strong viscous interaction and bluntness. Cheng et al. applied the analysis to a set of flat plates at different incidence in laminar flow only. Moreover, the choice of the Newton-Busemann law [7] is appropriate only for strong viscous interaction regions, since it does not tend to the correct downstream value and, as shown by Mohammadian [8] and Cheng and Kirsch [9], it leads to physically unrealistic, highly oscillatory solutions for some curved surfaces.
Sullivan [10] modified Cheng’s method [5] in order to remove the limitations caused by applying the Newton-Busemann pressure law [7]. While a number of approximate pressure rules can be used in place of the Newton-Busemann law [7], Sullivan [10] chose the tangent-wedge approximation. He applied Lees’ [6] boundary layer theory along with the tangent-wedge approximation to study the laminar viscous interaction of the flow around a convex (expansive) corner. Subsequently, Stollery [11] applied both the original Cheng’s method [5] and the modified Sullivan’s method [10] to a wide variety of two dimensional shapes and made some comparisons with experimental data.
Barnes and Tang [12] investigated mass injection and formulated the strong and weak interaction parameters in turbulent flow. Stollery and Bates [13] developed a theory for turbulent flow analogous to that for the laminar case. They showed that the key to the method is to express the turbulent boundary layer growth in terms of an initially unknown pressure distribution $P(x)$. They also pointed out that turbulent viscous interaction on curved surfaces and shock-wave boundary layer interactions are important. Stollery and Bates [13] employed the momentum-integral method of Spence [14] for a more detailed prediction and made comparisons with experimental data obtained by Coleman [15] and Elfstrom [16].
The aim of the present study is to partially reproduce the results obtained by Stollery and Bates [13] for turbulent hypersonic viscous interaction and to extend the analysis to an expansion corner. The possibility of suggesting a new pressure law approximation in place of the tangent-wedge rule approximation for the expansion corner will be investigated. Also, the method is extended to supersonic flow past expansion corners. The results will be compared with the experimental data obtained by Bloy [17] and Lu and Chung [18] for the hypersonic case and those obtained by Goldfeld [19] for the supersonic case.
2. VISCOUS INTERACTION
2.1 Problem in General
Hypersonic viscous flow past an arbitrary body is affected by three different effects [20]: the incidence or geometric body shape $y_w$, the boundary layer growth or displacement thickness $\delta^*$, and the bluntness of the shape or entropy layer $y_b$. The entropy layer is defined as a region of strong entropy gradients which is created in the nose region of a blunt-nosed body. The boundary layer grows inside the entropy layer and is affected by it. This interaction is also called vorticity interaction [3].
At hypersonic speeds the boundary layer is thick. Therefore, the effective body shape $y_e$ can no longer be represented by the surface of the body. Instead, the effective body shape should be considered as the surface of the body plus the displacement thickness. Figure 1 illustrates the geometric body shape, displacement thickness, entropy layer and the effective body shape and their notations for a blunt nosed body.
2.2 Bodies With Sharp Leading Edges
In this study the emphasis is on bodies with a sharp leading edge, and bluntness is ignored. In other words, the effective body shape is the same as the entropy layer, i.e. $y_e = y_b$. Hypersonic flow past bodies with sharp leading edges can be represented by three equations. These equations are the boundary
Figure 1: Surface Features of a Blunt Nosed Body.
layer equation, the inviscid flow equation and a coupling equation. Mathematically these equations can be written functionally as
\[ \delta^* = f_1(P) \tag{1} \]
\[ P = f_2(y_e) \tag{2} \]
\[ y_e = f_3(\delta^*) \tag{3} \]
where \( P \) is an initially unknown pressure distribution \( \frac{P_e(x)}{P_\infty} \) at position \( x \). The pressure is assumed constant across the boundary layer so that \( P_e = P_w \), but due to viscous interaction the pressure at the outer edge of the boundary layer \( P_e \) is not the same as the freestream pressure \( P_\infty \). From this very general statement of the problem one can conclude that a complete solution for the flow past a given shape can be obtained once the boundary layer growth, the external pressure distribution and the effective body shape are known. Many different choices for these three equations are possible. Rather simple forms of these equations will be employed in order to obtain a faster solution.
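Although no computer implementation is given in the text, the circular structure of equations (1)-(3) is naturally solved by fixed-point (Picard) iteration on the pressure. The sketch below is our own minimal illustration of that structure; the closures `f1`, `f2` and `f3` are hypothetical placeholders, not the specific relations adopted later.

```python
# Minimal sketch (ours, not from the text) of the coupled system (1)-(3)
# solved by fixed-point (Picard) iteration on the pressure. The closures
# f1, f2, f3 stand in for the particular boundary layer law, pressure
# law and coupling equation chosen later in the analysis.

def solve_interaction(y_w, f1, f2, f3, tol=1e-8, max_iter=200):
    """Iterate P -> delta* -> y_e -> P at one station until P converges."""
    P = 1.0                                  # start from the freestream value
    for _ in range(max_iter):
        delta_star = f1(P)                   # eq. (1): boundary layer growth
        y_e = f3(y_w, delta_star)            # eq. (3): effective body shape
        P_new = f2(y_e)                      # eq. (2): inviscid pressure law
        if abs(P_new - P) < tol:
            return P_new, delta_star
        P = P_new
    raise RuntimeError("pressure iteration did not converge")

# Example with trivial illustrative closures:
P, d = solve_interaction(
    y_w=0.0,
    f1=lambda P: 0.05 / P ** 0.2,            # thinner layer at higher pressure
    f2=lambda y_e: 1.0 + 0.5 * y_e,          # pressure rises with deflection
    f3=lambda y_w, d: y_w + d,               # y_e = y_w + delta*
)
```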
### 2.2.1 Displacement Thickness
A general expression for displacement thickness for turbulent flows was obtained by Stollery and Bates [13] based on the momentum-integral relation. They adopted the analysis given by Spence [14]. The final result is:
\[ \frac{M_\infty \delta^*}{x} = 0.051\, \frac{1 + 1.3\frac{T_w}{T_o}}{\left(1 + 2.5\frac{T_w}{T_o}\right)^{\frac{3}{5}}} \left(\frac{M_\infty^9 C_\infty}{Re_x}\right)^{\frac{1}{5}} \frac{\left(\int_0^x P^\eta\, d\xi/x\right)^{\frac{4}{5}}}{P^\kappa} \tag{4a} \]
where
\[ \eta = \frac{1}{7}\left(5.75 - 1.625\,\frac{T_w}{T_o}\right) \tag{4b} \]
\[ \kappa = \frac{1}{7}\left(6 - 1.3\,\frac{T_w}{T_o}\right) \tag{4c} \]
\( M_\infty \) is the freestream Mach number, \( T_w \) is the wall temperature, \( T_o \) is the stagnation temperature, \( C_\infty \) is the constant of proportionality in the linear viscosity-temperature relationship, \( Re_x \) is the Reynolds number based on the distance \( x \) from the leading edge, \( P \) is an initially unknown pressure distribution and \( \xi \) is the dummy variable of integration in \( x \). Equation (4a) describes the growth of the boundary layer displacement thickness \( \delta^* \) in an unknown pressure gradient for any given Mach number and wall temperature ratio. If \( P \) is assumed constant in equation (4a), it can be shown that:
\[ \frac{\delta^*}{x} = 0.051\, \frac{1 + 1.3\frac{T_w}{T_o}}{\left(1 + 2.5\frac{T_w}{T_o}\right)^{\frac{3}{5}}} \left(\frac{M_\infty^4 C_\infty}{Re_x P}\right)^{\frac{1}{5}} \tag{4d} \]
It is easy to deduce from equation (4d) that, for given wall temperature conditions,
\[ \frac{\delta^*}{x} \sim \left(\frac{M_\infty^4 C_\infty}{Re_x P}\right)^{\frac{1}{5}} \tag{4e} \]
Equation (4e) shows that the displacement thickness increases with increasing Mach number, and decreases in an adverse pressure gradient. Further, equation (4d) reveals that the effect of wall temperature on the displacement thickness is very weak, with the displacement thickness $\delta^*$ increasing slightly with increasing wall temperature ratio $\frac{T_w}{T_o}$. Stollery and Bates [13] showed that equation (4e) is only appropriate for flows over flat plates whereas for general surfaces the less approximate relationship, equation (4a), should be used.
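To make the weak wall-temperature dependence concrete, the prefactor in equation (4d) can be evaluated at the two extremes (a worked check added here, not taken from [13]):
\[ 0.051\,\frac{1+1.3(0)}{(1+2.5(0))^{3/5}} = 0.051, \qquad 0.051\,\frac{1+1.3(1)}{(1+2.5(1))^{3/5}} = \frac{0.051 \times 2.3}{3.5^{3/5}} \approx 0.055, \]
a change of less than 10% between a cold wall \( \left(\frac{T_w}{T_o} = 0\right) \) and a hot wall \( \left(\frac{T_w}{T_o} = 1\right) \).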
Spence [14] showed that the analysis of a compressible turbulent boundary layer, as in the laminar case, can be greatly simplified by replacing the physical coordinate $y$ with a similarity variable $Y$. He adopted Howarth's transformation [23]:
$$Y = \int_0^y \frac{\rho}{\rho_e} \, dy$$ \hspace{1cm} (5)
where $\rho$ is the density at height $y$ above the surface and the subscript $e$ denotes the condition at the outer edge of the boundary layer. First, he showed that for a laminar boundary layer on a flat plate, if the product of the density and the coefficient of viscosity is constant the similarity transformation allows the velocity profile to be presented in a form which accounts for the effect of compressibility and shows no explicit dependence on Mach number and Prandtl number, i.e.
$$\frac{u}{u_e} = f\left(\frac{Y}{\Delta}\right)$$ \hspace{1cm} (6a)
where $f$ is a universal function and $\Delta$ is a transformed boundary layer thickness.
\[ \Delta = \int_0^\delta \frac{\rho}{\rho_e}\, dy \tag{6b} \]
For turbulent flows Spence [14] showed that if the velocity profile obeys a power law of the form
\[ \frac{u}{u_e} = \left( \frac{Y}{\Delta} \right)^{\frac{1}{n}} \tag{6c} \]
the results compare well with experimental data. Stollery [22] extended Spence's analysis [14] to obtain a simple expression for the displacement thickness ratio \( \frac{\delta^*}{\delta} \). This analysis is as follows. By definition:
\[ \delta^* = \int_0^\delta \left( 1 - \frac{\rho u}{\rho_e u_e} \right) dy \tag{7a} \]
Using the assumptions given in equations (6a) and (6b), equation (7a) can be written as:
\[ \delta^* = \delta - \int_0^\Delta f\left( \frac{Y}{\Delta} \right) dY = \delta - \Delta J \tag{7b} \]
where
\[ J = \int_0^1 f\left( \frac{Y}{\Delta} \right) d\left( \frac{Y}{\Delta} \right) \tag{7c} \]
Using equation (6c), the integral \( J \) of equation (7c) can be evaluated to yield
\[ J = \int_0^1 \left( \frac{Y}{\Delta} \right)^{\frac{1}{n}} d\left( \frac{Y}{\Delta} \right) = \frac{n}{n+1} \tag{7d} \]
The boundary layer thickness $\delta$ can be written as:
$$\delta = \int_0^\delta dy = \int_0^\Delta \frac{\rho_e}{\rho} dY \quad (8a)$$
Applying the equation of state to equation (8a) gives:
$$\delta = \int_0^\Delta \frac{T}{T_e} dY \quad (8b)$$
Assume that the linear Crocco [23] temperature-velocity relationship holds in the form
$$\frac{T}{T_e} = \frac{T_w}{T_e} + \left( \frac{T_r - T_w}{T_e} \right) \frac{u}{u_e} - \left( \frac{T_r - T_e}{T_e} \right) \left( \frac{u}{u_e} \right)^2 \quad (8c)$$
In this relationship $T_r$ is the recovery temperature, $T_e$ is the temperature at the outer edge of the boundary layer and $T_w$ is the wall or the surface temperature. The recovery temperature is defined as:
$$T_r = T_\infty \left( 1 + r \frac{\gamma - 1}{2} M_\infty^2 \right) \quad (9a)$$
where $r$ is the recovery factor and is a function of the Prandtl number. White [24] gives the recovery factor for laminar and turbulent flows, based on experimental results, as
\[ r = Pr^{\frac{1}{2}} \quad \text{laminar flow} \tag{9c} \]
\[ r \approx Pr^{\frac{1}{3}} \quad \text{turbulent flow} \tag{9d} \]
where \( Pr \) is the Prandtl number. In this analysis a recovery factor of 0.9 is assumed (for air, \( Pr \approx 0.72 \) gives \( r \approx 0.72^{1/3} \approx 0.9 \) in turbulent flow). Using equation (7b) the displacement thickness ratio can be shown to be:
\[ \frac{\delta^*}{\delta} = \frac{\delta - \Delta J}{\delta} = 1 - \frac{\Delta J}{\delta} \quad (10a) \]
Substituting the expressions obtained in equations (7d), (8b) and (9a) in equation (10a) and simplifying (Appendix A) yields the following expression for the displacement thickness ratio:
\[ \frac{\delta^*}{\delta} = 1 - \frac{n+2}{\left(\frac{T_r}{T_e}\right) + \left(\frac{n+2}{n}\right)\left(\frac{T_w}{T_e}\right) + (n + 1)} \quad (10b) \]
Equation (10b) explicitly shows the dependence of the displacement thickness ratio of a hypersonic turbulent boundary layer on the wall temperature ratio. The value of \( n \) can vary from 6 to 10, depending on the Reynolds number: the higher the Reynolds number, the fuller the velocity profile and the larger the value of \( n \). The effect of wall temperature conditions on the displacement thickness ratio is illustrated in figure 2, which plots equation (10b) for four different cases using a representative value of \( n = 8.5 \). It can be seen that the displacement thickness ratio under adiabatic conditions (\( T_w = T_r \)) is slightly higher than the values obtained under cold wall conditions (\( T_w = 0 \)).
Figure 2: Displacement Thickness Ratio, $n = 8.5$.
Also, a comparison of the theoretical values with experimental data of Hopkins et al. [25] for $\frac{T_w}{T_r} = 0.3$ and $0.5$ shows that the trend with $\frac{T_w}{T_r}$ is reflected accurately and the theoretical and the experimental data are in reasonably good agreement.
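As a numerical illustration of equation (10b) (the numbers here are our own, chosen for the example): taking an edge Mach number \( M_e \approx M_\infty = 9 \), \( r = 0.9 \) and \( n = 8.5 \), equation (9a) gives \( \frac{T_r}{T_e} = 1 + 0.9(0.2)(81) \approx 15.6 \). Under adiabatic conditions (\( T_w = T_r \)),
\[ \frac{\delta^*}{\delta} = 1 - \frac{10.5}{15.6 + \frac{10.5}{8.5}(15.6) + 9.5} \approx 0.76, \]
while under cold wall conditions (\( T_w = 0 \)) the wall-temperature term in the denominator vanishes and \( \frac{\delta^*}{\delta} \approx 1 - \frac{10.5}{25.1} \approx 0.58 \), confirming the trend seen in figure 2.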
### 2.2.2 Pressure Laws
The change in pressure can be considered to be the most important consequence of viscous interaction. Therefore, it is important in any analysis dealing with hypersonic viscous interactions to be able to describe the pressure. For a wide range of conditions applying viscous-interaction theory, the tangent-wedge rule has been found to be adequate. Sullivan [10] and Stollery [11] showed that
$$P = 1 + \gamma K^2 \left( \frac{\gamma + 1}{4} + \left\{ \left( \frac{\gamma + 1}{4} \right)^2 + \frac{1}{K^2} \right\}^{1/2} \right)$$ \hspace{1cm} (11a)
where
$$K = M_\infty \frac{dy_e}{dx}$$ \hspace{1cm} (11b)
The tangent-wedge rule describes the static pressure $P$ as a function of the effective body shape $y_e$. The pressure distribution calculated from the tangent-wedge rule must tend asymptotically to the inviscid two-dimensional value. The final inviscid two-dimensional value of pressure is given by:
$$P_I = 1 + \gamma (M_\infty \alpha)^2 \left( \frac{\gamma + 1}{4} \pm \left\{ \left( \frac{\gamma + 1}{4} \right)^2 + \frac{1}{(M_\infty \alpha)^2} \right\}^{1/2} \right)$$ \hspace{1cm} (11c)
where $\alpha$ is the corner angle in radians and $M_\infty$ is the freestream Mach number. The plus and minus signs correspond to the compression and expansion cases respectively. Either sign in equation (11c) can be used to obtain the final inviscid value of the pressure for a flat plate at zero incidence, since for $K = 0$ both yield $P = 1$. Equation (11c) with the minus sign also shows the range of validity of this pressure law for expansive cases. Since a negative value of pressure is not physically possible, the lowest possible value of $P$ is zero, which corresponds to $M_\infty\alpha = -\left(\frac{2}{\gamma(\gamma-1)}\right)^{\frac{1}{2}}$. For air with $\gamma = 1.4$, this corresponds to $M_\infty\alpha = -1.89$. Therefore, the tangent-wedge rule for expansion corner flows is limited, and it is necessary to obtain a more general pressure law which can cover a broader range of $M_\infty\alpha$.
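For reference, the tangent-wedge rule and its expansion limit are easy to evaluate numerically. The following sketch (our own, written from equations (11a)-(11c)) returns the pressure ratio for a given flow deflection parameter \( K \):

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def tangent_wedge(K, gamma=GAMMA):
    """Pressure ratio P = p_e / p_inf from eq. (11a), K = M_inf * dy_e/dx.

    The plus sign applies to compressive turning (K > 0) and the minus
    sign to expansive turning (K < 0), as in eq. (11c).
    """
    if K == 0.0:
        return 1.0                    # surface aligned with the freestream
    q = (gamma + 1.0) / 4.0
    root = math.sqrt(q * q + 1.0 / (K * K))
    sign = 1.0 if K > 0.0 else -1.0
    return 1.0 + gamma * K * K * (q + sign * root)

K_min = -math.sqrt(2.0 / (GAMMA * (GAMMA - 1.0)))  # ~ -1.89 for air
print(tangent_wedge(0.5))    # mild compression: P ~ 1.94
print(tangent_wedge(-1.0))   # expansion:        P ~ 0.21
print(tangent_wedge(K_min))  # validity limit:   P ~ 0
```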
### 2.2.3 Effective Body Shape
The effective body shape $y_e$ can be expressed as
$$y_e = y_w + \delta^* \quad (12a)$$
where $y_e$ can be differentiated with respect to $x$ to yield
$$\frac{dy_e}{dx} = \frac{dy_w}{dx} + \frac{d\delta^*}{dx} \quad (12b)$$
This is only an approximate relationship. The exact relationship for this coupling equation is given by Lees [6] and Reeves [26], which is repeated as follows:
\[
\frac{dy_e}{dx} = \frac{dy_w}{dx} + \frac{d\delta^*}{dx} - (\delta - \delta^*) \frac{d}{dx} \ln(\rho_e u_e) \tag{12c}
\]
For hypersonic boundary layers, as the Mach number tends to infinity, the displacement thickness becomes the same as the boundary layer thickness. Therefore, the last term in equation (12c) becomes small and can be neglected.
3. VISCOUS INTERACTION REGIONS AND PARAMETERS
Viscous interactions can be classified into two distinct regions of strong and weak interactions. Figure 3 shows a schematic of strong and weak viscous interaction regions for flow past a flat plate at zero incidence. These regions are described in the following sections.
3.1 Strong Viscous Interaction Region
The strong viscous interaction region can be defined as a region in which the mutual interaction between the boundary layer and the inviscid flow is strong, i.e. in which $\frac{d\delta^*}{dx}$ and $\frac{dy_e}{dx}$ are large. The incoming freestream flow is therefore significantly disturbed, which in turn causes substantial changes in the boundary layer properties. Compared to the strong effect of the boundary layer displacement thickness, the effect of incidence in this region is small. Strong viscous interactions occur very near the nose, and it is very unlikely that these interactions are turbulent.
3.2 Weak Viscous Interaction Region
In contrast to the strong viscous interaction region, the rates of growth of the displacement thickness $\frac{d\delta^*}{dx}$ and of the effective body shape $\frac{dy_e}{dx}$ in the weak viscous interaction region are small. Streamlines are only slightly deflected into the incoming freestream flow. Hence, freestream distortion is negligible, which results
Figure 3: Flat Plate Viscous Interaction Regions.
in weak changes in the boundary layer properties. In this region the dominant factor affecting the flow is the incidence.
3.3 Strong and Weak Viscous Interaction Parameters
It is necessary to analyze any hypersonic viscous flow problem for the possibility of existence of viscous interactions in order to make sure whether these effects need to be included. Hence, it is important to identify parameters which govern these effects. Tang and Engh [27] suggested a modified form of the laminar Lees-Probstein hypersonic interaction parameter that can be used to express both strong and weak viscous interactions in turbulent flow. This modified form is
\[ \bar{x} = M_\infty^3 \left( \frac{C_\infty}{Re_x} \right)^{\frac{1}{5}} \tag{13} \]
Barnes and Tang [12] showed that equation (13) with a suitable interaction equation overpredicts the induced pressures at high Mach numbers. Therefore, they formulated the strong and weak viscous interaction parameters for turbulent flow. These parameters are
\[ \bar{x}_S = \left( \frac{M_\infty^9 C_\infty}{Re_x} \right)^{\frac{2}{7}} \tag{14} \]
for the strong turbulent viscous interaction and
\[ \bar{x}_W = \left( \frac{M_\infty^9 C_\infty}{Re_x} \right)^{\frac{1}{5}} \tag{15} \]
for the weak turbulent viscous interaction. It is worth noting that viscous interactions are not always the dominant factor in hypersonic flows: if $\bar{x}_W \ll 1$, viscous interaction is unimportant.
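As an order-of-magnitude illustration (the numbers are our own): at \( M_\infty = 9 \) with \( C_\infty \approx 1 \) and \( Re_x = 10^7 \), equation (15) gives
\[ \bar{x}_W = \left(\frac{9^9}{10^7}\right)^{\frac{1}{5}} \approx (38.7)^{\frac{1}{5}} \approx 2.1, \]
so viscous interaction cannot be neglected at this station, whereas at \( Re_x = 10^{12} \) the same expression gives \( \bar{x}_W \approx 0.2 \) and the interaction is weak.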
4. APPLICATION AND DISCUSSION
4.1 Flat Plate at Zero Incidence
A review of the Stollery and Bates [13] work shows that the pressure distribution over a flat plate in hypersonic, turbulent viscous flow can be predicted using equations (4a), (11a) and (12a). The geometric body shape for a flat plate at zero incidence is $y_w(x) = 0$. Therefore, in equation (12a) the effective body shape becomes
$$y_e(x) = \delta^*(x) \quad (16a)$$
which can be differentiated with respect to $x$ to yield
$$\frac{dy_e(x)}{dx} = \frac{d\delta^*(x)}{dx} \quad (16b)$$
Therefore, parameter $K$ given by equation (11b) can be written as
$$K = M_\infty \frac{dy_e(x)}{dx} = M_\infty \frac{d\delta^*(x)}{dx} \quad (17)$$
This equation can be substituted into the tangent-wedge approximation, equation (11a), to yield
\[ P = 1 + \gamma \left( M_\infty \frac{d\delta^*}{dx} \right)^2 \left( \frac{\gamma + 1}{4} + \left\{ \left( \frac{\gamma + 1}{4} \right)^2 + \frac{1}{\left( M_\infty \frac{d\delta^*}{dx} \right)^2} \right\}^{1/2} \right) \] (18)
An explicit relationship for the displacement thickness \( \delta^*(x) \) can be obtained by multiplying both sides of equation (4a) by \( \frac{x}{M_\infty} \) as follows:
\[ \delta^* = a_1 \left( \frac{M_\infty^4 C_\infty}{Re_x} \right)^{1/5} \frac{\left( \int_0^x P^\eta \, d\xi/x \right)^{4/5}}{P^\kappa} x \] (19a)
where
\[ a_1 = 0.051 \frac{1+1.3 \frac{T_w}{T_o}}{\left( 1 + 2.5 \frac{T_w}{T_o} \right)^{3/5}} \] (19b)
The definition of Reynolds number can be used to simplify equation (19a) to
\[ \delta^* = a_1 \left( \frac{M_\infty^4 C_\infty}{Re} \right)^{1/5} \frac{\left( \int_0^x P^\eta \, d\xi/x \right)^{4/5}}{P^\kappa} x^{4/5} \] (19c)
The boundary layer displacement thickness \( \delta^*(x) \) can be calculated using equation (19c) if \( P \) is known [22]. \( P \) is an initially unknown pressure distribution which can be specified knowing the behavior of the displacement thickness. Since two distinct regions of strong and weak viscous interactions are already defined it seems reasonable to assume two different forms for the pressure distribution \( P \) corresponding to these regions. This assumption leads to two sets of solutions for the displacement thickness and the pressure distribution. The first is a strong
solution for the region close to the leading edge of the sharp flat plate and the second is a weak solution further back on the flat plate.
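Before specializing to these two regions, it is worth noting that equation (19c) can be evaluated directly for any prescribed pressure distribution by numerical quadrature. The sketch below is our own illustration; the inputs (cold-wall \( \eta \), \( \kappa \) and \( a_1 \), and the flow values) are chosen for the example:

```python
import numpy as np

def delta_star(x, P, a1, M_inf, C_inf, Re, eta, kappa):
    """delta*(x) from eq. (19c) for sampled arrays x and P = p_e/p_inf.

    Re is the unit Reynolds number (per unit length of x); the running
    integral of P**eta is formed with the trapezoidal rule.
    """
    segments = 0.5 * (P[1:] ** eta + P[:-1] ** eta) * np.diff(x)
    integral = np.concatenate(([0.0], np.cumsum(segments)))
    mean_P_eta = integral[1:] / x[1:]            # (1/x) int_0^x P^eta dxi
    d = (a1 * (M_inf ** 4 * C_inf / Re) ** 0.2
         * mean_P_eta ** 0.8 / P[1:] ** kappa * x[1:] ** 0.8)
    return np.concatenate(([0.0], d))            # delta* -> 0 at the sharp edge

# Flat plate check: with P = 1 this reduces to eq. (27a) below.
x = np.linspace(0.0, 50.0, 501)                  # cm
P = np.ones_like(x)
d = delta_star(x, P, a1=0.051, M_inf=9.0, C_inf=1.0,
               Re=5.5e5, eta=5.75 / 7.0, kappa=6.0 / 7.0)
```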
4.1.1 Strong Solution
If transition occurs in a region very close to the leading edge of a flat plate there will be a strong turbulent viscous interaction region. In this region the pressure $P$ can be assumed to take the form [22]
$$P = ax^m \quad (20)$$
The pressure distribution $P$ in equation (19c) can be replaced by equation (20) and the obtained relationship can be integrated to yield
$$\delta^* = a_1 \left( \frac{M_\infty^4 C_\infty}{Re} \right)^{\frac{1}{5}} \frac{a^{\left(\frac{4}{5}\eta - \kappa\right)}}{(m\eta + 1)^{\frac{4}{5}}} x^{\left(\frac{4}{5} + \frac{4}{5}m\eta - m\kappa\right)} \quad (21a)$$
This equation can be differentiated with respect to $x$ to yield
$$\frac{d\delta^*(x)}{dx} = \left(\frac{4}{5} + \frac{4}{5}m\eta - m\kappa\right) a_1 \left( \frac{M_\infty^4 C_\infty}{Re} \right)^{\frac{1}{5}} \frac{a^{\left(\frac{4}{5}\eta - \kappa\right)}}{(m\eta + 1)^{\frac{4}{5}}} x^{\left(-\frac{1}{5} + \frac{4}{5}m\eta - m\kappa\right)} \quad (21b)$$
Since the rate of change of the displacement thickness is large in the strong viscous interaction region, $K$ is also large and the tangent-wedge approximation of equation (11a) can be expanded as
\[ P = \gamma\, \frac{\gamma + 1}{2} K^2 + \frac{3\gamma + 1}{\gamma + 1} - \frac{8\gamma}{(\gamma + 1)^3} \frac{1}{K^2} + \ldots \quad (22a) \]
\[ \approx \gamma\, \frac{\gamma + 1}{2} K^2 \quad (22b) \]
Equation (21b) can be substituted into equation (22b) and simplifying yields
\[ P = \gamma\, \frac{\gamma + 1}{2} \left( \frac{4}{5} + \frac{4}{5} m\eta - m\kappa \right)^2 a_1^2 \left( \frac{M_\infty^9 C_\infty}{Re} \right)^{\frac{2}{5}} \frac{a^{2\left(\frac{4}{5}\eta - \kappa\right)}}{(m\eta + 1)^{\frac{8}{5}}} x^{\left( -\frac{2}{5} + \frac{8}{5} m\eta - 2m\kappa \right)} \quad (23) \]
Comparing equations (20) and (23) reveals that
\[ m = \frac{8}{5} m\eta - 2m\kappa - \frac{2}{5} \quad (24a) \]
This equation can be solved for \( m \) and the result is
\[ m = \frac{2}{8\eta - 10\kappa - 5} \quad (24b) \]
where \( \eta \) and \( \kappa \) are empirical variables which depend on the wall temperature ratio and are given in equations (4b) and (4c) respectively. It is interesting to note that the value of \( m \) is independent of the wall temperature ratio, since from (4b) and (4c) the combination \( \frac{4}{5}\eta - \kappa = -\frac{1}{5} \) for any wall temperature ratio. For example, under cold-wall conditions \( \eta \) and \( \kappa \) are equal to \( \frac{5.75}{7} \) and \( \frac{6}{7} \) respectively, which corresponds to \( m = -\frac{2}{7} \), while under adiabatic wall conditions \( \eta \) and \( \kappa \) are equal to \( \frac{4.125}{7} \) and \( \frac{4.7}{7} \) respectively but the value of \( m \) remains the same. Assuming that \( a = 1 \) and using the obtained value of \( m = -\frac{2}{7} \), equation (21b) leads to
\[ \frac{d\delta^*}{dx} = \frac{6}{7}\, \frac{a_1}{\left(1 - \frac{2\eta}{7}\right)^{\frac{4}{5}}} \left(\frac{M_\infty^4 C_\infty}{Re}\right)^{\frac{1}{5}} x^{-\frac{1}{7}} \tag{25} \]
Therefore, the strong solution for hypersonic, turbulent flow close to the leading edge of a sharp flat plate at zero incidence for a given set of initial conditions can be obtained by solving equations (18) and (25) simultaneously. The solution includes the rate of growth of the displacement thickness and the pressure distribution.
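A minimal numerical sketch of that solution follows (our own; the conditions echo figure 4 but are illustrative choices, and equation (25) is used in the reconstructed form given above). With \( a = 1 \), equation (25) is explicit in \( x \), so the pressure follows directly from equation (18):

```python
import math

GAMMA, M_INF, C_INF = 1.4, 9.0, 1.0
RE = 5.5e5                    # unit Reynolds number per cm (assumed)
TW_TO = 0.275                 # wall-to-stagnation temperature ratio, as in [13]

a1 = 0.051 * (1 + 1.3 * TW_TO) / (1 + 2.5 * TW_TO) ** 0.6   # eq. (19b)
eta = (5.75 - 1.625 * TW_TO) / 7.0                           # eq. (4b)

def tangent_wedge(K, g=GAMMA):
    """Eq. (11a)/(18) for compressive K > 0."""
    q = (g + 1.0) / 4.0
    return 1.0 + g * K * K * (q + math.sqrt(q * q + 1.0 / (K * K)))

def ddelta_dx_strong(x):
    """Eq. (25): growth rate of delta* in the strong region (a = 1)."""
    coeff = (6.0 / 7.0) * a1 / (1.0 - 2.0 * eta / 7.0) ** 0.8
    return coeff * (M_INF ** 4 * C_INF / RE) ** 0.2 * x ** (-1.0 / 7.0)

for x in (1e-4, 1e-2, 1.0):   # cm from the leading edge
    K = M_INF * ddelta_dx_strong(x)
    print(f"x = {x:8.4f} cm   K = {K:5.2f}   P = {tangent_wedge(K):6.2f}")
```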
4.1.2 Weak Solution
Further back on the flat plate in the weak viscous interaction region the pressure on a sharp flat plate can be predicted using the tangent-wedge approximation for pressure at the edge of the effective surface. The tangent-wedge approximation for this region can be written as
\[ P = 1 + \gamma K + \gamma\, \frac{\gamma + 1}{4} K^2 + \ldots \tag{26a} \]
\[ \approx 1 + \gamma K \tag{26b} \]
Further, since \( K \ll 1 \), \( P \) can be assumed to be equal to unity. Therefore, \( P \) in the general expression obtained for the displacement thickness, equation (19c), can be replaced with unity and the obtained expression can be integrated to yield the following expression:
\[ \delta^* = a_1 \left( \frac{M_\infty^4 C_\infty}{Re} \right)^{\frac{1}{5}} x^{\frac{4}{5}} \tag{27a} \]
This expression is valid for predicting the growth of the displacement thickness over a sharp flat plate in the weak viscous interaction region. Equation (27a) shows that the displacement thickness varies with distance $x$ from the leading edge as
$$\delta^* \propto x^{\frac{4}{5}}$$ \hspace{1cm} (27b)
This can be expressed in terms of the effective body shape as
$$y_e \propto x^{\frac{4}{5}}$$ \hspace{1cm} (28)
which compares well with conventional results for the boundary-layer growth over a sharp flat plate. Also, this result is significantly different from the conventional laminar case [3] where the displacement thickness grows parabolically as
$$\delta^* \propto x^{\frac{1}{2}}$$ \hspace{1cm} (29)
Differentiating equation (27a) with respect to $x$ yields
$$\frac{d\delta^*}{dx} = \frac{4}{5} a_1 \left( \frac{M_\infty^4 C_\infty}{Re} \right)^{\frac{1}{5}} x^{-\frac{1}{5}}$$ \hspace{1cm} (30a)
which can alternatively be expressed in terms of the weak viscous interaction parameter as
$$M_\infty \frac{d\delta^*}{dx} = \frac{4}{5} a_1 \bar{x}_w$$ \hspace{1cm} (30b)
This expression can be substituted in the relationship obtained for the parameter $K$, equation (11b), to give
$$K = M_\infty \frac{dy_e}{dx} = \frac{4}{5} a_1 \bar{x}_w$$ \hspace{1cm} (31)
This value of $K$ can be used in equation (26a) and the result is
$$P = 1 + \gamma \left( \frac{4}{5} a_1 \bar{x}_w \right) + \gamma \frac{\gamma + 1}{4} \left( \frac{4}{5} a_1 \bar{x}_w \right)^2 + \ldots$$ \hspace{1cm} (32a)
This expression for air where $\gamma = 1.4$ and adiabatic wall conditions ($T_w = T_o$) becomes
$$P = 1 + 0.062 \bar{x}_w + 0.00164 \bar{x}_w^2 + \ldots$$ \hspace{1cm} (32b)
which compares well with results given by Barnes and Tang [12]
$$P = 1 + 0.06 \bar{x}_w + 0.00152 \bar{x}_w^2 + \ldots$$ \hspace{1cm} (32c)
Also, under cold-wall conditions ($T_w = 0$), equation (32a) becomes
$$P = 1 + 0.057 \bar{x}_w + 0.0014 \bar{x}_w^2 + \ldots$$ \hspace{1cm} (32d)
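The leading coefficients in equations (32b) and (32d) follow directly from (32a); as a worked check (our arithmetic): with \( \gamma = 1.4 \) and the adiabatic value \( a_1 \approx 0.055 \) from equation (19b),
\[ \gamma\left(\tfrac{4}{5}a_1\right) = 1.4 \times 0.0443 \approx 0.062, \qquad \gamma\,\frac{\gamma+1}{4}\left(\tfrac{4}{5}a_1\right)^2 = 0.84 \times (0.0443)^2 \approx 0.00164, \]
while the cold-wall value \( a_1 = 0.051 \) gives the coefficients 0.057 and 0.0014 of equation (32d).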
In a similar way to that for the strong solution, the weak solution can be obtained for flow farther back on a sharp flat plate by solving equations (18) and (30a) simultaneously for a given set of Mach number and wall temperature ratio.
4.1.3 Matching of Strong and Weak Solutions
A complete solution for the pressure distribution in hypersonic, turbulent viscous flow past a sharp-edged flat plate can be obtained by matching the strong and weak solutions. A comparison of the pressure distribution for a sharp flat plate at zero incidence using the aforementioned method, the pressure law suggested by Barnes and Tang [12] and the results obtained by Stollery and Bates [13] is shown in figure 4, in which $M_\infty = 9$, $T_w = 295$ K, $T_o = 1070$ K, $\text{Re} = 5.5 \times 10^5/\text{cm}$.
The parameter $a_1$ is given by equation (19b) and is solely dependent on the wall temperature ratio. The scaled coordinate $X$ is given by
$$X = \frac{x}{l}$$ \hspace{1cm} (33a)
where
$$l = \frac{M_\infty^9 C_\infty x}{\text{Re}_x},$$ \hspace{1cm} (33b)
or
$$X = \frac{\text{Re}_x}{M_\infty^9 C_\infty}.$$ \hspace{1cm} (33c)
Also, the strong and weak turbulent viscous interaction parameters can be written in
Figure 4: Flat Plate Pressure Distribution, $M = 9$, $\alpha = 0$.
terms of $X$ as
$$\bar{X}_s = \left( \frac{1}{X} \right)^{\frac{2}{7}}$$ \hspace{1cm} (34)
and
$$\bar{X}_w = \left( \frac{1}{X} \right)^{\frac{1}{5}}$$ \hspace{1cm} (35)
Equations (34) and (35) reveal that the strong and weak viscous interaction parameters differ merely by a small change in their exponents. Therefore, these choices of the scaling parameters are appropriate for showing the effect of the turbulent viscous interaction parameters, and they provide the possibility of extending the computation from very close to the leading edge of a flat plate to very far back. Further, the ratio of $X$ to $a_1^5$ clearly identifies the relationship between the viscous parameter and the wall temperature ratio, since $\frac{X}{a_1^5} = \left( \frac{1}{a_1 \bar{X}_w} \right)^5$.
It can be seen that close to the leading edge the pressure has a value much greater than unity, which drops rapidly toward one further downstream. Also, the results obtained here compare well with the results obtained by Stollery and Bates [13]. However, they employed the simpler and more approximate expression for the displacement thickness, equation (4e), while in this study the less approximate and more general form, equation (4a), has been used. Barnes and Tang's results [12] do not agree with the solution obtained here, or with that obtained by Stollery and Bates [13], in the strong viscous interaction region. It is no surprise that Barnes and Tang's solution [12] diverges in the strong viscous interaction region, since their pressure law was derived from weak interaction theory, which is valid only for small values of $\alpha$ and $K$. Also, the series expansions used to derive this pressure law, equation (32c), cannot be expected to hold in this region. It is interesting to note that the adiabatic wall condition was assumed by Barnes and Tang [12] in the derivation of their pressure law, while the wall temperature ratio used by Stollery and Bates [13] was $\frac{T_w}{T_o} = 0.275$. In contrast, Barnes and Tang's [12] weak viscous interaction solution compares well with the results obtained by the other methods.
In practice a region of strong turbulent viscous interaction is very unlikely since the transition Reynolds numbers are so high. Therefore, there is no experimental data available for the strong turbulent viscous interaction region to be compared with the strong solution presented here. However, the weak solution for a flat plate is of particular interest in computing the turbulent viscous interaction past a compression or expansion corner. This will be discussed further in the subsequent section.
The method outlined here can easily be extended to the case of a flat plate with incidence. The only difference is that for a flat plate with incidence the geometric body shape $y_w$ is not zero. Therefore, the effective body shape can no longer be represented by equation (16a). The correct form of this equation for a flat plate with incidence is
$$y_e(x) = \delta^*(x) + x \tan \alpha$$
(36)
in which the geometric body shape is expressed in terms of the distance from the
leading edge of the flat plate and the tangent of the incidence angle.
4.2 Compression Corner
The strong viscous-inviscid interactions which occur at the wing-flap junction of control surfaces and on intake compression ramps are of particular interest in the design of any hypersonic vehicle. The behavior of the boundary layer on these control surfaces can be understood by studying a simple compression corner. When an unseparated turbulent boundary layer at hypersonic speeds encounters a two-dimensional compression corner, some features of the flow can be predicted using the method described in the following section.
The basic steps in predicting the development of the boundary layer and the pressure distribution for a compression corner follow those of the flat plate. First, it is necessary to determine the geometric body shape and the effective body shape. Then an expression for the displacement thickness is formulated and finally the expressions obtained for the displacement thickness and the pressure are solved simultaneously.
A compression ramp is illustrated in figure 5. The geometric body shape for this corner can be mathematically expressed by the following piecewise continuous functions:
\[ y_w(x) = 0 \qquad x \leq L \tag{37a} \]
\[ y_w(x) = (x - L)\tan\alpha \qquad x > L \tag{37b} \]
Figure 5: Compression Corner Geometry.
where \( x \) is the distance from the leading edge of the flat plate, \( L \) is a characteristic length measured from the sharp leading edge of the plate to the hinge line of the corner and \( \alpha \) is the ramp angle. The representation of the geometric body shape by this mathematical model clearly indicates that a compression corner can be viewed as two distinct sections. The first section from the sharp leading edge to the hinge line of the corner can be considered as a flat plate at zero incidence. However, the remaining section, from the hinge line to the end of the corner, can be viewed as a sharp flat plate at incidence. Thus, a full solution of the hypersonic, turbulent viscous flow past a compression corner can be constructed from the solution obtained for these two sections. The solution for the flat plate section can be obtained by employing the method as described in section 4.1 for a given set of initial conditions.
The prediction of the pressure distribution and the displacement thickness over the ramp section can be accomplished through the following procedure. The geometric body shape, equation (37b), for the ramp can be differentiated with respect to \( x \) to yield
\[
\frac{dy_w(x)}{dx} = \tan \alpha
\]
Then the rate of change of the effective body shape, equation (12b), takes the form
\[
\frac{dy_e(x)}{dx} = \frac{d\delta^*}{dx} + \tan \alpha
\] (38)
and the parameter $K$, equation (11b), becomes
\[
K = M_\infty \left( \frac{d\delta^*}{dx} + \tan \alpha \right)
\] (39)
This expression can be substituted in the tangent-wedge rule, equation (11a), to yield the following expression:
\[
P = 1 + \gamma M_\infty^2 \left( \frac{d\delta^*}{dx} + \tan \alpha \right)^2 \left( \left( \frac{\gamma+1}{4} \right) + \left\{ \left( \frac{\gamma+1}{4} \right)^2 + \frac{1}{M_\infty^2 \left( \frac{d\delta^*}{dx} + \tan \alpha \right)^2} \right\}^{1/2} \right)
\] (40)
Now it remains to evaluate the rate of change of the displacement thickness $\frac{d\delta^*}{dx}$. The general expression for the displacement thickness, equation (4a), can still be used. As was the case for the flat plate at zero incidence, $P$ must be specified in this equation before the derivative of the displacement thickness $\delta^*$ with respect to $x$ can be obtained. A logical way of specifying $P$ is to take the approach that was employed for the flat plate. In the region close to the hinge line over the ramp the boundary layer can be expected to grow exponentially. An exponential pressure rise was suggested by Oswatitsch and Wieghardt [28] and was calculated from a perturbation of the boundary layer by Lighthill [29]. Therefore, $P$ can be assumed to take the form
\[ P = b \exp(sx) \tag{41} \]
where $b$ and $s$ are constants. This equation can be substituted in the general expression for the displacement thickness, equation (19c) and differentiated with respect to $x$ to yield
$$\frac{d\delta^*(x)}{dx} = a_1 \left( \frac{M_\infty^4 C_\infty}{Re} \right)^{\frac{1}{5}} b^{\left(\frac{4}{5}\eta - \kappa\right)} \frac{s\left(\frac{4}{5}\eta - \kappa\right)}{\left(s\eta\right)^{\frac{4}{5}}} e^{s\left(\frac{4}{5}\eta - \kappa\right)x} \quad (42)$$
The constants $b$ and $s$ can be evaluated using the value of pressure at the hinge line obtained previously by matching to the upstream, flat plate solution.
Very far downstream over the ramp the rate of change of the displacement thickness is small. Therefore, $P$ can be assumed to be constant as it was the case for the flat plate at zero incidence. Thus, the rate of growth of the displacement thickness can be computed using equation (30a). Therefore, the pressure distribution further downstream on the ramp can be predicted by solving equations (30a) and (40) simultaneously. The obtained solution for this region can be matched with the solution generated for the region very close to the hinge line over the ramp. This final matching completes the solution of the turbulent, hypersonic flow past a compression corner.
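As an illustration of the far-downstream part of this procedure, the sketch below (our own) solves equations (30a) and (40) simultaneously by fixed-point iteration at stations on the ramp. Equation (30a) is generalized here to a constant local pressure \( P \) in the spirit of equation (4d); the conditions echo figure 6 but are our own choices, and the near-hinge exponential region is omitted:

```python
import math

GAMMA, M_INF, C_INF = 1.4, 9.22, 1.0
RE = 5.5e5                       # unit Reynolds number per cm (assumed)
TW_TO = 295.0 / 1070.0           # wall-to-stagnation temperature ratio
ALPHA = math.radians(15.0)       # ramp angle
L = 43.0                         # hinge-line position, cm

a1 = 0.051 * (1 + 1.3 * TW_TO) / (1 + 2.5 * TW_TO) ** 0.6   # eq. (19b)

def tangent_wedge(K, g=GAMMA):
    """Eq. (11a) for compressive K."""
    q = (g + 1.0) / 4.0
    return 1.0 + g * K * K * (q + math.sqrt(q * q + 1.0 / (K * K)))

def ramp_pressure(x, tol=1e-10):
    """P(x) on the ramp, x > L, from eqs. (30a) and (40)."""
    P = tangent_wedge(M_INF * math.tan(ALPHA))   # inviscid first guess
    for _ in range(200):
        # eq. (30a), generalized to constant local P as in eq. (4d):
        ddelta_dx = 0.8 * a1 * (M_INF**4 * C_INF / (RE * P)) ** 0.2 * x ** -0.2
        P_new = tangent_wedge(M_INF * (ddelta_dx + math.tan(ALPHA)))  # eq. (40)
        if abs(P_new - P) < tol:
            return P_new
        P = P_new
    return P

for x in (1.2 * L, 1.5 * L, 2.0 * L):
    print(f"x/L = {x / L:4.2f}   P = {ramp_pressure(x):6.2f}")
```

Consistent with the discussion below, the converged pressure slightly exceeds the inviscid wedge value because the residual growth of the displacement thickness steepens the effective body slope.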
Figure 6 illustrates a comparison of the pressure distribution for turbulent, hypersonic viscous flow past a compression corner obtained by the present study against that of Stollery and Bates [13], with experimental data of Elfstrom [16] and the final inviscid value of the pressure, where $M_\infty = 9.22$, $Re = 5.5 \times 10^5 /cm$, $T_w = 295$ K, $T_o = 1070$ K, $L = 43$ cm, $\alpha = 15$ degrees and the hinge line is at $x = L$. The inviscid pressure distribution was calculated using the inviscid form of the tangent-wedge rule, equation (11c), with plus sign. Figure 6 shows that the
Figure 6: Compression Corner Pressure Distribution, $M = 9.22$, $\alpha = 15^\circ$.
agreement between the theoretical and experimental data for a 15-degree compression corner is remarkably good, considering the simplicity of the mathematical model and the closeness of the experimental flows to incipient separation. Further, this figure reveals that in the region very close to the hinge line over the ramp the pressure rises drastically, following an almost straight line which flattens out further downstream. This behavior of the pressure is best explained in terms of the changes in the parameter $K$. As shown before, $K$ is influenced by the combined effect of incidence and displacement thickness. In the region close to the leading edge, over the flat plate part of the compression corner, the incidence does not have any effect on $K$ while the effect of the displacement thickness is strong. As the boundary layer encounters the sudden change of incidence close to the hinge line over the ramp, the displacement thickness thins dramatically in the adverse pressure gradient, which in turn causes the pressure to rise very rapidly. Further downstream on the ramp the displacement thickness grows thicker again, the effect of incidence becomes predominant, and the pressure flattens out and slightly exceeds the final inviscid two-dimensional value. The reason for this slight excess pressure is that the effective body shape $y_e(x)$ has a greater slope than the corner angle. It can be concluded that for hypersonic flow at a given Mach number past a flat plate or a ramp with a given corner angle, $K$ is a function of one independent variable, namely the rate of change of the displacement thickness. Also, the tangent-wedge rule for pressure is a function of $K$ only. Therefore, the pressure is directly influenced by variations of the rate of change of the displacement thickness.
4.3 Expansion Corner
The interaction between expansions and turbulent boundary layers is a problem which has received little attention. This study aims to provide some insight into such an interaction.
4.3.1 Method of Solution
Figure 7 illustrates a schematic of an expansion corner. Comparison of this figure with the compression corner, figure 5, shows that the difference between these two cases is the turning angle of the flow. The turning angle for the compression corner was taken as positive, while the flow turning angle is negative for the expansion corner. Thus, the mathematical model describing the expansion corner is the same as that for the compression corner, with the angle $\alpha$ taken as negative in the geometric body shape, equation (37b).
|
He told the doctor he was due in the bar-room at eight o'clock in the morning; the bar-room was in a slum in the Bowery; and he had only been able to keep himself in health by getting up at five o'clock and going for long walks in the Central Park.
‘A sea-voyage is what you want,’ said the doctor.
‘Why not go to Ireland for two or three months? You will come back a new man.’
‘I'd like to see Ireland again.’
And he began to wonder how the people at home were getting on. The doctor was right. He thanked him, and three weeks after he landed in Cork.
As he sat in the railway-carriage he recalled his native village, built among the rocks of the large headland stretching out into the winding lake. He could see the houses and the streets, and the fields of the tenants, and the Georgian mansion and the owners of it; he and they had been boys together before he went to America. He remembered the villagers going every morning to the big house to work in the stables, in the garden, in the fields—mowing, reaping, digging, and Michael Malia building a wall; it was all as clear as if it were yesterday, yet he had been thirteen years in America; and when the train stopped at the station the first thing he did was to look round for any changes that might have come into it. It was the same blue limestone station as it was thirteen years ago, with the same five long miles between it and Duncannon. He had once walked these miles gaily, in little over an hour, carrying a heavy bundle on a stick, but he did not feel strong enough for the walk to-day, though the evening tempted him to try it. A car was waiting at the station, and the boy, discerning from his accent and his dress that Bryden had come from America, plied him with questions, which Bryden answered rapidly, for he wanted to hear who were still living in the village, and if there was a house in which he could get a clean lodging. The best house in the village, he was told, was Mike Scully’s, who had been away in a situation for many years, as a coachman in the King’s County, but had come back and built a fine house with a concrete floor. The boy could recommend the loft, he had slept in it himself, and Mike would be glad to take in a lodger, he had no doubt. Bryden remembered that Mike had been in a situation at the big house. He had intended to be a jockey, but had suddenly shot up into a fine tall man, and had become a coachman instead; and Bryden tried to recall his face, but could only remember a straight nose and a somewhat dusky complexion.
So Mike had come back from King’s County, and had built himself a house, had married there were children for sure running about; while he, Bryden, had gone to America, but he had come back; perhaps he, too, would build a house in Duncannon, and—— His reverie was suddenly interrupted by the carman.
‘There’s Mike Scully,’ he said, pointing with his whip, and Bryden saw a tall, finely built, middle-aged man coming through the gates, who looked astonished when he was accosted, for he had forgotten Bryden even more completely than Bryden had forgotten him; and many aunts and uncles were mentioned before he began to understand.
‘You’ve grown into a fine man, James,’ he said, looking at Bryden’s great width of chest. ‘But you’re thin in the cheeks, and you’re very sallow in the cheeks too.’
‘I haven’t been very well lately—that is one of the reasons I’ve come back; but I want to see you all again.’
‘And thousand welcome you are.’
Bryden paid the carman, and wished him ‘God-speed.’ They divided the luggage, Mike carrying the bag and Bryden the bundle, and they walked round the lake, for the townland was at the back of the domain; and while walking he remembered the woods thick and well-forested; now they were wind-worn, the drains were choked, and the bridge leading across the lake inlet was falling away. Their way led between long fields where herds of cattle were grazing; the road was broken—Bryden wondered how the villagers drove their carts over it, and Mike told him that the landlord could not keep it in repair, and he would not allow it to be kept in repair out of the rates, for then it would be a public road, and he did not think there should be a public road through his property.
At the end of many fields they came to the village, and it looked a desolate place, even on this fine evening, and Bryden remarked that the county did not seem to be as much lived in as it used to be. It was at once strange and familiar to see the chickens in the kitchen; and, wishing to re-knit himself to the old customs, he begged of Mrs. Scully not to drive them out, saying they reminded him of old times.
‘And why wouldn’t they?’ Mike answered, ‘he being one of ourselves bred and born in Duncannon, and his father before him.’
‘Now, is it truth ye are telling me?’ and she gave him her hand, after wiping it on her apron, saying he was heartily welcome, only she was afraid he wouldn’t care to sleep in a loft.
‘Why wouldn’t I sleep in a loft, a dry loft!’ ‘You’re thinking a good deal of America over here,’ said he, ‘but I reckon it isn’t all you think it. Here you work when you like and you sit down when you like; but when you’ve had a touch of blood-poisoning as I had, and when you have seen young people walking with a stick, you think that there is something to be said for old Ireland.’
‘You’ll take a sup of milk, won’t you? You must be dry,’ said Mrs. Scully.
And when he had drunk the milk Mike asked him if he would like to go inside or if he would like to go for a walk.
‘Maybe resting you’d like to be.’
And they went into the cabin and started to talk about the wages a man could get in America, and the long hours of work.
And after Bryden had told Mike everything about America that he thought of interest, he asked Mike about Ireland. But Mike did not seem to be able to tell him much. They were all very poor—poorer, perhaps, than when he left them.
‘I don’t think anyone except myself has a five-pound-note to his name.’
Bryden hoped he felt sufficiently sorry for Mike. But after all Mike’s life and prospects mattered little to him. He had come back in search of health, and he felt better already; the milk had done him good, and the bacon and the cabbage in the pot sent forth a savoury odour. The Scullys were very kind, they pressed him to make a good meal; a few weeks of country air and food, they said, would give him back the health he had lost in the Bowery; and when Bryden said he was longing for a smoke, Mike said there was no better sign than that. During his long illness he had never wanted to smoke, and he was a confirmed smoker.
It was comfortable to sit by the mild peat fire watching the smoke of their pipes drifting up the chimney, and all Bryden wanted was to be left alone; he did not want to hear of anyone’s misfortunes, but about nine o’clock a number of villagers came in, and Bryden remembered one or two of them—he used to know them very well when he was a boy; their talk was as depressing as their appearance, and he could feel no interest whatever in them. He was not moved when he heard that Higgins the stonemason was dead; he was not affected when he heard that Mary Kelly, who used to go to do the laundry at the Big House, had married; he was only interested when he heard she had gone to America. No, he had not met her there; America is a big place. Then one of the peasants asked him if he remembered Patsy Carabine, who used to do the gardening at the Big House. Yes, he remembered Patsy well. He had not been able to do any work on account of his arm; his house had fallen in; he had given up his holding and gone into the Poor-House. All this was very sad, and to avoid hearing any further unpleasantness, Bryden began to tell them about America. And they sat round listening to him; but all the talking was on his side; he wearied of it; and looking round the group he recognized a ragged hunchback with grey hair; twenty years ago he was a young hunchback, and, turning to him, Bryden asked him if he were doing well with his five acres.
‘Ah, not much. This has been a poor season. The potatoes failed; they were watery—there is no diet in them.’
These peasants were all agreed that they could make nothing out of their farms. Their regret was that they had not gone to America when they were young; and after striving to take an interest in the fact that O’Connor had lost a mare and a foal worth forty pounds, Bryden began to wish himself back in the slum. And when they left the house he wondered if every evening would be like the present one. Mike piled fresh sods on the fire, and he hoped it would show enough light in the loft for Bryden to undress himself by.
The cackling of some geese in the street kept him awake, and he seemed to realize suddenly how lonely the country was, and he foresaw mile after mile of scanty fields stretching all round the lake with one little town in the far corner. A dog howled in the distance, and the fields and the boreens between him and the dog appeared as in a crystal. He could hear Michael breathing by his wife’s side in the kitchen, and he could barely resist the impulse to run out of the house, and he might have yielded to it, but he wasn’t sure that he mightn’t awaken Mike as he came down the ladder. His terror increased, and he drew the blanket over his head. He fell asleep and awoke and fell asleep again, and lying on his back he dreamed of the men he had seen sitting round the fireside that evening, like spectres they seemed to him in his dream. He seemed to have been asleep only a few minutes when he heard Mike calling him. He had come half-way up the ladder, and was telling him that breakfast was ready.
‘What kind of a breakfast will he give me?’ Bryden asked himself as he pulled on his clothes. There were tea and hot griddle cakes for breakfast, and there were fresh eggs; there was sunlight in the kitchen, and he liked to hear Mike tell of the work he was going to be at in the farm—one of about fifteen acres, at least ten of it was grass; he grew an acre of potatoes, and some corn, and some turnips for his sheep. He had a nice bit of meadow, and he took down his scythe, and as he put the whetstone in his belt Bryden noticed a second scythe, and he asked Mike if he should go down with him and help him to finish the field.
‘It’s a long time since you’ve done any mowing, and its heavier work than you think for. You’d better go for a walk by the lake.’ Seeing that Bryden looked a little disappointed, he added, ‘If you like you can come up in the afternoon and help me to turn the grass over.’ Bryden said he would, and the morning passed pleasantly by the lake shore—a delicious breeze rustled in the trees, and the reeds were talking together, and the ducks were talking in the reeds; a cloud blotted out the sunlight, and the cloud passed and the sun shone, and the reed cast its shadow again in the still water; there was a lapping always about the shingle; the magic of returning health was sufficient distraction for the convalescent; he lay with his eyes fixed upon the castles, dreaming of the men that had manned the battlements; whenever a peasant driving a cart or an ass or an old woman with a bundle of sticks on her back went by, Bryden kept them in chat, and he soon knew the village by heart. One day the landlord from the Georgian mansion set on the pleasant green hill came along, his retriever at his heels, and stopped surprised at finding somebody whom he didn’t know on his property. ‘What, James Bryden!’ he said. And the story was told again how ill-health had overtaken him at last, and he had come home to Duncannon to recover. The two walked as far as the pine-wood, talking of the county what it had been, the ruin it was slipping into, and as they parted Bryden asked for the loan of a boat.
‘Of course, of course!’ the landlord answered, and Bryden rowed about the islands every morning; and resting upon his oars looked at the old castles, remembering the prehistoric raiders that the landlord had told him about. He came across the stones to which the lake-dwellers had tied their boats, and these signs of ancient Ireland were pleasing to Bryden in his present mood.
As well as the great lake there was a smaller lake in the bog where the villagers cut their turf. This lake was famous for its pike, and the landlord allowed Bryden to fish there, and one evening when he was looking for a frog with which to bait his line he met Margaret Dirken driving home the cows for the milking. Margaret was the herdsman’s daughter, and lived in a cottage near the Big House; but she came up to the village whenever there was a dance, and Bryden had found himself opposite to her in the reels. But until this evening he had had little opportunity of speaking to her, and he was glad to speak to someone, for the evening was lonely, and they stood talking together.
‘You’re getting your health again,’ she said, ‘and will be leaving us soon.’
‘I’m in no hurry.’
‘You’re grand people over there; I hear a man is paid four dollars a day for his work.’
‘And how much,’ said James, ‘has he to pay for his food and for his clothes?’
Her cheeks were bright and her teeth small, white and beautifully even; and a woman’s soul looked at Bryden out of her soft Irish eyes. He was troubled and turned aside, and catching sight of a frog looking at him out of a tuft of grass, he said:
‘I have been looking for a frog to put upon my pike line.’
The frog jumped right and left, and nearly escaped in some bushes, but he caught it and returned with it in his hand.
‘It is just the kind of frog a pike will like,’ he said. ‘Look at its great white belly and its bright yellow back.’
And without more ado he pushed the wire to which the hook was fastened through the frog’s fresh body, and dragging it through the mouth he passed the hooks through the hind-legs and tied the line to the end of the wire.
‘I think,’ said Margaret, ‘I must be looking after my cows; it’s time I got them home.’
‘Won’t you come down to the lake while I set my line?’
She thought for a moment and said:
‘No, I’ll see you from here.’
He went down to the reedy tarn, and at his approach several snipe got up, and they flew above his head uttering sharp cries. His fishing-rod was a long hazel-stick, and he threw the frog as far as he could in the lake. In doing this he roused some wild ducks; a mallard and two ducks got up, and they flew toward the larger lake in a line with an old castle; and they had not disappeared from view when Bryden came toward her, and he and she drove the cows home together that evening.
They had not met very often when she said:
‘James, you had better not come here so often calling to me.’
‘Don’t you wish me to come?’
‘Yes, I wish you to come well enough, but keeping company isn’t the custom of the country, and I don’t want to be talked about.’
‘Are you afraid the priest would speak against us from the altar?’
‘He has spoken against keeping company, but it is not so much what the priest says, for there is no harm in talking.’
‘But if you’re going to be married there is no harm in walking out together.’
‘Well, not so much, but marriages are made differently in these parts; there isn’t much courting here.’
And next day it was known in the village that James was going to marry Margaret Dirken.
His desire to excel the boys in dancing had caused a stir of gaiety in the parish, and for some time past there had been dancing in every house where there was a floor fit to dance upon; and if the cottager had no money to pay for a barrel of beer, James Bryden, who had money, sent him a barrel, so that Margaret might get her dance. She told him that they sometimes crossed over into another parish where the priest was not so averse to dancing, and James wondered. And next morning at Mass he wondered at their simple fervour. Some of them held their hands above their head as they prayed, and all this was very new and very old to James Bryden. But the obedience of these people to their priest surprised him. When he was a lad they had not been so obedient, or he had forgotten their obedience; and he listened in mixed anger and wonderment to the priest, who was scolding his parishioners, speaking to them by name, saying that he had heard there was dancing going on in their homes. Worse than that, he said he had seen boys and girls loitering about the road, and the talk that went on was of one kind—love. He said that newspapers containing love stories were finding their
way into the people’s houses, stories about love, in which there was nothing elevating or ennobling. The people listened, accepting the priest’s opinion without question. And their pathetic submission was the submission of a primitive people clinging to religious authority, and Bryden contrasted the weakness and incompetence of the people about him with the modern restlessness and cold energy of the people he left behind him.
One evening, as they were dancing, a knock came to the door, and the piper stopped playing, and the dancers whispered:
‘Someone has told on us; it is the priest.’
And the awe-stricken villagers crowded round the cottage fire, afraid to open the door. But the priest said that if they didn’t open the door he would put his shoulder to it and force it open. Bryden went towards the door, saying he would allow no one to threaten him, priest or no priest, but Margaret caught his arm and told him that if he said anything to the priest, the priest would speak against them from the altar, and they would be shunned by the neighbours.
‘I’ve heard of your goings on,’ he said—‘of your beer-drinking and dancing. I’ll not have it in my parish. If you want that sort of thing you had better go to America.’
‘If that is intended for me, sir, I’ll go back to-morrow. Margaret can follow.’
‘It isn’t the dancing, it’s the drinking I’m opposed to,’ said the priest, turning to Bryden.
‘Well, no one has drunk too much, sir,’ said Bryden.
‘But you’ll sit here drinking all night,’ and the priest’s eyes went toward the corner where the women had gathered, and Bryden felt that the priest looked on the women as more dangerous than the porter. ‘It’s after midnight,’ he said, taking out his watch.
By Bryden’s watch it was only half-past eleven, and while they were arguing about the time Mrs. Scully offered Bryden’s umbrella to the priest, for in his hurry to stop the dancing the priest had gone out without his; and, as if to show Bryden that he bore him no ill-will, the priest accepted the loan of the umbrella, for he was thinking of the big marriage fee that Bryden would pay him.
‘I shall be badly off for the umbrella to-morrow,’ Bryden said, as soon as the priest was out of the house. He was going with his father-in-law to a fair. His father-in-law was learning him how to buy and sell cattle. The country was mending, and a man might become rich in Ireland if he only had a little capital. Margaret had an uncle on the other side of the lake who would give twenty pounds, and her father would give another twenty pounds. Bryden had saved two hundred pounds. Never in the village of Duncannon had a young couple begun life with so much prospect of success, and some time after Christmas was spoken of as the best time for the marriage; James Bryden said that he would not be able to get his money out of America before the spring. The delay seemed to vex him, and he seemed anxious to be married, until one day he received a letter from America, from a man who had served in the bar with him. This friend wrote to ask Bryden if he were coming back. The letter was no more than a passing wish to see Bryden again. Yet Bryden stood looking at it, and everyone wondered what could be in the letter. It seemed momentous, and they hardly believed him when he said it was from a friend who wanted to know if his health were better. He tried to forget the letter, and he looked at the worn fields, divided by walls of loose stones, and a great longing came upon him.
The smell of the Bowery slum had come across the Atlantic, and had found him out in this western headland; and one night he awoke from a dream in which he was hurling some drunken customer through the open doors into the darkness. He had seen his friend in his white duck jacket throwing drink from glass into glass amid the din of voices and strange accents; he had heard the clang of money as it was swept into the till, and his sense sickened for the bar-room. But how should he tell Margaret Dirken that he could not marry her? She had built her life upon this marriage. He could not tell her that he would not marry her... yet he must go. He felt as if he were being hunted; the thought that he must tell Margaret that he could not marry her hunted him day after day as a weasel hunts a rabbit. Again and again he went to meet her with the intention of telling her that he did not love her, that their lives were not for one another, that it had all been a mistake, and that happily he had found out it was a mistake soon enough. But Margaret, as if she guessed what he was about to speak of, threw her arms about him and begged him to say he loved her, and that they would be married at once. He agreed that he loved her, and that they would be married at once. But he had not left her many minutes before the feeling came upon him that he could not marry her—that he must go away. The smell of the bar-room hunted him down. Was it for the sake of the money that he might make there that he wished to go back? No, it was not the money. What then? His eyes fell on the bleak country, on the little fields divided by bleak walls; he remembered the pathetic ignorance of the people, and it was these things that he could not endure. It was the priest who came to forbid the dancing. Yes, it was the priest. As he stood looking at the line of the hills the bar-room seemed by him. He heard the politicians, and the excitement of politics was in his blood again. He must go away from this place—he must get back to the bar-room. Looking up, he saw the scanty orchard, and he hated the spare road that led to the village, and he hated the little hill at the top of which the village began, and he hated more than all other places the house where he was to live with Margaret Dirken—if he married her. He could see it from where he stood—by the edge of the lake, with twenty acres of pasture land about it, for the landlord had given up part of his demesne land to them.
He caught sight of Margaret, and he called her to come through the stile.
'I have just had a letter from America.'
'About the money?'
'Yes, about the money. But I shall have to go over there.'
He stood looking at her, wondering what to say; and she guessed that he would tell her that he must go to America before they were married.
'Do you mean, James, you will have to go at once?'
'Yes,' he said, 'at once. But I shall come back in time to be married in August. It will only mean delaying our marriage a month.'
They walked on a little way talking, and every step he took James felt that he was a step nearer the Bowery slum. And when they came to the gate Bryden said:
'I must walk on or I shall miss the train.'
'But,' she said, 'you are not going now—you are not going to-day?'
'Yes, this morning. It is seven miles. I shall have to hurry not to miss the train.'
And then she asked him if he would ever come back.
'Yes,' he said, 'I am coming back.'
'If you are coming back, James, why don't you let me go with you?'
'You couldn't walk fast enough. We should miss the train.'
'One moment, James. Don't make me suffer; tell me the truth. You are not coming back. Your clothes—where shall I send them?'
He hurried away, hoping he would come back. He tried to think that he liked the country he was leaving, that it would be better to have a farmhouse and live there with Margaret Dirken than to serve drinks behind a counter in the Bowery. He did not think he was telling her a lie when he said he was coming back. Her offer to forward his clothes touched his heart, and at the end of the road he stood and asked himself if he should go back to her. He would miss the train if he waited another minute, and he ran on. And he would have missed the train if he had not met a car. Once he was on the car he felt himself safe—the country was already behind him. The train and the boat at Cork were mere formulae; he was already in America.
And when the tall skyscraper stuck up beyond the harbour he felt the thrill of home that he had not found in his native village, and wondered how it was that the smell of the bar seemed more natural than the smell of fields, and the roar of crowds more welcome than the silence of the lake's edge. He entered into negotiations for the purchase of the bar-room. He took a wife, she bore him sons and daughters, the bar-room prospered, property came and went; he grew old, his wife died, he retired from business, and reached the age when a man begins to feel there are not many years in front of him, and that all he has had to do in life has been done. His children married, lonesomeness began to creep about him in the evening, and when he looked into the firelight, a vague, tender reverie floated up, and Margaret's soft eyes and name vivified the dusk. His wife and children passed out of mind, and it seemed to him that a memory was the only real thing he possessed, and the desire to see Margaret again grew intense. But she was an old woman, she had married, maybe she was dead. Well, he would like to be buried in the village where he was born.
There is an unchanging, silent life within every man that none knows but himself, and his unchanging, silent life was his memory of Margaret Dirken. The bar-room was forgotten and all that concerned it, and the things he saw most clearly were the green hillside, and the bog lake and the rushes about it, and the greater lake in the distance, and behind it the blue line of wandering hills.
The Untilled Field (1914)
Author: Moore, George, 1852-1933
Publisher: London : William Heinemann
Source: Internet Archive, http://www.archive.org/details/untilledfield00mooriala
|
Preventing Over-Fitting during Model Selection via Bayesian Regularisation of the Hyper-Parameters
Gavin C. Cawley
Nicola L. C. Talbot
School of Computing Sciences
University of East Anglia
Norwich, United Kingdom NR4 7TJ
Editors: Isabelle Guyon and Amir Saffari
Abstract
While the model parameters of a kernel machine are typically given by the solution of a convex optimisation problem, with a single global optimum, the selection of good values for the regularisation and kernel parameters is much less straightforward. Fortunately, the leave-one-out cross-validation procedure can be performed, or at least approximated, very efficiently in closed form for a wide variety of kernel learning methods, providing a convenient means for model selection. Leave-one-out cross-validation based estimates of performance, however, generally exhibit a relatively high variance and are therefore prone to over-fitting. In this paper, we investigate the novel use of Bayesian regularisation at the second level of inference, adding a regularisation term to the model selection criterion corresponding to a prior over the hyper-parameter values, where the additional regularisation parameters are integrated out analytically. Results obtained on a suite of thirteen real-world and synthetic benchmark data sets clearly demonstrate the benefit of this approach.
Keywords: model selection, kernel methods, Bayesian regularisation
1. Introduction
Leave-one-out cross-validation (Lachenbruch and Mickey, 1968; Luntz and Brailovsky, 1969; Stone, 1974) provides the basis for computationally efficient model selection strategies for a variety of kernel learning methods, including the Support Vector Machine (SVM) (Cortes and Vapnik, 1995; Chapelle et al., 2002), Gaussian Process (GP) (Rasmussen and Williams, 2006; Sundararajan and Keerthi, 2001), Least-Squares Support Vector Machine (LS-SVM) (Suykens and Vandewalle, 1999; Cawley and Talbot, 2004), Kernel Fisher Discriminant (KFD) analysis (Mika et al., 1999; Cawley and Talbot, 2003; Saadi et al., 2004; Bo et al., 2006) and Kernel Logistic Regression (KLR) (Keerthi et al., 2005; Cawley and Talbot, 2007). These methods have proved highly successful for kernel machines having only a small number of hyper-parameters to optimise, as demonstrated by the set of models achieving the best average score in the WCCI-2006 performance prediction challenge\(^1\) (Cawley, 2006; Guyon et al., 2006). Unfortunately, while leave-one-out cross-validation estimators have been shown to be almost unbiased (Luntz and Brailovsky, 1969), they are known to exhibit a relatively high variance (e.g., Kohavi, 1995). A kernel with many hyper-parameters, for instance those used in Automatic Relevance Determination (ARD) (e.g., Rasmussen and Williams, 2006) or feature scaling methods (Chapelle et al., 2002; Bo et al., 2006), may provide sufficient
\(^1\) See http://www.modelselect.inf.ethz.ch/index.php.
degrees of freedom to over-fit leave-one-out cross-validation based model selection criteria, resulting in performance inferior to that obtained using a less flexible kernel function. In this paper, we investigate the novel use of regularisation (Tikhonov and Arsenin, 1977) of the hyper-parameters in model selection in order to ameliorate the effects of the high variance of leave-one-out cross-validation based selection criteria, and so improve predictive performance. The regularisation term corresponds to a zero-mean Gaussian prior over the values of the kernel parameters, representing a preference for smooth kernel functions, and hence a relatively simple classifier. The regularisation parameters introduced in this step are integrated out analytically in the style of Buntine and Weigend (1991), to provide a Bayesian model selection criterion that can be optimised in a straightforward manner via, for example, scaled conjugate gradient descent (Williams, 1991).
The paper is structured as follows: The remainder of this section provides a brief overview of the least-squares support vector machine, including the use of leave-one-out cross-validation based model selection procedures, given in sufficient detail to ensure the reproducibility of the results. Section 2 describes the use of Bayesian regularisation to prevent over-fitting at the second level of inference, that is, model selection. Section 3 presents results obtained over a suite of thirteen benchmark data sets, which demonstrate the utility of this approach. Section 4 provides discussion of the results and suggests directions for further research. Finally, the work is summarised and directions for further work are outlined in Section 5.
1.1 Least Squares Support Vector Machine
In the remainder of this section, we provide a brief overview of the least-squares support vector machine (Suykens and Vandewalle, 1999) used as the testbed for the investigation of the role of regularisation in the model selection process described in this study. Given training data,
\[ D = \{ (x_i, y_i) \}_{i=1}^{\ell}, \quad \text{where} \quad x_i \in X \subset \mathbb{R}^d \quad \text{and} \quad y_i \in \{-1,+1\}, \]
we seek to construct a linear discriminant, \( f(x) = \phi(x) \cdot w + b \), in a feature space, \( \mathcal{F} \), defined by a fixed transformation of the input space, \( \phi : X \rightarrow \mathcal{F} \). The parameters of the linear discriminant, \( (w, b) \), are given by the minimiser of a regularised (Tikhonov and Arsenin, 1977) least-squares training criterion,
\[
L = \frac{1}{2} \|w\|^2 + \frac{1}{2\mu} \sum_{i=1}^{\ell} \left| y_i - \phi(x_i) \cdot w - b \right|^2, \tag{1}
\]
where \( \mu \) is a regularisation parameter controlling the bias-variance trade-off (Geman et al., 1992). Rather than specify the feature space directly, it is instead induced by a kernel function, \( K : X \times X \rightarrow \mathbb{R} \), which evaluates the inner-product between the projections of the data onto the feature space, \( \mathcal{F} \), that is, \( K(x,x') = \phi(x) \cdot \phi(x') \). The interpretation of an inner-product in a fixed feature space is valid for any Mercer kernel (Mercer, 1909), for which the Gram matrix, \( K = [k_{ij} = K(x_i,x_j)]_{i,j=1}^{\ell} \) is positive semi-definite, that is,
\[
a^T Ka \geq 0, \quad \forall \ a \in \mathbb{R}^{\ell}, \quad a \neq 0.
\]
The Gram matrix effectively encodes the spatial relationships between the projections of the data in the feature space, \( \mathcal{F} \). A linear model can thus be implicitly constructed in the feature space using only information contained in the Gram matrix, without explicitly evaluating the positions of the data in the feature space via the transformation \( \phi(\cdot) \). Indeed, the representer theorem (Kimeldorf
and Wahba, 1971) shows that the solution of the optimisation problem (1) can be written as an expansion over the training patterns,
\[ w = \sum_{i=1}^{\ell} \alpha_i \phi(x_i) \quad \implies \quad f(x) = \sum_{i=1}^{\ell} \alpha_i K(x_i, x) + b. \]
The advantage of the “kernel trick” then becomes apparent; a linear model can be constructed in an extremely rich, high- (possibly infinite-) dimensional feature space, using only finite-dimensional quantities, such as the Gram matrix, \( K \). The “kernel trick” also allows the construction of statistical models that operate directly on structured data, for instance strings, trees and graphs, leading to the current interest in kernel learning methods in computational biology (Schölkopf et al., 2004) and text-processing (Joachims, 2002). The Radial Basis Function (RBF) kernel,
\[ K(x, x') = \exp\left\{ -\eta \|x - x'\|^2 \right\} \]
is commonly encountered in practical applications of kernel learning methods; here \( \eta \) is a kernel parameter, controlling the sensitivity of the kernel function. The feature space for the radial basis function kernel consists of the positive orthant of an infinite-dimensional unit hyper-sphere (e.g., Shawe-Taylor and Cristianini, 2004). The Gram matrix for the radial basis function kernel is thus of full rank (Micchelli, 1986), and so the kernel model is able to form an arbitrary shattering of the data.
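To make the kernel computation concrete, the following is a minimal NumPy sketch (ours, not the authors' code; the name `rbf_gram` is assumed) of the spherical RBF Gram matrix used throughout this paper:

```python
# Minimal sketch (not the authors' implementation): Gram matrix of the
# spherical RBF kernel K(x, x') = exp{-eta * ||x - x'||^2} for a data
# matrix X of shape (ell, d).
import numpy as np

def rbf_gram(X, eta):
    sq = np.sum(X ** 2, axis=1)                     # squared norms, shape (ell,)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
    return np.exp(-eta * np.maximum(d2, 0.0))       # clip round-off negatives
```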
### 1.1.1 A Dual Training Algorithm
The basic training algorithm for the least-squares support vector machine (Suykens and Vandewalle, 1999) views the regularised loss function (1) as a constrained minimisation problem:
\[ \min_{w,b,\varepsilon_i} \frac{1}{2} \|w\|^2 + \frac{1}{2\mu} \sum_{i=1}^{\ell} \varepsilon_i^2 \quad \text{subject to} \quad \varepsilon_i = y_i - w \cdot \phi(x_i) - b. \]
The primal Lagrangian for this constrained optimisation problem gives the unconstrained minimisation problem defined by the following regularised loss function,
\[ L = \frac{1}{2} \|w\|^2 + \frac{1}{2\mu} \sum_{i=1}^{\ell} \varepsilon_i^2 - \sum_{i=1}^{\ell} \alpha_i \left\{ w \cdot \phi(x_i) + b + \varepsilon_i - y_i \right\}, \]
where \( \alpha = (\alpha_1, \alpha_2, \ldots, \alpha_\ell) \in \mathbb{R}^\ell \) is a vector of Lagrange multipliers. The optimality conditions for this problem can be expressed as follows:
\[ \frac{\partial L}{\partial w} = 0 \implies w = \sum_{i=1}^{\ell} \alpha_i \phi(x_i), \tag{2} \]
\[ \frac{\partial L}{\partial b} = 0 \implies \sum_{i=1}^{\ell} \alpha_i = 0, \tag{3} \]
\[ \frac{\partial L}{\partial \varepsilon_i} = 0 \implies \alpha_i = \frac{\varepsilon_i}{\mu}, \quad \forall i \in \{1, 2, \ldots, \ell\}, \tag{4} \]
\[ \frac{\partial L}{\partial \alpha_i} = 0 \implies w \cdot \phi(x_i) + b + \varepsilon_i - y_i = 0, \quad \forall i \in \{1, 2, \ldots, \ell\}. \tag{5} \]
Using (2) and (4) to eliminate $w$ and $\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_\ell)$, from (5), we find that
$$\sum_{j=1}^{\ell} \alpha_j \phi(x_j) \cdot \phi(x_i) + b + \mu \alpha_i = y_i \quad \forall \ i \in \{1, 2, \ldots, \ell\}. \tag{6}$$
Noting that $K(x, x') = \phi(x) \cdot \phi(x')$, the system of linear equations, (6) and (3), can be written more concisely in matrix form as
$$\begin{bmatrix} K + \mu I & 1 \\ 1^T & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ b \end{bmatrix} = \begin{bmatrix} y \\ 0 \end{bmatrix},$$
where $K = [k_{ij} = K(x_i, x_j)]_{i,j=1}^{\ell}$, $I$ is the $\ell \times \ell$ identity matrix and $1$ is a column vector of $\ell$ ones. The optimal parameters for the model of the conditional mean can then be obtained with a computational complexity of $O(\ell^3)$ operations, using direct methods, such as Cholesky decomposition (Golub and Van Loan, 1996).
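As an illustration, a direct implementation of this training procedure might look as follows (a sketch under our own naming, solving the full $(\ell+1)\times(\ell+1)$ system rather than exploiting its structure):

```python
import numpy as np

def lssvm_train_naive(K, y, mu):
    """Solve [[K + mu*I, 1], [1^T, 0]] [alpha; b] = [y; 0] directly."""
    ell = K.shape[0]
    C = np.zeros((ell + 1, ell + 1))
    C[:ell, :ell] = K + mu * np.eye(ell)
    C[:ell, ell] = 1.0
    C[ell, :ell] = 1.0
    rhs = np.append(np.asarray(y, dtype=float), 0.0)
    sol = np.linalg.solve(C, rhs)    # O(ell^3) direct solve
    return sol[:ell], sol[ell]       # (alpha, b)
```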
### 1.1.2 Efficient Implementation Via Cholesky Decomposition
A more efficient training algorithm can be obtained, taking advantage of the special structure of the system of linear equations (Suykens et al., 2002). The system of linear equations to be solved in fitting a least-squares support vector machine is given by,
$$\begin{bmatrix} M & 1 \\ 1^T & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ b \end{bmatrix} = \begin{bmatrix} y \\ 0 \end{bmatrix}, \tag{7}$$
where $M = K + \mu I$. Unfortunately the matrix on the left-hand side is not positive definite, and so we cannot solve this system of linear equations directly using the Cholesky decomposition. However, the first row of (7) can be re-written as
$$M (\alpha + M^{-1} 1 b) = y. \tag{8}$$
Rearranging (8), we see that $\alpha = M^{-1} (y - 1 b)$, using this result to eliminate $\alpha$, the second row of (7) can be written as
$$1^T M^{-1} 1 b = 1^T M^{-1} y.$$
The system of linear equations can then be re-written as
$$\begin{bmatrix} M & 0 \\ 0^T & 1^T M^{-1} 1 \end{bmatrix} \begin{bmatrix} \alpha + M^{-1} 1 b \\ b \end{bmatrix} = \begin{bmatrix} y \\ 1^T M^{-1} y \end{bmatrix}. \tag{9}$$
In this case, the matrix on the left hand side is positive-definite, as $M = K + \mu I$ is positive-definite and $1^T M^{-1} 1$ is positive since the inverse of a positive definite matrix is also positive definite. The revised system of linear equations (9) can be solved as follows: First solve
$$M \rho = 1 \quad \text{and} \quad M v = y, \tag{10}$$
which may be performed efficiently using the Cholesky factorisation of $M$. The model parameters of the least-squares support vector machine are then given by
$$b = \frac{1^T v}{1^T \rho} \quad \text{and} \quad \alpha = v - \rho b.$$
The two systems of linear equations (10) can be solved efficiently using the Cholesky decomposition of $M = R^T R$, where $R$ is the upper triangular Cholesky factor of $M$.
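A sketch of this Cholesky-based procedure (again ours, with SciPy's `cho_factor`/`cho_solve` standing in for a hand-rolled factorisation):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lssvm_train_cholesky(K, y, mu):
    """Fit an LS-SVM via a single Cholesky factorisation of M = K + mu*I."""
    ell = K.shape[0]
    cf = cho_factor(K + mu * np.eye(ell), lower=True)  # factor M once
    rho = cho_solve(cf, np.ones(ell))                  # M rho = 1
    v = cho_solve(cf, np.asarray(y, dtype=float))      # M v = y
    b = np.sum(v) / np.sum(rho)                        # b = 1^T v / 1^T rho
    return v - rho * b, b                              # (alpha, b)
```

One factorisation serves both right-hand sides, so the cost is dominated by the single $O(\ell^3)$ decomposition.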
1.2 Leave-One-Out Cross-Validation
Cross-validation (Stone, 1974) is commonly used to obtain a reliable estimate of the test error for performance estimation or for use as a model selection criterion. The most common form, $k$-fold cross-validation, partitions the available data into $k$ disjoint subsets. In each iteration a classifier is trained on a different combination of $k - 1$ subsets and the unused subset is used to estimate the test error rate. The $k$-fold cross-validation estimate of the test error rate is then simply the average of the test error rate observed in each of the $k$ iterations, or folds. The most extreme form of cross-validation, where $k = \ell$ such that the test partition in each fold consists of only a single pattern, is known as leave-one-out cross-validation (Lachenbruch and Mickey, 1968) and has been shown to provide an almost unbiased estimate of the test error rate (Luntz and Brailovsky, 1969). Leave-one-out cross-validation is however computationally expensive, in the case of the least-squares support vector machine a naïve implementation having a complexity of $O(\ell^4)$ operations. Leave-one-out cross-validation is therefore normally only used in circumstances where the available data are extremely scarce such that the computational expense is no longer prohibitive. In this case the inherently high variance of the leave-one-out estimator (Kohavi, 1995) is offset by the minimal decrease in the size of the training set in each fold, and so may provide a more reliable estimate of generalisation performance than conventional $k$-fold cross-validation. Fortunately leave-one-out cross-validation of least-squares support vector machines can be performed in closed form with a computational complexity of only $O(\ell^3)$ operations (Cawley and Talbot, 2004). Leave-one-out cross-validation can then be used in medium to large scale applications, where there may be a few thousand data-points, although the relatively high variance of this estimator remains potentially problematic.
1.2.1 Virtual Leave-One-Out Cross-Validation
The optimal values of the parameters of a Least-Squares Support Vector Machine are given by the solution of a system of linear equations:
\[
\begin{bmatrix} K + \mu I & 1 \\ 1^T & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ b \end{bmatrix} = \begin{bmatrix} y \\ 0 \end{bmatrix}. \tag{11}
\]
The matrix on the left-hand side of (11) can be decomposed into block-matrix representation, as follows:
\[
\begin{bmatrix} K + \mu I & 1 \\ 1^T & 0 \end{bmatrix} = \begin{bmatrix} c_{11} & c_1^T \\ c_1 & C_1 \end{bmatrix} = C.
\]
Let $[\alpha^{(-i)}; b^{(-i)}]$ represent the parameters of the least-squares support vector machine during the $i^{th}$ iteration of the leave-one-out cross-validation procedure, then in the first iteration, in which the first training pattern is excluded,
\[
\begin{bmatrix} \alpha^{(-1)} \\ b^{(-1)} \end{bmatrix} = C_1^{-1} [y_2, \ldots, y_\ell, 0]^T.
\]
The leave-one-out prediction for the first training pattern is then given by
\[
\hat{y}_1^{(-1)} = c_1^T \begin{bmatrix} \alpha^{(-1)} \\ b^{(-1)} \end{bmatrix} = c_1^T C_1^{-1} [y_2, \ldots, y_\ell, 0]^T.
\]
Considering the last $\ell$ equations in the system of linear equations (11), it is clear that $[c_1 \ C_1] [\alpha_1, \ldots, \alpha_\ell, b]^T = [y_2, \ldots, y_\ell, 0]^T$, and so
$$\hat{y}_1^{(-1)} = c_1^T C_1^{-1} [c_1 \ C_1] [\alpha^T, b]^T = c_1^T C_1^{-1} c_1 \alpha_1 + c_1^T [\alpha_2, \ldots, \alpha_\ell, b]^T.$$
Noting, from the first equation in the system of linear equations (11), that $y_1 = c_{11} \alpha_1 + c_1^T [\alpha_2, \ldots, \alpha_\ell, b]^T$, we obtain
$$\hat{y}_1^{(-1)} = y_1 - \alpha_1 \left( c_{11} - c_1^T C_1^{-1} c_1 \right).$$
Finally, via the block matrix inversion lemma,
$$\begin{bmatrix} c_{11} & c_1^T \\ c_1 & C_1 \end{bmatrix}^{-1} = \begin{bmatrix} \kappa^{-1} & -\kappa^{-1} c_1^T C_1^{-1} \\ -\kappa^{-1} C_1^{-1} c_1 & C_1^{-1} + \kappa^{-1} C_1^{-1} c_1 c_1^T C_1^{-1} \end{bmatrix},$$
where $\kappa = c_{11} - c_1^T C_1^{-1} c_1$, and noting that the system of linear equations (11) is insensitive to permutations of the ordering of the equations and of the unknowns, we have that,
$$y_i - \hat{y}_i^{(-i)} = \frac{\alpha_i}{C_{ii}^{-1}}, \tag{12}$$
where $C_{ii}^{-1}$ denotes the $i^{th}$ diagonal element of $C^{-1}$.
This means that, assuming the system of linear equations (11) is solved via explicit inversion of $C$, a leave-one-out cross-validation estimate of an appropriate model selection criterion can be evaluated using information already available as a by-product of training the least-squares support vector machine on the entire data set (cf., Sundararajan and Keerthi, 2001).
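The following sketch (ours) evaluates (12) directly from the explicit inverse of $C$; each returned residual should match the one obtained by actually retraining with pattern $i$ removed:

```python
import numpy as np

def loo_residuals_naive(K, y, mu):
    """Virtual LOO residuals r_i = y_i - yhat_i^(-i) = alpha_i / [C^{-1}]_{ii} (Eq. 12)."""
    ell = K.shape[0]
    C = np.zeros((ell + 1, ell + 1))
    C[:ell, :ell] = K + mu * np.eye(ell)
    C[:ell, ell] = C[ell, :ell] = 1.0
    Cinv = np.linalg.inv(C)                                  # explicit inverse, O(ell^3)
    alpha = (Cinv @ np.append(np.asarray(y, float), 0.0))[:ell]
    return alpha / np.diag(Cinv)[:ell]                       # one residual per pattern
```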
### 1.2.2 Efficient Implementation via Cholesky Factorisation
The leave-one-out cross-validation behaviour of the least-squares support vector machine is described by (12). The coefficients of the kernel expansion, $\alpha$, can be found efficiently, via Cholesky factorisation, as described in Section 1.1.2. However we must also determine the diagonal elements of $C^{-1}$ in an efficient manner. Using the block matrix inversion formula, we obtain
$$C^{-1} = \begin{bmatrix} M & 1 \\ 1^T & 0 \end{bmatrix}^{-1} = \begin{bmatrix} M^{-1} + M^{-1} 1 S_M^{-1} 1^T M^{-1} & -M^{-1} 1 S_M^{-1} \\ -S_M^{-1} 1^T M^{-1} & S_M^{-1} \end{bmatrix}$$
where $M = K + \mu I$ and $S_M = -1^T M^{-1} 1 = -1^T \rho$ is the Schur complement of $M$. The inverse of the positive definite matrix, $M$, can be computed efficiently from its Cholesky factorisation, via the `SYMINV` algorithm (Seaks, 1972), for example using the LAPACK routine `DTRTRI` (Anderson et al., 1999). Let $R = [r_{ij}]_{i,j=1}^\ell$ be the lower triangular Cholesky factor of the positive definite matrix $M$, such that $M = RR^T$. Furthermore, let
$$S = [s_{ij}]_{i,j=1}^\ell = R^{-1}, \quad \text{where} \quad s_{ii} = \frac{1}{r_{ii}} \quad \text{and} \quad s_{ij} = -s_{ii} \sum_{k=1}^{i-1} r_{ik} s_{kj},$$
represent the (lower triangular) inverse of the Cholesky factor. The inverse of $M$ is then given by $M^{-1} = S^T S$. In the case of efficient leave-one-out cross-validation of least-squares support vector machines, we are principally concerned only with the diagonal elements of $M^{-1}$, given by
$$M_{ii}^{-1} = \sum_{j=i}^{\ell} s_{ji}^2 \quad \implies \quad C_{ii}^{-1} = \sum_{j=i}^{\ell} s_{ji}^2 + \frac{\rho_i^2}{S_M} \quad \forall \ i \in \{1, 2, \ldots, \ell\}. $$
The computational complexity of the basic training algorithm is $O(\ell^3)$ operations, being dominated by the evaluation of the Cholesky factor. However, the computational complexity of the analytic leave-one-out cross-validation procedure, when performed as a by-product of the training algorithm, is only $O(\ell)$ operations. The computational expense of the leave-one-out cross-validation procedure therefore becomes increasingly negligible as the training set becomes larger.
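Putting the pieces of this section together, here is a sketch (ours) of the PRESS statistic computed from a single Cholesky factorisation:

```python
import numpy as np
from scipy.linalg import cho_solve, solve_triangular

def press_cholesky(K, y, mu):
    """PRESS via one Cholesky factorisation of M = K + mu*I (Secs. 1.1.2, 1.2.2)."""
    ell = K.shape[0]
    R = np.linalg.cholesky(K + mu * np.eye(ell))          # lower triangular, M = R R^T
    rho = cho_solve((R, True), np.ones(ell))              # M rho = 1
    v = cho_solve((R, True), np.asarray(y, dtype=float))  # M v = y
    b = np.sum(v) / np.sum(rho)
    alpha = v - rho * b
    S = solve_triangular(R, np.eye(ell), lower=True)      # S = R^{-1}
    diag_Minv = np.sum(S ** 2, axis=0)                    # [M^{-1}]_{ii}, since M^{-1} = S^T S
    diag_Cinv = diag_Minv + rho ** 2 / (-np.sum(rho))     # add Schur-complement term
    r = alpha / diag_Cinv                                 # LOO residuals (Eq. 12)
    return 0.5 * np.sum(r ** 2)
```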
1.3 Model Selection
The virtual leave-one-out cross-validation procedure described in the previous section provides the basis for a simple automated model selection strategy for the least-squares support vector machine. Perhaps the most basic model selection criterion is provided by the Predicted RESidual Sum of Squares (PRESS) criterion (Allen, 1974), which is simply the leave-one-out estimate of the sum-of-squares error,
$$Q(\theta) = \frac{1}{2} \sum_{i=1}^{\ell} \left[ y_i - \hat{y}_i^{(-i)} \right]^2.$$
A minimum of the model selection criterion is often found via a simple grid-search procedure in the majority of practical applications of kernel learning methods. However, this is rarely necessary and often highly inefficient as a grid-search spends a large amount of time investigating hyper-parameter values outside the neighbourhood of the global optimum. A more efficient approach uses the Nelder-Mead simplex algorithm (Nelder and Mead, 1965), as implemented by the `fminsearch` function of the MATLAB optimisation toolbox. An alternative easily implemented approach uses conjugate gradient methods, with the required gradient information estimated by the method of finite differences, and implemented by the `fminunc` function from the MATLAB optimisation toolbox. In this study however, we use scaled conjugate gradient descent (Williams, 1991), with the required gradient information evaluated analytically, as this is approximately twice as efficient.
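A sketch (ours) of this selection loop, reusing `rbf_gram` and `press_cholesky` from the earlier sketches and optimising over $\log_2$-transformed hyper-parameters with SciPy's Nelder-Mead implementation:

```python
import numpy as np
from scipy.optimize import minimize

def select_hyperparams(X, y, theta0=(0.0, 0.0)):
    """Minimise PRESS over (log2 mu, log2 eta); the log2 parameterisation
    keeps both hyper-parameters strictly positive (see Section 1.3.1)."""
    def objective(t):
        mu, eta = 2.0 ** t[0], 2.0 ** t[1]
        return press_cholesky(rbf_gram(X, eta), y, mu)
    res = minimize(objective, np.asarray(theta0, dtype=float),
                   method='Nelder-Mead')
    return 2.0 ** res.x[0], 2.0 ** res.x[1]   # (mu, eta)
```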
1.3.1 Partial Derivatives of the PRESS Model Selection Criterion
Let $\theta = \{\theta_1, \ldots, \theta_n\} = \{\mu, \eta_1, \ldots, \eta_d\}$ represent the vector of hyper-parameters for a least-squares support vector machine, where $\eta_1, \ldots, \eta_d$ represent the kernel parameters. The PRESS statistic (Allen, 1974) can be written as
$$Q(\theta) = \frac{1}{2} \sum_{i=1}^{\ell} \left[ r_i^{(-i)} \right]^2,$$
where
$$r_i^{(-i)} = y_i - \hat{y}_i^{(-i)} = \frac{\alpha_i}{C_{ii}^{-1}}.$$
Using the chain rule, the partial derivative of the PRESS statistic, with respect to an individual hyper-parameter, $\theta_j$, is given by,
$$\frac{\partial Q(\theta)}{\partial \theta_j} = \sum_{i=1}^{\ell} \frac{\partial Q(\theta)}{\partial r_i^{(-i)}} \frac{\partial r_i^{(-i)}}{\partial \theta_j},$$
where
$$\frac{\partial Q(\theta)}{\partial r_i^{(-i)}} = r_i^{(-i)} = \frac{\alpha_i}{C_{ii}^{-1}} \quad \text{and} \quad \frac{\partial r_i^{(-i)}}{\partial \theta_j} = \frac{\partial \alpha_i}{\partial \theta_j} \frac{1}{C_{ii}^{-1}} - \frac{\alpha_i}{[C_{ii}^{-1}]^2} \frac{\partial C_{ii}^{-1}}{\partial \theta_j},$$
such that
$$\frac{\partial Q(\theta)}{\partial \theta_j} = \sum_{i=1}^{\ell} \frac{\alpha_i}{C_{ii}^{-1}} \left\{ \frac{\partial \alpha_i}{\partial \theta_j} \frac{1}{C_{ii}^{-1}} - \frac{\alpha_i}{[C_{ii}^{-1}]^2} \frac{\partial C_{ii}^{-1}}{\partial \theta_j} \right\}.$$
We begin by deriving the partial derivatives of the model parameters, $[\alpha^T \ b]^T$, with respect to the hyper-parameter $\theta_j$. The model parameters are given by the solution of a system of linear equations, such that
$$[\alpha^T \ b]^T = C^{-1} [y^T \ 0]^T.$$
Using the following identity for the partial derivatives of the inverse of a matrix,
$$\frac{\partial C^{-1}}{\partial \theta_j} = -C^{-1} \frac{\partial C}{\partial \theta_j} C^{-1}, \tag{13}$$
we obtain,
$$\frac{\partial [\alpha^T \ b]^T}{\partial \theta_j} = -C^{-1} \frac{\partial C}{\partial \theta_j} C^{-1} [y^T \ 0]^T = -C^{-1} \frac{\partial C}{\partial \theta_j} [\alpha^T \ b]^T.$$
Note the computational complexity of evaluating the partial derivatives of the model parameters is $O(\ell^2)$, as only two successive matrix-vector products are required. The partial derivatives of the diagonal elements of $C^{-1}$ can be found using the inverse matrix derivative identity (13). For a kernel parameter, $\partial C / \partial \eta_j$ will generally be fully dense, and so the computational complexity of evaluating the diagonal elements of $\partial C^{-1} / \partial \eta_j$ will be $O(\ell^3)$ operations. If, on the other hand, we consider the regularisation parameter, $\mu$, we have that
$$\frac{\partial C}{\partial \mu} = \begin{bmatrix} I & 0 \\ 0^T & 0 \end{bmatrix},$$
and so the computation of the partial derivatives of the model parameters, with respect to the regularisation parameter, is slightly simplified,
$$\frac{\partial [\alpha^T \ b]^T}{\partial \mu} = -C^{-1} [\alpha^T \ b]^T.$$
More importantly, as $\partial C / \partial \mu$ is diagonal, the diagonal elements of (13) can be evaluated with a computational complexity of only $O(\ell^2)$ operations. This suggests that it may be more efficient to adopt different strategies for optimising the regularisation parameter, $\mu$, and the vector of kernel parameters, $\eta$, (cf., Saadi et al., 2004). For a kernel parameter, $\eta_j$, the partial derivatives of $C$ with respect to $\eta_j$ are given by the partial derivatives of the kernel matrix, that is,
$$\frac{\partial C}{\partial \eta_j} = \begin{bmatrix} \partial K / \partial \eta_j & 0 \\ 0^T & 0 \end{bmatrix}.$$
For the spherical radial basis function kernel, used in this study, the partial derivative with respect to the kernel parameter is given by
$$\frac{\partial K(x, x')}{\partial \eta} = -K(x, x') \|x - x'\|^2.$$
Finally, since the regularisation parameter, $\mu$, and the scale parameter of the radial basis function kernel are strictly positive quantities, in order to permit the use of an unconstrained optimisation procedure, we adopt the parameterisation $\tilde{\theta}_j = \log_2 \theta_j$, such that
$$\frac{\partial Q(\theta)}{\partial \tilde{\theta}_j} = \frac{\partial Q(\theta)}{\partial \theta_j} \frac{\partial \theta_j}{\partial \tilde{\theta}_j} \quad \text{where} \quad \frac{\partial \theta_j}{\partial \tilde{\theta}_j} = \theta_j \log 2.$$
1.3.2 Automatic Relevance Determination
Automatic Relevance Determination (ARD) (e.g., Rasmussen and Williams, 2006), also known as feature scaling (Chapelle et al., 2002; Bo et al., 2006), aims to identify informative input features as a natural consequence of optimising the model selection criterion. This can be most easily achieved using an *elliptical* radial basis function kernel,
$$K(x, x') = \exp\left\{-\sum_{i=1}^{d} \eta_i [x_i - x'_i]^2\right\},$$
that incorporates individual scaling factors for each input dimension. The partial derivatives with respect to the kernel parameters are then given by,
$$\frac{\partial K(x, x')}{\partial \eta_i} = -K(x, x') [x_i - x'_i]^2.$$
Generalisation performance is likely to be enhanced if irrelevant features are down-weighted. It is therefore hoped that minimising the model selection criterion will lead to very small values for the scaling factors associated with redundant input features, allowing them to be identified and pruned from the model.
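A sketch (ours) of the elliptical kernel and its parameter gradient; absorbing $\sqrt{\eta_i}$ into the coordinates reduces the Gram computation to the spherical case:

```python
import numpy as np

def ard_rbf_gram(X, eta):
    """Elliptical RBF Gram matrix; eta holds one scale factor per feature."""
    Xs = X * np.sqrt(eta)                            # scaled coordinates
    sq = np.sum(Xs ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Xs @ Xs.T
    return np.exp(-np.maximum(d2, 0.0))

def ard_rbf_grad(K, X, i):
    """dK/d(eta_i) = -K * (x_i - x'_i)^2, element-wise over all pattern pairs."""
    diff2 = (X[:, i][:, None] - X[None, :, i]) ** 2
    return -K * diff2
```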
2. Bayesian Regularisation in Model Selection
In order to overcome the observed over-fitting in model selection using leave-one-out cross-validation based methods, we propose to add a regularisation term (Tikhonov and Arsenin, 1977) to the model selection criterion, which penalises solutions where the kernel parameters take on unduly large values. The regularised model selection criterion is then given by
$$M(\theta) = \zeta Q(\theta) + \xi \Omega(\theta), \quad (14)$$
where $\xi$ and $\zeta$ are additional regularisation parameters, $Q(\theta)$ is the model selection criterion, in this case the PRESS statistic and $\Omega(\theta)$ is a regularisation term,
$$Q(\theta) = \frac{1}{2} \sum_{i=1}^{\ell} \left[y_i - \hat{y}_i^{(-i)}\right]^2 \quad \text{and} \quad \Omega(\theta) = \frac{1}{2} \sum_{i=1}^{d} \eta_i^2.$$
In this study we have left the regularisation parameter, $\mu$, unregularised. However, we have now introduced two further regularisation parameters $\xi$ and $\zeta$ for which good values must also be found. This problem may be solved by taking a Bayesian approach and adopting an ignorance prior and integrating out the additional regularisation parameters analytically in the style of Buntine and Weigend (1991). Adapting the approach taken by Williams (1995), the regularised model selection criterion (14) can be interpreted as the posterior density in the space of the hyper-parameters,
$$P(\theta | D) \propto P(D | \theta) P(\theta),$$
by taking the negative logarithm and neglecting additive constants. Here $P(D | \theta)$ represents the likelihood with respect to the hyper-parameters and $P(\theta)$ represents our prior beliefs regarding the
hyper-parameters, in this case that they should have a small magnitude, corresponding to a relatively simple model. These quantities can be expressed as
\[ P(\mathcal{D}|\theta) = Z_Q^{-1} \exp \left\{ -\zeta Q(\theta) \right\} \quad \text{and} \quad P(\theta) = Z_\Omega^{-1} \exp \left\{ -\xi \Omega(\theta) \right\} \]
where \(Z_Q\) and \(Z_\Omega\) are the appropriate normalising constants. Assuming the data represent an i.i.d. sample, the likelihood in this case is Gaussian,
\[ P(\mathcal{D}|\theta) = \prod_{i=1}^{\ell} \frac{1}{\sqrt{2\pi}\sigma} \exp \left\{ -\frac{\left[y_i - \hat{y}_i^{(-i)}\right]^2}{2\sigma^2} \right\} \quad \text{where} \quad \zeta = \frac{1}{\sigma^2} \implies Z_Q = \left( \frac{2\pi}{\zeta} \right)^{\ell/2}. \]
Likewise, the prior is a Gaussian, centred on the origin,
\[ P(\theta) = \prod_{i=1}^{d} \frac{1}{\sqrt{2\pi/\xi}} \exp \left\{ -\frac{\xi}{2} \eta_i^2 \right\} \quad \text{such that} \quad Z_\Omega = \left( \frac{2\pi}{\xi} \right)^{d/2}. \]
Minimising (14) is thus equivalent to maximising the posterior density with respect to the hyper-parameters. Note that the use of a prior over the hyper-parameters is in accordance with normal Bayesian practice and has been investigated in the case of Gaussian Process classifiers by Williams and Barber (1998). The combination of frequentist and Bayesian approaches at the first and second levels of inference is however somewhat unusual. The marginal likelihood is dependent on the assumptions of the model, which may not be completely appropriate. Cross-validation based procedures may therefore be more robust in the case of model mis-specification (Wahba, 1990). It seems reasonable for the model to be less sensitive to assumptions at the second level of inference than the first, and so the proposed approach represents a pragmatic combination of techniques.
### 2.1 Elimination of Second Level Regularisation Parameters \( \xi \) and \( \zeta \)
Under the *evidence framework* proposed by MacKay (1992a,b,c) the hyper-parameters \( \xi \) and \( \zeta \) are determined by maximising the marginal likelihood, also known as the Bayesian *evidence* for the model. In this study, however, we opt to integrate out the hyper-parameters analytically, extending the work of Buntine and Weigend (1991) and Williams (1995) to consider Bayesian regularisation at the second level of inference, namely the selection of good values for the hyper-parameters. We begin with the prior over the hyper-parameters, which depends on \( \xi \),
\[ P(\theta|\xi) = Z_\Omega(\xi)^{-1} \exp \left\{ -\xi \Omega \right\}. \]
The regularisation parameter \( \xi \) may then be integrated out analytically using a suitable prior, \(P(\xi)\),
\[ P(\theta) = \int P(\theta|\xi)P(\xi)d\xi. \]
The improper Jeffreys’ prior, \(P(\xi) \propto 1/\xi\) is an appropriate ignorance prior in this case as \( \xi \) is a scale parameter, noting that \( \xi \) is strictly positive,
\[ p(\theta) = \frac{1}{(2\pi)^{d/2}} \int_0^\infty \xi^{d/2-1} \exp \left\{ -\xi \Omega \right\} d\xi. \]
Using the Gamma integral \( \int_0^\infty x^{\nu-1} e^{-\mu x} dx = \Gamma(\nu)/\mu^\nu \) (Gradshteyn and Ryzhik, 1994, equation 3.384), we obtain
\[
p(\theta) = \frac{1}{(2\pi)^{d/2}} \frac{\Gamma(d/2)}{\Omega^{d/2}} \implies -\log p(\theta) \propto \frac{d}{2} \log \Omega.
\]
Finally, adopting a similar procedure to eliminate \( \zeta \), we obtain a revised model selection criterion with Bayesian regularisation,
\[
L = \frac{\ell}{2} \log Q(\theta) + \frac{d}{2} \log \Omega(\theta), \tag{15}
\]
in which the regularisation parameters have been eliminated. As before, this criterion can be optimised via standard methods, such as the Nelder-Mead simplex algorithm (Nelder and Mead, 1965) or scaled conjugate gradient descent (Williams, 1991). The partial derivatives of the proposed Bayesian model selection criterion are given by
\[
\frac{\partial L}{\partial \theta_i} = \frac{\ell}{2Q(\theta)} \frac{\partial Q(\theta)}{\partial \theta_i} + \frac{d}{2\Omega(\theta)} \frac{\partial \Omega(\theta)}{\partial \theta_i} \quad \text{and} \quad \frac{\partial \Omega(\theta)}{\partial \eta_i} = \eta_i.
\]
The additional computational expense involved in Bayesian regularisation of the model selection criterion is only \( O(d) \) operations, and is extremely small in comparison with the \( O(\ell^3) \) operations involved in obtaining the leave-one-out error (including the cost of training the model on the entire data set). Per iteration of the model selection process, the cost of the Bayesian regularisation is therefore minimal. There seems little reason to suppose that the regularisation will have an adverse effect on convergence, and this seems to be the case in practice.
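A sketch (ours) of criterion (15) and its gradient, taking the PRESS value $Q$, its gradient `dQ` with respect to $(\mu, \eta_1, \ldots, \eta_d)$, and the current kernel parameters as inputs; the leading zero in `dOmega` reflects that $\mu$ is left unregularised (Section 2):

```python
import numpy as np

def bayes_criterion(ell, Q, dQ, eta):
    """Evaluate L = (ell/2) log Q + (d/2) log Omega and its gradient (Eq. 15)."""
    d = eta.shape[0]
    Omega = 0.5 * np.sum(eta ** 2)
    L = 0.5 * ell * np.log(Q) + 0.5 * d * np.log(Omega)
    dOmega = np.concatenate(([0.0], eta))   # dOmega/dmu = 0, dOmega/deta_i = eta_i
    dL = (ell / (2.0 * Q)) * dQ + (d / (2.0 * Omega)) * dOmega
    return L, dL
```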
### 2.2 Relationship with the Evidence Framework
Under the evidence framework of MacKay (1992a,b,c) the regularisation parameters, \( \xi \) and \( \zeta \), are selected so as to maximise the marginal likelihood, also known as the Bayesian evidence, for the model. The log-evidence is given by
\[
\log P(D) = -\xi \Omega(\theta) - \zeta Q(\theta) - \frac{1}{2} \log |A| + \frac{d}{2} \log \xi + \frac{\ell}{2} \log \zeta - \frac{\ell}{2} \log \{2\pi\},
\]
where \( A \) is the Hessian of the regularised model selection criterion (14) with respect to the hyper-parameters, \( \theta \). Setting the partial derivatives of the log evidence with respect to the regularisation parameters, \( \xi \) and \( \zeta \), equal to zero, we obtain the familiar update formulae,
\[
\xi_{\text{new}} = \frac{\gamma}{2\Omega(\theta)} \quad \text{and} \quad \zeta_{\text{new}} = \frac{\ell - \gamma}{2Q(\theta)},
\]
where \( \gamma \) is the number of well defined hyper-parameters, that is, the hyper-parameters for which the optimal value is primarily determined by the log-likelihood term, \( Q(\theta) \) rather than by the regulariser, \( \Omega(\theta) \). In the case of the L2 regularisation term, corresponding to a Gaussian prior, the number of well determined hyper-parameters is given by
\[
\gamma = \sum_{j=1}^n \frac{\lambda_j}{\lambda_j + \xi}
\]
where, \( \lambda_1, \ldots, \lambda_n \) represent the eigenvalues of the Hessian of the unregularised model selection criterion, \( Q(\theta) \) with respect to the kernel parameters. Comparing the partial derivatives of the regularised model selection criterion (14) with those of the Bayesian criterion (15), reveals that the
Bayesian regularisation scheme is equivalent to optimising the regularised model selection criterion (14) assuming that the regularisation parameters, $\xi$ and $\zeta$, are continuously updated according to the following update rules,
$$\xi^{\text{eff}} = \frac{d}{2\Omega(\theta)} \quad \text{and} \quad \zeta^{\text{eff}} = \frac{\ell}{2Q(\theta)}.$$
This exactly corresponds to the “cheap and cheerful” approximation of the evidence framework suggested by MacKay (1994), which assumes that all of the hyper-parameters are well-determined and that the number of hyper-parameters is small in relation to the number of training patterns. Since $\gamma \leq d$, it seems self-evident that the proposed Bayesian regularisation scheme will be prone to a degree of under-fitting, especially in the case of a feature scaling kernel with many redundant features. The theoretical and practical pros and cons of the integrate-out approach and the evidence framework are discussed in some detail by MacKay (1994) and Bishop (1995) and references therein. However, the integrate-out approach does not require the evaluation of the Hessian matrix of the original selection criterion, $Q(\theta)$, which is likely to prove computationally prohibitive.
3. Results
In this section, we present experimental results demonstrating the benefits of the proposed model selection strategy incorporating Bayesian regularisation to overcome the inherent high variance of leave-one-out cross-validation based selection criteria. Table 2 shows a comparison of the error rates of least-squares support vector machines, using model selection procedures with, and without, Bayesian regularisation, (LS-SVM and LS-SVM-BR respectively) over the suite of thirteen public domain benchmark data sets used in the study by Mika et al. (2000). Results obtained using a Gaussian process classifier (Rasmussen and Williams, 2006), based on the expectation propagation method, are also provided for comparison (EP-GPC). The same set of 100 random partitions of the data (20 in the case of the larger image and splice benchmarks) to form training and test sets used in that study are also used here. In each case, model selection is performed independently for each realisation of the data set, such that the standard errors reflect the variability of both the training algorithm and the model selection procedure with changes in the sampling of the data. Both conventional spherical and elliptical radial basis kernels are used for all kernel learning methods, so that the effectiveness of each algorithm for automatic relevance determination can be assessed. The use of multiple training/test partitions allows an estimate of the statistical significance of differences in performance between algorithms to be computed. Let $\hat{x}$ and $\hat{y}$ represent the means of the performance statistic for a pair of competing algorithms, and $e_x$ and $e_y$ the corresponding standard errors, then the $z$ statistic is computed as
$$z = \frac{\hat{y} - \hat{x}}{\sqrt{e_x^2 + e_y^2}}.$$
The $z$-score can then be converted to a significance level via the normal cumulative distribution function, such that $z = 1.64$ corresponds to a 95% significance level. All statements of statistical significance in the remainder of this section refer to a 95% level of significance.
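For completeness, the comparison reduces to a couple of lines (a sketch, ours):

```python
import numpy as np
from scipy.stats import norm

def z_significance(mean_x, se_x, mean_y, se_y):
    """z statistic for two mean error rates and its normal-CDF significance."""
    z = (mean_y - mean_x) / np.sqrt(se_x ** 2 + se_y ** 2)
    return z, norm.cdf(z)   # z = 1.64 corresponds to the 95% level
```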
3.1 Performance of Models Based on the Spherical RBF Kernel
The results shown in the first three data columns of Table 2 show the performance of LS-SVM, LS-SVM-BR and EP-GPC models based on the spherical Gaussian kernel. The performance of
| Data Set | Training Patterns | Testing Patterns | Number of Replications | Input Features |
|-------------|-------------------|------------------|------------------------|----------------|
| Banana | 400 | 4900 | 100 | 2 |
| Breast cancer | 200 | 77 | 100 | 9 |
| Diabetis | 468 | 300 | 100 | 8 |
| Flare solar | 666 | 400 | 100 | 9 |
| German | 700 | 300 | 100 | 20 |
| Heart | 170 | 100 | 100 | 13 |
| Image | 1300 | 1010 | 20 | 18 |
| Ringnorm | 400 | 7000 | 100 | 20 |
| Splice | 1000 | 2175 | 20 | 60 |
| Thyroid | 140 | 75 | 100 | 5 |
| Titanic | 150 | 2051 | 100 | 3 |
| Twonorm | 400 | 7000 | 100 | 20 |
| Waveform | 400 | 4600 | 100 | 21 |
Table 1: Details of data sets used in empirical comparison.
LS-SVM models with and without Bayesian regularisation are very similar, with neither model proving significantly better than the other on any of the data sets. This seems reasonable given that only two hyper-parameters are optimised during model selection, so there is little scope for over-fitting the PRESS model selection criterion and the regularisation term has little effect. The LS-SVM model with Bayesian regularisation is significantly out-performed by the Gaussian Process classifier on one benchmark (banana), but performs significantly better on a further four (ringnorm, splice, twonorm, waveform). Demšar (2006) recommends the use of the Wilcoxon signed rank test for assessing the statistical significance of differences in performance over multiple data sets. According to this test, neither the LSSVM-BR nor the EP-GPC is statistically superior at the 95% level of significance.
### 3.2 Performance of Models Based on the Elliptical RBF Kernel
The performances of LS-SVM, LS-SVM-BR and EP-GPC models based on the elliptical Gaussian kernel, which includes a separate scale parameter for each input feature, are shown in the last three columns of Table 2. Before evaluating the effects of Bayesian regularisation in model selection it is worth noting that the use of elliptical RBF kernels does not generally improve performance. For the LS-SVM, the elliptical kernel produces significantly better results on only two benchmarks (image and splice) and significantly worse results on a further eight (banana, breast cancer, diabetis, german, heart, ringnorm, twonorm, waveform), with the degradation in performance being very large in some cases (e.g., heart). This seems likely to be a result of the additional degrees of freedom involved in the model selection process, allowing over-fitting of the PRESS model selection criterion as a result of its inherently high variance. Note that fully Bayesian approaches, such as the Gaussian Process Classifier, are also unable to reliably select kernel parameters for the elliptical RBF kernel. The elliptical kernel is significantly better on only three benchmarks (flare solar,
| Data Set | LSSVM (RBF) | LSSVM-BR (RBF) | EP-GPC (RBF) | LSSVM (ARD) | LSSVM-BR (ARD) | EP-GPC (ARD) |
|----------|-------------|----------------|--------------|-------------|----------------|--------------|
| Banana | 10.60 ± 0.052 | 10.59 ± 0.050 | **10.41 ± 0.046** | 10.79 ± 0.072 | 10.73 ± 0.070 | **10.46 ± 0.049** |
| Breast cancer | 26.73 ± 0.466 | 27.08 ± 0.494 | **26.52 ± 0.489** | 29.08 ± 0.415 | 27.81 ± 0.432 | 27.97 ± 0.493 |
| Diabetis | 23.34 ± 0.166 | **23.14 ± 0.166** | 23.28 ± 0.182 | 24.35 ± 0.194 | 23.42 ± 0.177 | 23.86 ± 0.193 |
| Flare solar | 34.22 ± 0.169 | 34.07 ± 0.171 | 34.20 ± 0.175 | 34.39 ± 0.194 | **33.61 ± 0.151** | **33.58 ± 0.182** |
| German | 23.55 ± 0.216 | 23.59 ± 0.216 | **23.36 ± 0.211** | 26.10 ± 0.261 | 23.88 ± 0.217 | 23.77 ± 0.221 |
| Heart | **16.64 ± 0.358** | **16.19 ± 0.348** | 16.65 ± 0.287 | 23.65 ± 0.355 | 17.68 ± 0.623 | 19.68 ± 0.366 |
| Image | 3.00 ± 0.158 | 2.90 ± 0.154 | 2.80 ± 0.123 | **1.96 ± 0.115** | **2.00 ± 0.113** | 2.16 ± 0.068 |
| Ringnorm | **1.61 ± 0.015** | **1.61 ± 0.015** | **4.41 ± 0.064** | 2.11 ± 0.040 | 1.98 ± 0.026 | 8.58 ± 0.096 |
| Splice | 10.97 ± 0.158 | 10.91 ± 0.154 | 11.61 ± 0.181 | **5.86 ± 0.179** | **5.14 ± 0.145** | 7.07 ± 0.765 |
| Thyroid | 4.68 ± 0.232 | 4.63 ± 0.218 | **4.36 ± 0.217** | 4.68 ± 0.199 | 4.71 ± 0.214 | **4.24 ± 0.218** |
| Titanic | **22.47 ± 0.085** | 22.59 ± 0.120 | 22.64 ± 0.134 | **22.58 ± 0.108** | 22.86 ± 0.199 | 22.73 ± 0.134 |
| Twonorm | **2.84 ± 0.021** | **2.84 ± 0.021** | **3.06 ± 0.034** | 5.18 ± 0.072 | 4.53 ± 0.077 | 4.02 ± 0.068 |
| Waveform | 9.79 ± 0.045 | **9.78 ± 0.044** | 10.10 ± 0.047 | 13.56 ± 0.141 | 11.48 ± 0.177 | 11.34 ± 0.195 |
Table 2: Error rates of least-squares support vector machine, with and without Bayesian regularisation of the model selection criterion, in this case the PRESS statistic (Allen, 1974), and Gaussian process classifiers over thirteen benchmark data sets (Rätsch et al., 2001), using both standard radial basis function and automatic relevance determination kernels. The results for the EP-GPC were obtained using the MATLAB software accompanying the book by Rasmussen and Williams (2006). The results for each method are presented in the form of the mean error rate over test data for 100 realisations of each data set (20 in the case of the image and splice data sets), along with the associated standard error. The best results are shown in boldface and the second best in italics (without implication of statistical significance).
image and splice), while being significantly worse on six (breast cancer, diabetis, heart, ringnorm, twonorm and waveform).
In the case of the elliptical RBF kernel, the use of Bayesian regularisation in model selection has a dramatic effect on the performance of LS-SVM models, with the LS-SVM-BR model proving significantly better than the conventional LS-SVM on nine of the thirteen benchmarks (breast cancer, diabetis, flare solar, german, heart, ringnorm, splice, twonorm and waveform) without being significantly worse on any of the remaining four data sets. This demonstrates that over-fitting in model selection, due to the larger number of kernel parameters, is likely to be the significant factor causing the relatively poor performance of models with the elliptical RBF kernel. Again, the Gaussian Process classifier is significantly better than the LS-SVM with Bayesian regularisation on the banana and twonorm data sets, but is significantly worse on four of the remaining eleven (diabetis, heart, ringnorm and splice). Again, according to the Wilcoxon signed rank test, neither the LSSVM-BR nor the EP-GPC is statistically superior at the 95% level of significance. However the magnitude of the difference in performance between LSSVM-BR and EP-GPC approaches tends to be greatest when the LSSVM-BR out-performs EP-GPC, most notably on the heart, splice and ringnorm data sets. This provides some support for the observation of Wahba (1990) that cross-validation based model selection procedures should be more robust against model mis-specification (see also Rasmussen and Williams, 2006).
4. Discussion
The experimental evaluation presented in the previous section demonstrates that over-fitting can occur in model selection, due to the variance of the model selection criterion. In many cases the minimum of the selection criterion using the elliptical RBF kernel is lower than that achievable using the spherical RBF kernel, however this results in a degradation in generalisation performance. Using the PRESS statistic, the over-fitting is likely to be most severe in cases with a small number of training patterns, as the variance of the leave-one-out estimator decreases as the sample size becomes larger. Using the standard LSSVM, the elliptical RBF kernel only out-performs the spherical RBF kernel on two of the thirteen data sets, image and splice, which also happen to be the two largest data sets in terms of the number of training patterns. The greatest degradation in performance is obtained on the heart benchmark, the third smallest. The heart data set also has a relatively large number of input features (13). A large number of input features introduces many additional degrees of freedom with which to optimise the model selection criterion, and so will generally tend to encourage over-fitting. However, there may be a compact subset of highly relevant features with the remainder being almost entirely uninformative. In this case the advantage of suppressing the noisy inputs is so great that it overcomes the predisposition towards over-fitting, and so results in improved generalisation (as observed in the case of the image and splice benchmarks). Whether the use of an elliptical RBF kernel will improve or degrade generalisation largely depends on such characteristics of the data that are not known a-priori, and so it seems prudent to consider a range of kernel functions and select the best via cross-validation.
The experimental results indicate that Bayesian regularisation of the hyper-parameters is generally beneficial, without at this stage providing a complete solution to the problem of over-fitting the model selection criterion. The effectiveness of the Bayesian regularisation scheme is to a large extent dependent on the appropriateness of the prior imposed on the hyper-parameters. There is no reason to assume that the simple Gaussian prior used here is in any sense optimal, and this is an
issue where further research is necessary (see Section 4.2). The comparison of the integrate-out approach and the evidence framework highlights a deficiency of the simple Gaussian prior. It suggests that the integrate-out approach is likely to result in mild over-regularisation of the hyper-parameters in the presence of a large number of irrelevant features, as the corresponding hyper-parameters will be ill-determined.
The LSSVM with Bayesian regularisation of the hyper-parameters does not significantly outperform the expectation propagation based Gaussian process classifier over the suite of thirteen benchmark data sets considered. This is not wholly surprising as the EP-GPC is at least very close to the state-of-the-art, indeed it is interesting that the EP-GPC does not out-perform such a comparatively simple model. The EP-GPC uses the marginal likelihood as the model selection criterion, which gives the probability of the data, *given the assumptions of the model* (Rasmussen and Williams, 2006). Cross-validation based approaches, on the other hand, provide an estimate of generalisation performance that does not depend on the model assumptions, and so may be more robust against model mis-specification (Wahba, 1990). The no free lunch theorems suggest that, at least in terms of generalisation performance, there is a lack of inherent superiority of one classification method over another, in the absence of *a-priori* assumptions regarding the data. This implies that if one classifier performs better than another on a particular data set it is because the inductive biases of that classifier provide a better fit to the particular pattern recognition task, rather than to its superiority in a more general sense. A model with strong inductive biases is likely to benefit when these biases are well suited to the data, but will perform badly when they do not. While a model with weak inductive biases will be more robust, it is less likely to perform conspicuously well on any given data set. This means there are complementary advantages and disadvantages to both approaches.
### 4.1 Relationship to Existing Work
The use of a prior over the hyper-parameters is in accordance with normal Bayesian practice and has been used in Gaussian Process classification (Williams and Barber, 1998). The problem of over-fitting in model selection has also been addressed by Qi et al. (2004), in the case of selecting informative features for a logistic regression model using an Automatic Relevance Determination (ARD) prior (cf., Tipping, 2000). In this case, the Expectation Propagation method (Minka, 2001) is used to obtain a deterministic approximation of the posterior, and also (as a by-product) a leave-one-out performance estimate. The latter is then used to implement a form of early-stopping (e.g., Sarle, 1995) to prevent over-fitting resulting from the direct optimization of the marginal likelihood until convergence. It seems likely that this approach would also be beneficial in the case of tuning the hyper-parameters of the covariance function of a Gaussian process model, using either the leave-one-out estimate arising from the EP approximation, or an approximate leave-one-out estimate from the Laplace approximation (cf., Cawley and Talbot, 2007).
### 4.2 Directions for Further Research
In this paper, the regularisation term corresponds to a simple spherical Gaussian prior over the kernel parameters. One direction of research would be to investigate alternative regularisation terms. The
first possibility would be to use a regularisation term corresponding to a separable Laplace prior,
$$\Omega(\theta) = \frac{1}{2} \sum_{i=1}^{d} |\eta_i| \quad \implies \quad p(\theta) = \prod_{i=1}^{d} \frac{\xi}{2} \exp \left\{ -\frac{\xi}{2} |\eta_i| \right\}. $$
Setting the derivative of the regularised model selection criterion (14) to zero, we obtain
$$\left| \frac{\partial Q}{\partial \eta_i} \right| = \frac{\xi}{\zeta} \quad \text{if } |\eta_i| > 0 \quad \text{and} \quad \left| \frac{\partial Q}{\partial \eta_i} \right| < \frac{\xi}{\zeta} \quad \text{if } |\eta_i| = 0,$$
which implies that if the sensitivity of the leave-one-out error, $Q(\theta)$, falls below $\xi/\zeta$, the value of the hyper-parameter, $\eta_i$, will be set exactly to zero, effectively pruning that input from the model. In this way explicit feature selection may be obtained as a consequence of (regularised) model selection. The model selection criterion with Bayesian regularisation then becomes
$$L = \frac{\ell}{2} \log Q(\theta) + N \log \Omega(\theta)$$
where $N$ is the number of input features with non-zero scale factors. This potentially overcomes the propensity towards under-fitting the data that might be expected when using the Gaussian prior, as the pruning action of the Laplace prior means that the values of all remaining hyper-parameters are well-determined by the data. In the case of the Laplace prior, the integrate-out approach is exactly equivalent to continuous updates of the hyper-parameters according to the update formulae under the evidence framework (Williams, 1995). Alternatively, defining a prior over the function of a model seems more in accordance with Bayesian ideals than choosing a prior over the parameters of the model. The use of a prior over the hyper-parameters based on the smoothness of the resulting model also provides a potential direction for future research. In this case, the regularisation term might take the form,
$$\Omega(\theta) = \frac{1}{2\ell} \sum_{i=1}^{\ell} \sum_{j=1}^{d} \left[ \frac{\partial^2 \hat{y}_i}{\partial x_{ij}^2} \right]^2,$$
directly penalising models with excess curvature. This regularisation term corresponds to curvature driven smoothing in multi-layer perceptron networks (Bishop, 1993), except that the model output $\hat{y}_i$ is viewed as a function of the hyper-parameters, rather than of the weights. A penalty term based on the first-order partial derivatives is also feasible (cf., Drucker and Le Cun, 1992).
## 5. Conclusion

Leave-one-out cross-validation has proved to be an effective means of model selection for a variety of kernel learning methods, provided the number of hyper-parameters to be tuned is relatively small. The use of kernel functions with large numbers of parameters often provides sufficient degrees of freedom to over-fit the model selection criterion, leading to poor generalisation. In this paper, we have proposed the use of regularisation at the second level of inference, that is, model selection. The use of Bayesian regularisation is shown to be effective in reducing over-fitting by ensuring that the values of the kernel parameters remain small, giving a smoother kernel and hence a less complex classifier. This is achieved with only minimal computational expense, as the additional regularisation parameters are integrated out analytically using a reference prior. While a fully Bayesian model selection strategy is conceptually more elegant, it may also be less robust to model mis-specification. The use of leave-one-out cross-validation in model selection and Bayesian methods at the next level seems to be a pragmatic compromise. The effectiveness of this approach is clearly demonstrated in the experimental evaluation where, on average, the LS-SVM with Bayesian regularisation out-performs the expectation-propagation based Gaussian process classifier, using both spherical and elliptical RBF kernels.
## Acknowledgments

The authors would like to thank the organisers of the WCCI model selection workshop and performance prediction challenge and the NIPS multi-level inference workshop and model selection game, and our fellow participants, for the stimulating discussions that have helped to shape this work. We also thank Carl Rasmussen and Chris Williams for their advice regarding the EP-GPC, and the anonymous reviewers for their detailed and constructive comments that have significantly improved this paper.
## References
D. M. Allen. The relationship between variable selection and data augmentation and a method for prediction. *Technometrics*, 16:125–127, 1974.
E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. *LAPACK Users’ Guide*. SIAM Press, third edition, 1999.
C. M. Bishop. Curvature-driven smoothing: a learning algorithm for feedforward networks. *IEEE Transactions on Neural Networks*, 4(5):882–884, September 1993.
C. M. Bishop. *Neural Networks for Pattern Recognition*. Oxford University Press, 1995.
L. Bo, L. Wang, and L. Jiao. Feature scaling for kernel Fisher discriminant analysis using leave-one-out cross-validation. *Neural Computation*, 18:961–978, April 2006.
W. L. Buntine and A. S. Weigend. Bayesian back-propagation. *Complex Systems*, 5:603–643, 1991.
G. C. Cawley. Leave-one-out cross-validation based model selection criteria for weighted LS-SVMs. In *Proceedings of the International Joint Conference on Neural Networks (IJCNN-2006)*, pages 2970–2977, Vancouver, BC, Canada, July 16–21 2006.
G. C. Cawley and N. L. C. Talbot. Efficient leave-one-out cross-validation of kernel Fisher discriminant classifiers. *Pattern Recognition*, 36(11):2585–2592, November 2003.
G. C. Cawley and N. L. C. Talbot. Fast leave-one-out cross-validation of sparse least-squares support vector machines. *Neural Networks*, 17(10):1467–1475, December 2004.
G. C. Cawley and N. L. C. Talbot. Approximate leave-one-out cross-validation for kernel logistic regression. *Machine Learning* (submitted), 2007.
O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. *Machine Learning*, 46(1):131–159, 2002.
C. Cortes and V. Vapnik. Support-vector networks. *Machine Learning*, 20:273–297, 1995.
J. Demšar. Statistical comparisons of classifiers over multiple data sets. *Journal of Machine Learning Research*, 7:1–30, 2006.
H. Drucker and Y. Le Cun. Improving generalization performance using double back-propagation. *IEEE Transactions on Neural Networks*, 3(6):991–997, 1992.
S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. *Neural Computation*, 4(1):1–58, 1992.
G. H. Golub and C. F. Van Loan. *Matrix Computations*. The Johns Hopkins University Press, Baltimore, third edition, 1996.
I. S. Gradshteyn and I. M. Ryzhik. *Table of Integrals, Series and Products*. Academic Press, fifth edition, 1994.
I. Guyon, A. R. Saffari Azar Alamdari, G. Dror, and J. Buhmann. Performance prediction challenge. In *Proceedings of the International Joint Conference on Neural Networks (IJCNN-2006)*, pages 1649–1656, Vancouver, BC, Canada, July 16–21 2006.
T. Joachims. *Learning to Classify Text using Support Vector Machines - Methods, Theory and Algorithms*. Kluwer Academic Publishers, 2002.
S. S. Keerthi, K. B. Duan, S. K. Shevade, and A. N. Poo. A fast dual algorithm for kernel logistic regression. *Machine Learning*, 61(1–3):151–165, November 2005.
G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. *J. Math. Anal. Appl.*, 33:82–95, 1971.
R. Kohavi. A study of cross-validation and bootstrap for accuracy estimation and model selection. In *Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI)*, pages 1137–1143, San Mateo, CA, 1995. Morgan Kaufmann.
P. A. Lachenbruch and M. R. Mickey. Estimation of error rates in discriminant analysis. *Technometrics*, 10(1):1–11, February 1968.
A. Luntz and V. Brailovsky. On estimation of characters obtained in statistical procedure of recognition (in Russian). *Tekhnicheskaya Kibernetika*, 3, 1969.
D. J. C. MacKay. Bayesian interpolation. *Neural Computation*, 4(3):415–447, 1992a.
D. J. C. MacKay. A practical Bayesian framework for backprop networks. *Neural Computation*, 4(3):448–472, 1992b.
D. J. C. MacKay. The evidence framework applied to classification networks. *Neural Computation*, 4(5):720–736, 1992c.
D. J. C. MacKay. Hyperparameters: Optimise or integrate out? In G. Heidbreder, editor, *Maximum Entropy and Bayesian Methods*. Kluwer, 1994.
J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. *Philosophical Transactions of the Royal Society of London, A*, 209:415–446, 1909.
C. A. Micchelli. Interpolation of scattered data: Distance matrices and conditionally positive definite functions. *Constructive Approximation*, 2:11–22, 1986.
S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In *Neural Networks for Signal Processing*, volume IX, pages 41–48. IEEE Press, New York, 1999.
S. Mika, G. Rätsch, J. Weston, B. Schölkopf, A. J. Smola, and K.-R. Müller. Invariant feature extraction and classification in feature spaces. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, *Advances in Neural Information Processing Systems*, volume 12, pages 526–532. MIT Press, 2000.
T. P. Minka. Expectation propagation for approximate Bayesian inference. In *Proceedings of the 17th Annual Conference on Uncertainty in Artificial Intelligence*, pages 362–369. Morgan Kaufmann, 2001.
J. A. Nelder and R. Mead. A simplex method for function minimization. *Computer Journal*, 7:308–313, 1965.
Y. Qi, T. P. Minka, R. W. Picard, and Z. Ghahramani. Predictive automatic relevance determination by expectation propagation. In *Proceedings of the 21st International Conference on Machine Learning*, pages 671–678, Banff, Alberta, Canada, July 4–8 2004.
C. E. Rasmussen and C. K. I. Williams. *Gaussian Processes for Machine Learning*. Adaptive Computation and Machine Learning. MIT Press, 2006.
G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. *Machine Learning*, 42(3):287–320, 2001.
K. Saadi, N. L. C. Talbot, and G. C. Cawley. Optimally regularised kernel Fisher discriminant analysis. In *Proceedings of the 17th International Conference on Pattern Recognition (ICPR-2004)*, volume 2, pages 427–430, Cambridge, United Kingdom, August 23–26 2004.
W. S. Sarle. Stopped training and other remedies for overfitting. In *Proceedings of the 27th Symposium on the Interface of Computer Science and Statistics*, pages 352–360, Pittsburgh, PA, USA, June 21–24 1995.
B. Schölkopf, K. Tsuda, and J.-P. Vert. *Kernel Methods in Computational Biology*. MIT Press, 2004.
T. Seaks. SYMINV: An algorithm for the inversion of a positive definite matrix by the Cholesky decomposition. *Econometrica*, 40(5):961–962, September 1972.
J. Shawe-Taylor and N. Cristianini. *Kernel Methods for Pattern Analysis*. Cambridge University Press, 2004.
M. Stone. Cross-validatory choice and assessment of statistical predictions. *Journal of the Royal Statistical Society*, B 36(1):111–147, 1974.
S. Sundararajan and S. S. Keerthi. Predictive approaches for choosing hyperparameters in Gaussian processes. *Neural Computation*, 13(5):1103–1118, May 2001.
J. A. K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. *Neural Processing Letters*, 9(3):293–300, June 1999.
J. A. K. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle. *Least Squares Support Vector Machines*. World Scientific, 2002.
A. N. Tikhonov and V. Y. Arsenin. *Solutions of Ill-posed Problems*. John Wiley, New York, 1977.
M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. *Journal of Machine Learning Research*, 1:211–244, June 2000.
G. Wahba. *Spline Models for Observational Data*. SIAM Press, Philadelphia, PA, 1990.
C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 20(12):1342–1351, December 1998.
P. M. Williams. A Marquardt algorithm for choosing the step size in backpropagation learning with conjugate gradients. Technical Report CSRP-229, University of Sussex, February 1991.
P. M. Williams. Bayesian regularization and pruning using a Laplace prior. *Neural Computation*, 7(1):117–143, 1995.
|
Access Free Manual Apex Note Braille
**California Access Compliance Reference Manual**
**Digital Audiobook Players**
**OSCEs for Medical Finals**
*John Wiley & Sons* OSCEs for Medical Finals has been written by doctors from a variety of specialties with extensive experience of medical education and of organising and examining OSCEs. The book and website package consists of the most common OSCE scenarios encountered in medical finals, together with checklists, similar to OSCE mark schemes, that cover all of the key learning points students need to succeed. Each topic checklist contains comprehensive exam-focussed advice on how to maximise performance together with a range of 'insider's tips' on OSCE strategy and common OSCE pitfalls. Designed to provide enough coverage for those students who want to gain as many marks as possible in their OSCEs, and not just a book which will ensure students 'scrape a pass', the book is fully supported by a companion website at www.wiley.com/go/khan/osces, containing: OSCE checklists from the book A survey of doctors and students of which OSCEs have a high chance of appearing in finals in each UK medical school
**How to Identify & Resolve Radio-tv Interference Problems**
**Mathematics Under the Microscope**
**Notes on Cognitive Aspects of Mathematical Practice**
*American Mathematical Soc.* The author’s goal is to start a dialogue between mathematicians and cognitive scientists. He discusses, from a working mathematician’s point of view, the mystery of mathematical intuition: why are certain mathematical concepts more intuitive than others? To what extent does the ‘small scale’ structure of mathematical concepts and algorithms reflect the workings of the human brain? What are the ‘elementary particles’ of mathematics that build up the mathematical universe? The book is saturated with amusing examples from a wide range of disciplines--from turbulence to error-correcting codes to logic--as well as with just puzzles and brainteasers. Despite the very serious subject matter, the author’s approach is lighthearted and entertaining. This is an unusual and unusually fascinating book. Readers who never thought about mathematics after their school years will be amazed to discover how many habits of mind, ideas, and even material objects that are inherently mathematical serve as building blocks of our civilization and everyday life. A professional mathematician, reluctantly breaking the daily routine, or pondering on some resisting problem, will open this book and enjoy a sudden return to his or her young days when mathematics was fresh, exciting, and holding all promises. And do not take the word ‘‘microscope’’ in the title too literally: in fact, the author looks around, in
time and space, focusing in turn on a tremendous variety of motives, from mathematical "memes" (genes of culture) to an unusual life of a Hollywood star. --Yuri I. Manin, Max-Planck Institute of Mathematics, Bonn, and Northwestern University
War Surgery
Working with Limited Resources in Armed Conflict and Other Situations of Violence
Accompanying CD-ROM contains graphic footage of various war wound surgeries.
Scientific and Technical Aerospace Reports
Low Vision Manual
*Elsevier Health Sciences* "...this book represents a real milestone for low vision care because it is one of the first low vision books in the world, and the first from the UK, that doesn't just give lip service to multi-disciplinary collaboration: it has a multi-disciplinary authorship." Barbara Ryan, Research Associate, School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK. Low Vision Manual is a comprehensive guide and up-to-date reference source, written by clinical and research experts in the fields of disease detection and management; primary and secondary optometric care; low vision optics and prescribing; counselling and rehabilitation. All these areas are explored in this book in four key sections: Section One: Definition of low vision and its epidemiology. Section Two: The measurement of visual function of the visually impaired. Section Three: The optics of, and practical tips on prescribing, low vision aids. Section Four: Rehabilitation strategies and techniques. This is an important reference tool for all professionals involved with the visually impaired. The book covers everything a practitioner will need on a day-to-day basis. A clear layout with practical tips, worked examples and practical pearls will enable the front-line eye-care professional to provide patients with sound, research-based clinical care and rehabilitation. An essential reference for: ophthalmology, optometry, orthoptics, ophthalmic nursing, visual rehabilitation, occupational therapy, social work, peer work, psychology, and dispensing opticians.
Understanding Education for the Visually Impaired
AOSIS The contribution that this book makes to scholarship is regarded as ground-breaking, as it is based on recent research conducted with teachers on the ground-level, as well as on research and experiences of practitioners, gained over many years. In this volume, Understanding education for the visually impaired, the focus falls on understanding visual impairment within the South African context, more specifically on what the education of these learners entails. In addition to the contribution to existing literature in the fields of inclusive education and visual impairment, the publication has practical application value for teachers and practitioners who work with and support such learners.
MRCP Paces Manual
CMJ New Music Report
CMJ New Music Report is the primary source for exclusive charts of non-commercial and college radio airplay and independent and trend-forward retail sales. CMJ's trade publication compiles playlists for college and non-commercial stations, often a prelude to larger success.
Improving Transport Accessibility for All: Guide to Good Practice
OECD Publishing This guide brings the reader information on the latest in good practice regarding improving transport accessibility for all users.
Core Topics in Paediatric Anaesthesia
Cambridge University Press This book covers all of the important elements of paediatric anaesthesia in a concise and structured manner. From the premature infant to the teenager, readers are guided through the complexities they may encounter, with key points at the end of each chapter to summarise the most important information. The common surgical conditions encountered in daily practice are covered along with comprehensive discussion of consent and the law, safeguarding children, and the complexity of drug dosing in the paediatric population. Other topics covered include trauma, burns, resuscitation, principles of intensive care, and transporting a sick child. Each chapter is written by an acknowledged expert in their field, sharing a wealth of relevant, practical experience. Covering the whole curriculum necessary for advanced training, this is essential reading for trainees, general anaesthetists managing children in non-specialist hospitals and anyone aspiring to become a paediatric anaesthetist, as well as those established in the field.
The Rule of Recognition and the U.S. Constitution
Oxford University Press A volume of original essays that discusses the applicability of H. L. A. Hart's rule of recognition model of a legal system to U. S. Constitutional law as discussed in his book "The concept of law".
The Woody Plant Seed Manual
Forest Service
Attention
From Theory to Practice
Oxford University Press Attention has been one of the most popular subjects in basic cognitive-psychology research, and so its study has generated much empirical data and many theoretical explanations. Leading researchers explain how advantage can be taken of all the knowledge amassed on attention in basic-science research.
The Hands-on Guide to Practical Paediatrics
John Wiley & Sons Winner of the Paediatrics category at the BMA Book Awards 2015 About to start a paediatrics rotation? Working with children for the first time? Thinking about a career in paediatrics? The Hands-on Guide to Practical Paediatrics is the ultimate practical guide for medical students encountering paediatrics for the first time, junior doctors thinking about working with children, and new paediatric trainees. It's full of vital information on practical procedures, prescribing for young patients, and communicating with children and young people, as well as guidance on the paediatric training programme and paediatrics as a career. Full of clinical tips, and covering key information on developmental stages, common paediatric emergencies and ethical dilemmas, and child protection, The Hands-on Guide to Practical Paediatrics is also supported by online resources including practice prescribing scenarios and video content at www.wileyhandsonguides.com/paediatrics Take the stress out of paediatrics with The Hands-on Guide!
Expert Oracle Application Express
Apress: Expert Oracle Application Express, 2nd Edition is newly updated for APEX 5.0 and brings deep insight from some of the best APEX practitioners in the field today. You’ll learn about important features in APEX 5.0, and how those can be applied to make your development work easier and with greater impact on your business. Oracle Application Express (APEX) is an entirely web-based development framework that is built into every edition of Oracle Database. The framework rests upon Oracle’s powerful PL/SQL language, enabling power users and developers to rapidly develop applications that easily scale to hundreds, even thousands of concurrent users. APEX has seen meteoric growth and is becoming the tool of choice for ad-hoc application development in the enterprise. The many authors of Expert Oracle Application Express, 2nd Edition build their careers around APEX. They know what it takes to make the product sing—developing secure applications that can be deployed globally to users inside and outside a large enterprise. The authors come together in this book to share some of their deepest and most powerful insights into solving the difficult problems surrounding globalization, configuration and lifecycle management, and more. New in this edition for APEX 5.0 is coverage of Oracle REST Data Services, map integration, jQuery with APEX, and the new Page Designer. You’ll learn about debugging and performance, deep secrets to customizing your application user interface, how to secure applications from intrusion, and about deploying globally in multiple languages. Expert Oracle Application Express, 2nd Edition is truly a book that will move you and your skillset a big step towards the apex of Application Express development. Contains all-new content on Oracle REST Data Services, jQuery in APEX, and map integration Addresses globalization and other concerns of enterprise-level development Shows how to customize APEX for your own application needs.
Paediatrics at a Glance
John Wiley & Sons Paediatrics at a Glance provides an introduction to paediatrics and the problems encountered in child health as they present in primary, community and secondary care, from birth through to adolescence. Thoroughly updated to reflect changes in understanding of childhood illness over the last 5 years, the 4th edition of this best-selling textbook diagrammatically summarises the main differential diagnoses for each presenting symptom, while accompanying text covers important disorders and conditions as well as management information. Paediatrics at a Glance: • Is an accessible, user-friendly guide to the entire paediatric curriculum • Features expanded coverage of psychological issues and ethics in child health • Includes more on advances in genetics, screening and therapy of childhood illness • Contains new videos of procedures and concepts on the companion website • Includes a brand new chapter on Palliative Care - an emerging area in the specialty • Features full colour artwork throughout • Includes a companion website at www.ataglanceseries.com/paediatrics featuring interactive self-assessment case studies, MCQs, videos of the procedures and concepts covered in the book, and links to online resources Paediatrics at a Glance is the ideal companion for anyone about to start a paediatric attachment or module and will appeal to medical students, junior doctors and GP trainees as well as nursing students and other health professionals.
MEG
An Introduction to Methods
Oxford University Press Magnetoencephalography (MEG) is an exciting brain imaging technology that allows real-time tracking of neural activity, making it an invaluable tool for advancing our understanding of brain function. In this comprehensive introduction to MEG, Peter Hansen, Morten Kringelbach, and Riitta Salmelin have brought together the leading researchers to provide the basic tools for planning and executing MEG experiments, as well as analyzing and interpreting the resulting data. Chapters on the basics describe the fundamentals of MEG and its instrumentation, and provide guidelines for designing experiments and performing successful measurements. Chapters on data analysis present it in detail, from general concepts and assumptions to analysis of evoked responses and oscillatory background activity. Chapters on solutions propose potential solutions to the inverse problem using techniques such as minimum norm estimates, spatial filters and beamformers. Chapters on combinations elucidate how MEG can be used to complement other neuroimaging techniques. Chapters on applications provide practical examples of how to use MEG to study sensory processing and cognitive tasks, and how MEG can be used in a clinical setting. These chapters form a complete basic reference source for those interested in exploring or already using MEG that will hopefully inspire them to try to develop new, exciting approaches to designing and analyzing their own studies. This book will be a valuable resource for researchers from diverse fields, including neuroimaging, cognitive
neuroscience, medical imaging, computer modelling, as well as for clinical practitioners.
**Principles of Management**
*Principles of Management* is designed to meet the scope and sequence requirements of the introductory course on management. This is a traditional approach to management using the leading, planning, organizing, and controlling approach. Management is a broad business discipline, and the *Principles of Management* course covers many management areas such as human resource management and strategic management, as well as behavioral areas such as motivation. No one individual can be an expert in all areas of management, so an additional benefit of this text is that specialists in a variety of areas have authored individual chapters. Contributing Authors David S. Bright, Wright State University Anastasia H. Cortes, Virginia Tech University Eva Hartmann, University of Richmond K. Praveen Parboteeah, University of Wisconsin-Whitewater Jon L. Pierce, University of Minnesota-Duluth Monique Reece Amit Shah, Frostburg State University Siri Terjesen, American University Joseph Weiss, Bentley University Margaret A. White, Oklahoma State University Donald G. Gardner, University of Colorado-Colorado Springs Jason Lambert, Texas Woman’s University Laura M. Leduc, James Madison University Joy Leopold, Webster University Jeffrey Muldoon, Emporia State University James S. O’Rourke, University of Notre Dame
**A Shock to Thought**
**Expression after Deleuze and Guattari**
*Routledge* *A Shock to Thought* brings together essays that explore Deleuze and Guattari’s philosophy of expression in a number of contemporary contexts. It will be of interest to all those in philosophy, cultural studies and art theory. The volume also contains an interview with Guattari which clearly restates the ‘aesthetic paradigm’ that organizes both his and Deleuze’s work.
**Handbook of Consumer Psychology**
*Psychology Press* This Handbook contains a unique collection of chapters written by the world’s leading researchers in the dynamic field of consumer psychology. Although these researchers are housed in different academic departments (ie. marketing, psychology, advertising, communications) all have the common goal of attaining a better scientific understanding of cognitive, affective, and behavioral responses to products and services, the marketing of these products and services, and societal and ethical concerns associated with marketing processes. Consumer psychology is a discipline at the interface of marketing, advertising and psychology. The research in this area focuses on fundamental psychological processes as well as on issues associated with the use of theoretical principles in applied contexts. The Handbook presents state-of-the-art research as well as providing a place for authors to put forward suggestions for future research and practice. The Handbook is most appropriate for graduate level courses in marketing, psychology, communications, consumer behavior and advertising.
**Illustrated Textbook of Paediatrics**
*Elsevier Health Sciences* Thoroughly revised and updated, the fifth edition of this prize-winning title retains the high level of illustration and accessibility that has made it so popular worldwide with medical students and trainees approaching clinical specialty exams. *Illustrated Textbook of Paediatrics* has been translated into eight languages over its life. Case studies. Summary boxes. Tips for patient education. Highly illustrated with 100s of colour images. Diseases consistently presented by Clinical features; Investigations; Management; Prognosis; and, where appropriate, Prevention. Separate chapters on Accidents Child protection Diabetes and endocrinology Inborn Errors of Metabolism New chapter on Global child health New co-editor, Will Carroll, Chair of MRCPCH Theory Examinations.
Emergencies in Children's and Young People's Nursing
OUP Oxford This book is a survival guide for all nurses who provide emergency care to children and young people. It helps those nurses who are at the front-line of care to quickly assess the level of emergency and plan the initial management. The consistent layout and the note-style format allows them to find and take in information quickly, whilst on the ward. Written by nurses, for nurses, this quick-reference book contains the most important information nurses need to know when caring for children and young people.
The Oxford Handbook of Aphasia and Language Disorders
Oxford University Press The Oxford Handbook of Aphasia and Language Disorders integrates neural and cognitive perspectives, providing a comprehensive overview of the complex language and communication impairments that arise in individuals with acquired brain damage.
Designing Sidewalks and Trails for Access
Safety and Environmental Standards for Fuel Storage Sites
Final Report
The main purpose of this report is to specify the minimum standards of control which should be in place at all establishments storing large volumes of gasoline.
A Journey of Embedded and Cyber-Physical Systems
Essays Dedicated to Peter Marwedel on the Occasion of His 70th Birthday
Springer Nature This Open Access book celebrates Professor Peter Marwedel’s outstanding achievements in compilers, embedded systems, and cyber-physical systems. The contributions in the book summarize the content of invited lectures given at the workshop “Embedded Systems” held at the Technical University Dortmund in early July 2019 in honor of Professor Marwedel’s seventieth birthday. Provides a comprehensive view from leading researchers with respect to the past, present, and future of the design of embedded and cyber-physical systems; Discusses challenges and (potential) solutions from theoreticians and practitioners on modeling, design, analysis, and optimization for embedded and cyber-physical systems; Includes coverage of model verification, communication, software runtime systems, operating systems and real-time computing.
Development Communication Sourcebook
Broadening the Boundaries of Communication
World Bank Publications The ‘Development Communication Sourcebook’ highlights how the scope and application of communication in the development context are broadening to include a more dialogic approach. This approach facilitates assessment of risks and opportunities, prevents problems and conflicts, and enhances the results and sustainability of projects when implemented at the very beginning of an initiative. The book presents basic concepts and explains key challenges faced in daily practice. Each of the four modules is self-contained, with examples, toolboxes, and more.
Human Rights Manual for District Magistrate
The Architects' Handbook
*John Wiley & Sons* *The Architects' Handbook* provides a comprehensive range of visual and technical information covering the great majority of building types likely to be encountered by architects, designers, building surveyors and others involved in the construction industry. It is organised by building type and concentrates very much on practical examples. Including over 300 case studies, the Handbook is organised by building type and concentrates very much on practical examples. It includes: · a brief introduction to the key design considerations for each building type · numerous plans, sections and elevations for the building examples · references to key technical standards and design guidance · a comprehensive bibliography for most building types The book also includes sections on designing for accessibility, drawing practice, and metric and imperial conversion tables. To browse sample pages please see [http://www.blackwellpublishing.com/architectsdata](http://www.blackwellpublishing.com/architectsdata)
Gramophone
Malawi
Poverty Reduction Strategy Paper
*International Monetary Fund* *The Malawi Growth and Development Strategy II (MGDS-II)* is a poverty reduction strategy for the period 2006–11, which is aimed at fulfilling Malawi’s future developmental aspiration—Vision 2020. The strategy identifies broad thematic areas and key priority areas to bring about sustained economic growth. A striking feature of this strategy is that the various governmental organizations, private sector, and general public are equal stakeholders. However, successful implementation of MGDS-II will largely depend on sound macroeconomic management and a stable political environment.
World History
Cultures, States, and Societies To 1500
*World History: Cultures, States, and Societies to 1500* offers a comprehensive introduction to the history of humankind from prehistory to 1500. Authored by six USG faculty members with advanced degrees in History, this textbook offers up-to-date original scholarship. It covers such cultures, states, and societies as Ancient Mesopotamia, Ancient Israel, Dynastic Egypt, India’s Classical Age, the Dynasties of China, Archaic Greece, the Roman Empire, Islam, Medieval Africa, the Americas, and the Khanates of Central Asia. It includes 350 high-quality images and maps, chronologies, and learning questions to help guide student learning. Its digital nature allows students to follow links to applicable sources and videos, expanding their educational experience beyond the textbook. It provides a new and free alternative to traditional textbooks, making World History an invaluable resource in our modern age of technology and advancement.
The Cambridge Handbook of Applied Perception Research
*Cambridge University Press* *The Cambridge Handbook of Applied Perception Research* covers core areas of research in perception with an emphasis on its application to real-world environments. Topics include multisensory processing of information, time perception, sustained attention, and signal detection, as well as pedagogical issues surrounding the training of applied perception researchers. In addition to familiar topics, such as perceptual learning, the Handbook focuses on emerging areas of importance, such as human-robot
coordination, haptic interfaces, and issues facing societies in the twenty-first century (such as terrorism and threat detection, medical errors, and the broader implications of automation). Organized into sections representing major areas of theoretical and practical importance for the application of perception psychology to human performance and the design and operation of human-technology interdependence, it also addresses the challenges to basic research, including the problem of quantifying information, defining cognitive resources, and theoretical advances in the nature of attention and perceptual processes.
**Advanced Higher Biology**
'Official SQA Past Papers' provide perfect exam preparation. As well as delivering at least three years of actual past papers - including the 2008 exam - all papers are accompanied by examiner-approved answers to show students how to write the best responses for the most marks.
**Designing Inclusive Educational Spaces for Autism**
**Lettre Sur Les Aveugles a L'usage De Ceux Qui Voient**
*Createspace Independent Publishing Platform* In this text, Denis Diderot examines the question of visual perception, a subject given new currency at the time by the success of surgical operations that made it possible to give sight to certain people blind from birth. Speculation was then rife about how much of vision, and of the use an individual can make of it, is owed to perception alone, and how much to habit and experience: for example, in finding one's bearings in space, identifying shapes, perceiving distances and volumes, and distinguishing a realistic painting from reality. Diderot explains that a blind person who suddenly begins to see does not immediately understand what he sees, and that he will take time to connect his experience of shapes and distances, acquired through touch, with the images he perceives with his eye.
**Inclusive Design Patterns**
**Coding Accessibility Into Web Design**
We make inaccessible and unusable websites and apps all the time, but it's not for lack of skill or talent. It's just a case of doing things the wrong way. We try to build the best experiences we can, but we only make them for ourselves and for people like us. This book looks at common interface patterns from the perspective of an inclusive designer—someone trained in building experiences that cater to the huge diversity of abilities, preferences and circumstances out there. There's no such thing as an 'average' user, but there is such a thing as an average developer. This book will take you from average to expert in the area that matters the most: making things more readable and more usable to more people.
|
The Western North Carolina Dulcimer Collective is a member-supported group of players of mountain and hammered dulcimers, and those who enjoy listening to dulcimers and/or playing other traditional instruments with them. The group meets once per month to share tunes and information. Dues are $5.00 per year payable to WNCDC – Mail checks to Carl Cochrane, 12 Pheasant Dr, Asheville, NC 28803.
**Dulcimer Club News**
More big news: We now have our first two “Tune Learning” CD’s available! They have tunes played slowly on just the melody string, and then again up to speed with chords, and some a third time finger picked. Two years’ tunes fit on each CD – 1996-97 and 1998-99 are ready, with 2000-01 the next one planned. When we sell enough of each, I’ll start on the next, until all six (1990-2001) are done! They’re professionally duplicated and printed, not home-made CD-R’s, but we’re still able to charge less than I’d thought last month - $3.50 each, or $5.00 with shipping. (If we ship more than one, it’ll be $5.00 for the first and $4.00 for each one after that, in the same shipment. $5/1, $9/2, $13/3, etc.)
As I mentioned last month, we’re also now able to email the newsletter in “.pdf” format. Right now we’re emailing it to over 40 dulcimer friends around the country. Let me know if you’d like it that way.
More big news – we now have a website! Bruce Ford at EverythingDulcimer.com has offered free website hosting to dulcimer clubs, and I jumped on board. Our site isn’t very fancy, but you can check it out at [http://www.EverythingDulcimer.com/wncdc](http://www.EverythingDulcimer.com/wncdc). And don’t forget – I’m posting all of our monthly song review pieces at [http://www.EverythingDulcimer.com](http://www.EverythingDulcimer.com) so that when you get the newsletter you can download and print out all of the pieces you don’t already have, to practice them before the club meeting!
The monthly tunes this quarter are from a book given to me by a fellow ESL tutor – Thanks, Catherine! It’s “The Folk Song Abecedary” by James Leisy. (Hawthorn Books, NY, 1966.) The October tune is a southern mountain tune called “All The Good Times” and will probably sound familiar to you. The November tune is a fun tune called “Buffalo Boy”. The December tune is a sweet ballad (...) called “The Jealous Lover”, for those of you who like tunes like “Banks of the Ohio”. We’ll make up for that with a Christmas tune you should recognize, “The Holly and The Ivy”.
**Song Review Schedule**
| Month | Tunes | Quarter |
|-----------|--------------------------------------------|---------------|
| October | Boney | (2nd Quarter, 1999) |
| | Little Rosewood Casket | (3rd Quarter, 1999) |
| | Old North State | (3rd Quarter, 1999) |
| November | Come Here, Lord | (4th Quarter, 2000) |
| | Drink To Me Only With Thine Eyes | (4th Quarter, 1999) |
| | In The Bleak Midwinter | (4th Quarter, 1994) |
| December | Carol Of The Hay | (4th Quarter, 1998) |
| | Er Is Een Kindeke Geboren Op Aard' | (4th Quarter, 1997) |
| | My Faith Looks Up To Thee | (3rd Quarter, 1999) |
All The Good Times
1. All the good times are past and gone.
All the good times are o’er;
All the good times are past and gone.
Darlin’, don’t weep no more.
2. I wish to the Lord I’d never been born,
Or had died when I was young;
And never had seen your sparkling blue eyes
Or heard your flattering tongue.
3. Oh, don’t you see that distant train
A-comin’ round the bend.
It’ll take me away from this old town;
Never to return again.
4. Oh, don’t you see that lonesome dove
That flies from pine to pine.
He’s mourning for his own true love
Just like I mourn for mine.
Buffalo Boy
1. When are we gonna get married, married, married?
When are we gonna get married,
Dear old buffalo boy?
2. I guess we’ll marry in a week,
in a week, in a week,
I guess we’ll marry in a week,
That is, if the weather be good.
3. How’re you gonna come to the wedding,
to the wedding, to the wedding?
How’re you gonna come to the wedding,
Dear old buffalo boy?
4. I guess I’ll come in my ox cart...
That is, if the weather be good.
5. Why don’t you come in your buggy...
Dear old buffalo boy?
6. My ox won’t fit in the buggy...
Not even if the weather be good.
7. Who’re you gonna bring to the wedding...
Dear old buffalo boy?
8. I guess I’ll bring my children...
That is, if the weather be good.
9. I didn’t know you had no children...
Dear old buffalo boy.
10. Oh, yes, I have five children...
Six – if the weather be good.
11. There ain’t gonna be no wedding...
Not even if the weather be good.
The Jealous Lover
1. Way down in love’s green valley
Where roses bloom and fade,
There was a jealous lover
In love with a beautiful maid.
2. One night the moon shone brightly,
The stars were shining, too,
When to Florella’s window
The jealous lover drew.
3. “Come, love, and let us wander
Down where the woods are gay.
And strolling we will ponder
Upon our wedding day.”
4. So arm in arm they wandered;
The night birds sang above;
The jealous lover grew angry
With the beautiful woman he loved.
5. The night grew dark and dreary,
Florella was afraid to stay.
“I am so tired and weary,
I must retrace my way.”
6. “Retrace your steps? No, never!
For you have met your doom.
Farewell to you forever,
To parents, friends and home.”
7. Down on her knees before him,
Florella pleaded for her life.
And deep into her bosom,
He plunged his fatal knife.
8. Down on his knees he bended,
Saying, “Oh, God, what have I done?
I murdered my own Florella,
As true as the rising sun.”
9. Now in that lonely valley,
Where the willows weep o’er her grave,
Florella lies forgotten,
Where the merry sunbeams play.
All The Good Times
Mountain Dulcimer: D-A-dd and D-A-AA
Arrangement: Steve Smith
[Tablature not reproduced legibly in this copy. The recoverable details: a four-phrase arrangement with fret numbers for both the D-A-dd and D-A-AA tunings; phrase 1, “All the good times are past and gone” (chords D, D7, G, D; melody F# F# F#, D E F#, E D B, A); phrase 2, “All the good times are o’er” (chords D, E7, A7; melody F# F# F#, D E F#, E); phrase 3, a repeat of phrase 1; phrase 4, “Darlin’, don’t weep no more” (chords D, A7, D; melody A D F#, E F# E, D).]
Buffalo Boy
Mountain Dulcimer: D-A-dd and D-A-AA
Arrangement: Steve Smith
[Tablature not reproduced legibly in this copy. The recoverable details: the first verse (“When are we gonna get married, married, married? / When are we gonna get married, / Dear old buffalo boy?”) set over chords D, A7 and D, with melody phrases D D D, D D E; E A A; and E A F# F# E, and fret numbers for both the D-A-dd and D-A-AA tunings.]
The Jealous Lover
Mountain Dulcimer: D-A-dd and D-A-AA
Arrangement: Steve Smith
[Tablature not reproduced legibly in this copy; the arrangement sets the first verse: “Way down in love’s green valley, / Where roses bloom and fade, / There was a jealous lover / In love with a beautiful maid.”]
The Holly and The Ivy
Mountain Dulcimer: D-A-dd and D-A-AA
Arrangement: Steve Smith
1. The holly and the ivy, When they are both full grown,
Of all the trees that are in the wood, The holly bears the crown.
Chorus
The rising of the sun, And the running of the deer,
The playing of the merry organ, Sweet singing in the choir.
2. The holly bears a blossom, As white as the lily flower,
And Mary bore sweet Jesus Christ To be our sweet saviour.
3. The holly bears a berry, As red as any blood,
And Mary bore sweet Jesus Christ To do poor sinners good.
4. The holly bears a prickle, As sharp as any thorn,
And Mary bore sweet Jesus Christ On Christmas Day in the morn
5. The holly bears a bark, As bitter as any gall,
And Mary bore sweet Jesus Christ For to redeem us all.
6. The holly and the ivy, When they are both full grown,
Of all the trees that are in the wood, The holly bears the crown.
MEETING DATES
October 13, 2002 - Regular Meeting
November 10, 2002 - Regular Meeting
December 8, 2002 - Regular Meeting
MEETING LOCATION/TIME
Second Sunday of each month from 2:30-5:00 at
The Folk Art Center Upstairs Gallery, Blue Ridge Parkway,
Asheville
The Folk Art Center is located on the Blue Ridge Parkway at Milepost 382,
about 1/2 mile North of US 70, just East of Asheville. Take I-40 Exit 55 to
Highway 70, then left to the Parkway, or take I-240 Exit 7 and go East on
Highway 70 to the Parkway. The Club meets in the upstairs gallery, across
from the top of the ramp as you enter the Folk Art Center.
Handicapped Access is available: From Highway 70, go West from the
Parkway just past the VA Medical Center to Riceville Road. Go to the Folk
Art Center Service Entrance. A ramp leads to a second floor entrance next to
where we set up.
Western North Carolina
Dulcimer Collective
c/o Steve Smith
607 East Blue Ridge Road
East Flat Rock, NC 28726
|
Further errors in polymorph identification: furosemide and finasteride
Reassessment of the reported single-crystal X-ray diffraction characterization of polymorphs of furosemide and finasteride shows that, in each case, incomplete data collections have resulted in the mistaken identification of two forms that are, in fact, identical.
1. Introduction
Over the years, a variety of techniques have been used for the characterization of solid forms in general and polymorphs in particular. These have included powder X-ray diffraction (PXRD), vibrational spectroscopy, solid-state NMR spectroscopy, microscopy and differential scanning calorimetry. From time to time, full crystal structure determinations have been carried out in order to provide what are expected to be unequivocal characterizations of the solid forms. However, the assumption that single-crystal structure determination provides a clear benchmark in the identification of solid forms has to be viewed with some caution. A recent publication by Clemente & Marzotto (2004) drew attention to the kinds of errors that can be made in the interpretation of single-crystal diffraction data, leading to false structural representations. That paper focuses on errors that arise through misinterpretation of space-group symmetry, and amongst its examples are two cases of ‘false polymorphism’. In a related development, van de Streek & Motherwell (2005) have recently described automatic procedures for searching the Cambridge Structural Database (CSD; Version 5.27 of November 2005; Allen, 2002) for as yet unrecognized polymorphs. These methods rely on comparisons of computed PXRD patterns, reduced cell dimensions, or both. The procedure can also identify ‘false positives’, i.e. pairs of structures that are falsely identified as polymorphs. In addition, the authors discuss problems that might frustrate the automated process.
In the course of our own detailed study of polymorphic and related systems, we have discovered errors in the reporting of claimed polymorphism that have their origins in a form of misinterpretation that was not considered by Clemente & Marzotto (2004). We report here on two such examples. A referee has kindly directed us to another report of similar occurrences to those discussed here (Hao et al., 2005).
One of the compounds we have studied is the important diuretic furosemide (also called frusemide), (I), and we have identified a significant inconsistency in the published single-crystal work. We have also come across a similar occurrence in the case of finasteride, (II), a treatment for benign prostatic hyperplasia.
Details of our new interpretations are given below.
2. Furosemide
The first single-crystal structure determination was reported by Fronckowiak & Hauptman (1976; CSD refcode FURSEM). The structure was described as triclinic, with cell dimensions $a = 5.251$, $b = 8.771$, $c = 15.038$ Å, $\alpha = 101.77^\circ$, $\beta = 89.05^\circ$, $\gamma = 97.57^\circ$, $V = 672.09$ Å$^3$, space group $P\bar{1}$, with $Z = 2$ ($Z' = 1$). Lamotte et al. (1978; CSD refcode FURSEM01) reported a second structure determination of a pure form of the compound, also triclinic, with cell dimensions $a = 10.467$, $b = 15.801$, $c = 9.584$ Å, $\alpha = 71.07^\circ$, $\beta = 115.04^\circ$, $\gamma = 108.48^\circ$, $V = 1332.84$ Å$^3$, space group $P\bar{1}$, with $Z = 4$ ($Z' = 2$). These authors commented on the fact that the two structures had the $b^*c^*$ planes in common, with the $a$ axis of the new structure twice that of the older one, but did not take the comparison further owing to the lack of coordinate data for the first structure. The CSD also includes references to two reports of a further structure determination by Shin & Jeon (1983; CSD refcodes FURSEM02 and FURSEM12), in which the structure was assigned a cell with $a = 5.234$, $b = 8.751$, $c = 15.948$ Å, $\alpha = 103.68^\circ$, $\beta = 69.94^\circ$, $\gamma = 95.59^\circ$, $V = 666.58$ Å$^3$, space group $P\bar{1}$, with $Z = 2$ ($Z' = 1$). Comparison of the ‘reduced cells’ (obtained by an automated procedure in which the flexibility in choosing a triclinic lattice is controlled by setting certain conventions) for this structure and that in the 1976 report indicates that the two are, in fact, the same. The reduced-cell comparison procedure does not, however, provide an obvious link between this common structure and that of Lamotte et al. (1978).
In our work on polymorph screening for this compound, one crystallization from methanol produced small crystals growing around the side of the flask, whilst larger specimens grew on the floor of the flask. Using one of the small crystals, we found the smaller unit cell. Using a larger crystal, from the other region of the crystallization vessel, a unit cell analogous to that reported by Lamotte et al. (1978) was found but with different orientations of the unit-cell axes. Detailed analysis showed that the Lamotte et al. structure has two independent molecules in the asymmetric unit that differ only in the orientation of the two furan rings, whilst the small cell structure has disordered furan orientations. Fig. 1 shows the ‘disordered’ molecule of the first determination, together with the two molecules in the asymmetric unit of the correct determination.
The interpretation of this result is then quite simple. The arrangement of the molecules in the unit cell of the Lamotte et al. (1978) structure is such that the independent molecules lie alternately in analogous orientations along the $a$ axis; they differ in the furan orientations only. As a result, the $h$-odd X-ray reflections receive contributions mainly from the atoms of the furan groups and are generally weak. In the cell determination from the small crystal, these reflections were not picked up, so the halved cell was obtained, as in the published work. The choice of axes and angles for this cell corresponds to conventions for defining triclinic cells, which are encoded into most diffractometer software packages. The same convention, applied to the doubled, correct cell, chose a cell that has no obvious metric links to the cell of the halved structure. This is the reason for the failure to recognize the true relationship between the two results: they actually relate to only one form.
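One way to make this explicit is the standard structure-factor argument, sketched here under the idealised assumption that the two independent molecules are related exactly by the pseudo-translation $\mathbf{a}/2$ except for their furan atoms:
$$F_{hkl} = \sum_{j \in \text{common}} f_j\, e^{2\pi i (h x_j + k y_j + l z_j)} \left(1 + e^{\pi i h}\right) + \sum_{j \in \text{furan}} f_j\, e^{2\pi i (h x_j + k y_j + l z_j)},$$
since an atom repeated at $(x_j + \tfrac{1}{2}, y_j, z_j)$ contributes an extra phase factor $e^{\pi i h} = (-1)^h$. The first sum vanishes for odd $h$, so the $h$-odd reflections carry contributions from the furan atoms alone and are systematically weak.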
This interpretation is confirmed by computing the powder patterns using the published coordinates of Lamotte et al. (1978) and Shin & Jeon (1983). In the data for the Shin & Jeon structure, stored in the CSD, the coordinates of the minor fraction of the disorder are not included, as noted in the experimental comments. Note, however, that the 0.55/0.45 disorder split should be 0.50/0.50 if our interpretation is accepted. The patterns obtained are given in Fig. 2. The similarities are considerable but there are some differences. As can be seen, the missing fractional atoms affect a number of the intensities somewhat, which would give a low comparison figure in an automated scan. Subsequent inclusion of these atoms does improve the match considerably. This point is worthy of note for any use of CSD data where there is disorder.
3. Finasteride
Wawrzycka et al. (1999) reported data on four forms of finasteride. Two forms, 1 and 2 (CSD refcode WOLXOK), were described as pure compound polymorphs and two, forms 1a (CSD refcode WOLXEA) and 1b (CSD refcode WOLXIE), as pseudopolymorphs.
Form 1 has cell dimensions $a = 6.451$, $b = 12.741$, $c = 25.979$ Å, $\alpha = \beta = \gamma = 90^\circ$, $V = 2135.2$ Å$^3$, space group $P2_12_12_1$, with $Z = 4$ ($Z' = 1$). Full characterization was made by single-crystal structure determination, and the polymorph was confirmed to be a pure form. Form 2 was described as monoclinic, having cell dimensions $a = 10.236$, $b = 7.948$, $c = 13.896$ Å, $\beta = 95.84^\circ$. The space group was not reported, nor was a full structure determination carried out. Wenslow et al. (2000) later reported the structure of a ‘new’ form of the pure compound (form II; CSD refcode WOLXOK03), with monoclinic cell dimensions $a = 16.387$, $b = 7.958$, $c = 18.115$ Å, $\beta = 107.25^\circ$, $V = 2256$ Å$^3$, space group $P2_1$, with $Z = 8$ ($Z' = 2$). In this case, an immediate link is seen with the earlier structure, in that the $b$-axis dimensions are analogous. A detailed study shows that this unit cell corresponds to a doubling of the cell of the monoclinic form 2 of Wawrzycka et al. (1999), with a redefinition of axes. The transformation matrix from the Wawrzycka et al. form 2 cell to the Wenslow et al. cell is $(101/0\bar{1}0/10\bar{1})$, leading to cell dimensions of $a = 16.399$, $b = 7.948$, $c = 18.078$ Å, $\beta = 107.33^\circ$. These compare well with the cell for the Wenslow et al. form II. We are not able to provide a complete confirmation of this duplication, nor compare computed powder patterns, since the coordinates are not available for the earlier structure and we have not repeated the experimental work in this case. However, the Wenslow et al. form II structure shows strong pseudo-$B$-centring symmetry, which is, again, an obvious source of systematically weak reflections. We are very confident that the Wenslow et al. form II is the same as the claimed Wawrzycka et al. form 2 and is a genuine second polymorphic form.
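The quoted cell transformation is easy to verify numerically. The sketch below (our own illustration using numpy; the helper functions are not from any crystallographic package) builds the direct-lattice vectors from the form 2 cell parameters, applies the transformation matrix, and recovers the transformed cell:

```python
import numpy as np

def cell_to_vectors(a, b, c, alpha, beta, gamma):
    """Direct-lattice vectors (as rows) from cell parameters (angles in deg),
    using the standard crystallographic construction."""
    al, be, ga = np.radians([alpha, beta, gamma])
    va = np.array([a, 0.0, 0.0])
    vb = np.array([b * np.cos(ga), b * np.sin(ga), 0.0])
    cx = c * np.cos(be)
    cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    vc = np.array([cx, cy, np.sqrt(c**2 - cx**2 - cy**2)])
    return np.vstack([va, vb, vc])

def vectors_to_cell(V):
    """Cell parameters (a, b, c, alpha, beta, gamma) from lattice vectors."""
    a, b, c = np.linalg.norm(V, axis=1)
    ang = lambda u, w: np.degrees(np.arccos(u @ w / (np.linalg.norm(u) * np.linalg.norm(w))))
    return a, b, c, ang(V[1], V[2]), ang(V[0], V[2]), ang(V[0], V[1])

# Wawrzycka et al. form 2 cell (monoclinic, alpha = gamma = 90 deg)
V = cell_to_vectors(10.236, 7.948, 13.896, 90.0, 95.84, 90.0)
# Rows of P give the new axes: a' = a + c, b' = -b, c' = a - c
P = np.array([[1, 0, 1], [0, -1, 0], [1, 0, -1]])
print(vectors_to_cell(P @ V))
# -> (16.40, 7.95, 18.08, 90.0, 107.3, 90.0): the Wenslow et al. form II cell
```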
4. Conclusions
These findings confirm quite clearly the value, in polymorphism studies in particular, of computing idealized PXRD patterns from single-crystal results. Such patterns are (a) independent of the choice of cell orientation and (b) easily compared with any experimental patterns. Whilst the automated approach to the detection of possible polymorphism is a very useful alerting process, any similarity coefficients that incorporate intensity values should be applied as soft constraints. Visual assessment of the patterns by an experimenter, who is likely to exercise a more flexible judgement, will also be advantageous. However, based on our experience in the furosemide study, when full crystal structural data are available we find the Xpac procedure (Gelbrich & Hursthouse, 2005) to give the clearest indication of any similarity or equivalence between structures.
References
Allen, F. H. (2002). *Acta Cryst.* **B58**, 380–388.
Clemente, D. A. & Marzotto, A. (2004). *Acta Cryst.* **B60**, 287–292.
Fronkowiak, M. & Hauptmann, H. (1976). *Am. Abstr. Papers (Winter)*, p. 9.
Gelbrich, T. & Hursthouse, M. B. (2005). *CrystEngComm*, **7**, 324–336.
Hao, X., Parkin, S. & Brock, C. P. (2005). *Acta Cryst.* **B61**, 675–688.
Lamotte, J., Campsteyn, H., Dupont, L. & Vermeire, M. (1978). *Acta Cryst.* **B34**, 1657–1661.
Shin, W. & Jeon, G. S. (1983). *Proc. Coll. Natur. Sci. SNU*, **8**, 45–51.
Streek, J. van de & Motherwell, S. (2005). *Acta Cryst.* **B61**, 504–510.
Wawrzycka, I., Stepniak, K., Matyjaszczyk, S., Koziol, A. E., Lis, T. & Abboud, K. A. (1999). *J. Mol. Struct.* **474**, 157–166.
Wenslow, R. M., Baum, M. W., Ball, R. G., McCauley, J. A. & Varsolona, R. J. (2000). *J. Pharm. Sci.* **89**, 1271–1277.
Static studies of absorption and emission spectra of acid yellow 17, an azo dye
Neena Jaggi\textsuperscript{1*}, Kanta Yadav\textsuperscript{1} & Manoj Giri\textsuperscript{2}
\textsuperscript{1}Department of Physics, National Institute of Technology, Kurukshetra 136 119, Haryana, India
\textsuperscript{2}Department of Physics, Haryana College of Technology \& Management, Kaithal 136 027, Haryana, India
\textsuperscript{*}E-mail: firstname.lastname@example.org
Received 21 June 2013; revised 9 June 2014; accepted 28 August 2014
In the present paper, the absorption and emission spectra of acid yellow 17 ($C_{16}H_{10}Cl_2N_4Na_2O_7S_2$), a fluorescent azo dye used in many scientific and industrial applications, have been recorded in water at concentrations between $10^{-3}$ M and $10^{-6}$ M, leading to the determination of the optimum concentration for recording the absorption and emission spectra of the molecule. The absorption spectrum, recorded in the spectral region 200-600 nm, shows three peaks at 224, 254 and 400 nm, which have been assigned to $(\pi^{*}\leftarrow\pi)\ ^1L_a\leftarrow{}^1A$ (primary), $(\pi^{*}\leftarrow\pi)\ ^1L_b\leftarrow{}^1A$ (secondary) and $(\pi^{*}\leftarrow n)\ ^1W\leftarrow{}^1A$ transitions, respectively. Emission spectra of the compound show four peaks at 295, 306, 412 and 437 nm corresponding to the absorption peaks at 224, 254 and 400 nm. The corresponding Stokes shifts have also been determined.
Keywords: Dyes, Absorption, Emission, Oscillator strength, $\pi^{*}\leftarrow\pi$ \& $\pi^{*}\leftarrow n$ transitions, Stokes shift
1 Introduction
Dyes are organic compounds which absorb in the ultraviolet, visible and near infrared regions. Dyes\textsuperscript{1}, in general, constitute a very important class of fluorescent materials as they have, besides their well known industrial use in the colouration of textiles, plastics and cosmetics, many scientific and technological applications such as laser dyes, photonics, biological studies and nonlinear optical devices\textsuperscript{2-10}. Azo dyes are a distinct class of dyes characterized by the presence of one or more azo ($-\text{N}=\text{N}-$) groups\textsuperscript{11}. The possibility of connecting an almost unlimited number of different molecules by way of the azo bridges is the reason behind the large number of representatives of this group\textsuperscript{12}. These are the most important commercial colorants because of their wide colour range, good fastness properties and colour density, which is better than that of any other class of dyes. Azo compounds are also known for their medicinal applications and are well recognized for their use as antidiabetic, antibacterial and antitumor agents\textsuperscript{13-16}. In light of the diverse applications of azo dyes, it is worthwhile to study such dyes and their derivatives in order to unfold more of the potential of these compounds. Steady state fluorescence analysis of azo dyes is required because of their potential applications as infrared laser dyes and as photosensitive species in photographic systems. In view of the importance of these dyes, especially acid yellow 17, systematic spectroscopic studies are justified\textsuperscript{17-22}. In the present study, the assignments of the transitions involved in the absorption peaks of this compound have been made and the corresponding molar extinction coefficients and oscillator strengths have been calculated. By analyzing the fluorescence spectra, the Stokes shifts of the molecule have been determined. A contradiction of the mirror image rule is observed, and a quenching effect has also been observed in the AY 17 molecule. The molecular structure of this compound is shown in Fig. 1. It is an azo dye, soluble in water to give an intense yellow colour. It is used to colour all kinds of natural fibres such as wool, cotton and silk, as well as synthetics such as polyester, acrylic and rayon. It is also applied in paints, inks, plastics and leather.
2 Experimental Details
Analytical reagent grade acid yellow 17 was obtained from M/S Sigma Aldrich Chemical Company, Inc., USA and used without further purification. Its absorption spectra in double distilled water at concentrations between $10^{-3}$ and $10^{-6}$ M were recorded on a CAMSPEC-M550 UV-Visible spectrophotometer using a quartz cell of path length 10 mm. Figure 2 shows the absorption spectrum of the compound at concentration $10^{-5}$ M, recorded in the spectral region 200-600 nm. In this region, three absorption bands at 224, 254 and 400 nm are observed.
The wavelength of maximum absorbance is a characteristic value, designated $\lambda_{\text{max}}$. Different molecules have very different absorption maxima and absorbances. From the analysis of the absorption spectrum of AY 17, the compound shows maximum absorbance at 400 nm ($\lambda_{\text{max}}$). This value is confirmed by comparing the curve in Fig. 2 with the reported curve, which also shows $\lambda_{\text{max}}$ at the same wavelength. Excitation and emission spectra of the compound in distilled water for all the above mentioned concentrations were recorded on a Shimadzu photoluminescence spectrophotometer with a xenon flash lamp. The excitation and emission slit widths were kept at 10 nm during the experiment so as to obtain optimum intensity in the excitation and emission spectra.
3 Analysis and Discussion
Although absorption spectra of the compound were recorded at four concentrations between $10^{-3}$ and $10^{-6}$ M in the range 200-600 nm, only those at two concentrations, i.e. $10^{-4}$ and $10^{-5}$ M, were found to be regular in form and with enough absorbance or optical density (OD). The absorption spectra at both these concentrations showed three absorption peaks at about the same wavelengths, viz 224, 254 and 400 nm (Fig. 2). In ultraviolet and visible spectroscopy, the transitions which occur in absorption are transitions between electronic energy levels. The position and intensity are the main features of an absorption peak. The position relates to the wavelength of radiation whose energy is equal to that required for an electronic transition. The intensity of absorption mainly depends on two factors: (i) the probability of interaction between the radiation and the electronic system to raise the molecule from the ground level to an excited level and (ii) the polarity of the excited state. The probability of an electronic transition is proportional to the square of the transition moment. This transition moment, also known as the transition dipole moment, is proportional to the change in the electronic charge distribution occurring during excitation. An intense absorption peak occurs when a transition is accompanied by a large change in the transition moment; low probability transitions are forbidden transitions. In the present study, $\lambda_{\text{max}}$ is observed at 400 nm. A plot of optical density for the observed bands against wavenumber $\nu$ (cm$^{-1}$) for the concentration $10^{-5}$ M is shown in Fig. 3. From this plot, the values of the maximum molar extinction coefficient ($\varepsilon_{\text{max}}$) for the absorption bands I, II and III at wavenumbers $4.46 \times 10^4$, $3.93 \times 10^4$ and $2.50 \times 10^4$ cm$^{-1}$ have been calculated as $67.0 \times 10^3$, $34.0 \times 10^3$ and $62.0 \times 10^3$ M$^{-1}$ cm$^{-1}$, respectively. The oscillator strengths, $f = 4.315 \times 10^{-9} \int \varepsilon \, d\nu$, for these bands have also been calculated as $22.4 \times 10^{-2}$, $19.8 \times 10^{-2}$ and $47.4 \times 10^{-2}$ (dimensionless), respectively. Acid yellow 17 consists of a chloro-substituted sodium salt of benzene sulphonic acid which is connected
with pyrazole linked with sodium benzene sulphonic acid through an azo group (Fig. 1). A comparison of the absorption bands of the moieties involved in this molecule is useful in deciding the transitions involved in the observed absorption peaks of the compound. A $\pi^* \leftarrow n$ transition has been observed at 387 nm in azomethane\textsuperscript{25}. Due to the substitution of the azo group with pyrazole, this transition is expected to shift to a higher wavelength, into the visible region, in the present molecule because of the resonance effect. Therefore, the $-\text{N}=\text{N}-$ group gives the molecule the $(\pi^* \leftarrow n) \ ^1\text{W} \leftarrow {}^1\text{A}$ transition involved in the absorption peak observed at 400 nm. The bands observed at 224 and 254 nm for acid yellow 17 may be correlated to transitions of the benzene ring associated with the additional groups, because the benzene ring\textsuperscript{25} shows two absorption transitions in this region, one at 204 nm (primary $(\pi^* \leftarrow \pi) \ ^1\text{L}_a \leftarrow {}^1\text{A}$) and another, weak one at 254 nm (secondary $(\pi^* \leftarrow \pi) \ ^1\text{L}_b \leftarrow {}^1\text{A}$). So, the bands observed in the present molecule at 224 nm (having high absorbance) and 254 nm (having less absorbance) may be correlated to the primary and secondary ($\pi^* \leftarrow \pi$) transitions of the phenyl ring. The absorption spectrum of pyrazole shows absorption at 210 nm, which red shifts on substitution; this band may be masked by the two phenyl ring transitions. Emission spectra of the compound under study were recorded by choosing different excitation wavelengths ($\lambda_{\text{ex}}$) of the source. An excitation spectrum records the emission intensity at a single wavelength ($\lambda_{\text{em}}$) as a function of the excitation wavelength; in other words, it gives the contribution to the observed emission at a given wavelength from the different excitation wavelengths to which the sample is exposed. The excitation spectrum of acid yellow 17 shows two excitation peaks at 250 and 360 nm at concentration $10^{-4}$ M in water with $\lambda_{\text{em}}=400$ nm, as shown in Fig. 4. The purpose of the excitation spectrum is to identify suitable excitation wavelengths with which to excite the sample to obtain maximum emission. The same peaks are observed at the other concentrations.
The emission spectra show two nearby emission peaks, clearly at 412 nm and 437 nm, at concentrations $10^{-4}$ M and $10^{-5}$ M when the dye is excited at $\lambda_{\text{ex}} = 360$ nm in the desired wavelength regions (Figs 5 and 6). The intensity of the emission peaks increases at concentration $10^{-4}$ M as compared to $10^{-5}$ M. These observed emission peaks are related to the broad absorption band observed at 400 nm, giving Stokes shifts of 12 and 37 nm, respectively. This observed pattern of absorption and emission is in contradiction with the mirror image rule.
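As a worked illustration of the quantities used in this section (a sketch over a synthetic Gaussian band, since the digitized spectrum itself is not reproduced here), the Beer-Lambert conversion, the oscillator strength integral and the Stokes shifts can be computed as follows.

```python
# Illustrative sketch (synthetic band, not the measured data) of the quantities
# above: Beer-Lambert OD -> epsilon, oscillator strength
# f = 4.315e-9 * integral(epsilon dnu), and Stokes shifts of the emission peaks.
import numpy as np

conc = 1e-5   # dye concentration, M (as used for Fig. 3)
path = 1.0    # cell path length, cm (10 mm quartz cell)

# Hypothetical Gaussian stand-in for band III (centre 2.50e4 cm^-1, i.e. 400 nm)
nu = np.linspace(2.2e4, 2.8e4, 500)                     # wavenumber, cm^-1
od = 0.62 * np.exp(-0.5 * ((nu - 2.50e4) / 7.0e2) ** 2)

eps = od / (conc * path)           # Beer-Lambert: eps = A/(c*l), in M^-1 cm^-1
f = 4.315e-9 * np.trapz(eps, nu)   # oscillator strength, dimensionless
print(f"eps_max = {eps.max():.3g} M^-1 cm^-1, f = {f:.2f}")
# -> eps_max ~ 6.2e4 and f ~ 0.47, of the order of the band III values above

# Stokes shifts of the 412 and 437 nm emission peaks relative to the 400 nm band
for lam_em in (412.0, 437.0):
    print(f"Stokes shift: {lam_em - 400.0:.0f} nm")
```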
4 Conclusions
The absorption spectrum of the compound recorded at the concentration $10^{-5}$ M shows three absorption bands at 224, 254 and 400 nm in the spectral region 200-600 nm. These peaks have been assigned to $\pi^* \leftarrow \pi$ (primary) $^1L_a \leftarrow {}^1A$, $\pi^* \leftarrow \pi$ (secondary) $^1L_b \leftarrow {}^1A$ and $(\pi^* \leftarrow n) \ ^1W \leftarrow {}^1A$ transitions, respectively. The extinction coefficients and oscillator strengths of these bands have also been determined. The emission spectra of the compound show four emission peaks at 295, 306, 412 and 437 nm corresponding to the absorption peaks at 224, 254 and 400 nm. Their respective Stokes shifts have been calculated. It is observed that $10^{-4}$ M is the optimum concentration for the spectroscopic study of the compound. The mirror image rule is not verified for absorption band III and its related emission bands at 412 and 437 nm.
References
1. Griffiths J, *J Soc Dyers Colourists*, 104 (1988) 416.
2. Drexhage K H, *Dye Lasers* (ed) Schafer F P (Berlin, Springer-Verlag) (1973) 144.
3. Chiara A Bertolino, Giuseppe Caputo, Claudia Barolo *et al.*, *J Fluorescence*, 16 (2006) 221.
4. Marc Sutter, Sabrina Oliveira, Niek N Sanders *et al.*, *J Fluorescence*, 17 (2007) 181.
5. Jaiswal R M P, Nafa Singh, Jaggi Neena & Giri Manoj, *Indian J Pure & Appl Phys*, 40 (2002) 89.
6. Sanghi S, Dodain N, Meenakshi & Agarwal A, *Indian J Pure & Appl Phys*, 42 (2004) 89.
7. Chandrasekhar K, Naik L R, Suresh Kumar H M & Math N N, *Indian J Pure & Appl Phys*, 44 (2006) 292.
8. Bhat N V, Nate M M, Bhat R M & Bhatt B C, *Indian J Pure & Appl Phys*, 45 (2007) 545.
9. Aziz M S & El-Mallah H M, *Indian J Pure & Appl Phys*, 47 (2009) 530.
10. Melavanki R M, Patil N R, Patil H D, Kusanur R A & Kadadevaramath J S, *Indian J Pure & Appl Phys*, 49 (2011) 748.
11. Venkatraman K, *The Chemistry of Synthetic Dyes* (Academic Press Inc NY) 1 (1952) 409.
12. Henry Gilman (Ed.) *Organic Chemistry*, Vol III (New York: John Wiley & Sons) Ch 4 (1953).
13. Garg H G & Praksh C, *J Med Chem*, 15(1972) 435.
14. Khalid A, Arshad M & Crowley D E, *Appl Microbiol Biotech*, 78 (2008) 361-69.
15. Pagga U & Brown D, *Chemosphere*, 15 (1986) 479.
16. Ajani Olayinka O, Akinremi Oluwabunmi E & Ajani Alice O, *Phys Rev & R Int*, 3 (2013) 28.
17. Barry Chiswell & Kelvin R O Halloran, *Analytica Chimica Acta*, 249 (1991) 519.
18. Wang Qian, Feng Yongjun & Feng Junting *et al.*, *J of Solid State Chemistry*, 184 (2011) 1551.
19. Patel Himanshu & Vashi R T, *The Canadian J of Chemical Eng*, 90 (2012) 180.
20. Rui-Zhou Zhang, Xiao-Hong Li & Xian-Zhou Zhang, *Indian J Pure & Appl Phys*, 51 (2013) 399.
21. Melavanki R M & Patil N R *et al.*, *Indian J Pure & Appl Phys*, 51 (2013) 499.
22. Jaggi Neena, Giri Manoj & Yadav Kanta, *Indian J Pure & Appl Phys*, 51 (2013) 833.
23. Aldrich *Handbook of Fine Chemicals and Laboratory Equipment* (USA, Aldrich Chemical Co. Inc) (2003-04) 32.
24. Silverstein R M & Bassler G C, *Spectrometric Identification of Organic Compounds*, 2nd Ed (New York: John Wiley) Ch 5 (1968).
25. Jaffe H H & Orchin M, *Theory and Applications of Ultraviolet Spectroscopy* (New York, John Wiley) Ch 9, 12 & 14 (1962).
26. Lakowicz J R, *Principles of Fluorescence Spectroscopy* (New York, Springer) Ch 1 (2006).
1. CALL TO ORDER
Chair Duckett called the meeting to order at 1:33 p.m.
2. APPROVE THE MINUTES FROM THE MAY 24, 2018 POLICY COMMITTEE MEETING
Councilor Sharer moved to approve the minutes from the May 24, 2018 Policy Committee meeting. Mr. Hathaway seconded the motion. The motion was approved unanimously.
3. PRESENTATION – RED APPLE TRANSIT OPERATIONAL ANALYSIS STUDY
| Subject: | Presentation – Red Apple Transit Operational Analysis Study |
|----------|-------------------------------------------------------------|
| Date: | June 28, 2018 |
Mr. Ken Hosen, Vice President of KFH Group, will present on the Operational Analysis study for the Red Apple Transit hub relocation. This study began in February 2018 and is now complete.
In 2017, the FMPO approved under Section 4.4 of the MPO’s current UPWP the allocation of up to $30,000 of FTA 5303 funds for this project (Specifically from the MPO’s FFY2017 carryover funds, which must be spent by September 30, 2018). It appears that this project will require less than the $30,000, and Red Apple has requested that the balance be used for an additional study recommended by the consultant, a matter that can be considered in a future amendment to the UPWP.
Mr. Andrew Montoya, Transit Administrator for the City of Farmington and local manager of Red Apple Transit, managed the project and provided monthly reports to the Technical Committee.
**PRESENTATION:** Mr. Hosen presented the final report on the Operational Analysis for the Red Apple Transit Hub. Below is a summary of Mr. Hosen’s presentation:
**Project Goals & Objectives**
Review the two proposed transit hub center sites:
- Both on Orchard & Animas (one on the northwest corner (Site A) and the other on the northeast corner (Site B))
- Analyze each location, size, ease of bus and pedestrian access
- Determine facility size and interior/exterior elements
- Allow connecting services to transfer (e.g.: North Central Regional Transit District (NCRTD) and Navajo Transit)
Conduct outreach efforts to gather input from the public and stakeholders
- Business focus group
- Rider focus group
- Speed of service was biggest issue (can be easily remedied) along with safety and security of hub location
Revise routes to properly access the new transit hub
- All buses arrive and leave at same time; allow five minutes for transfers
**Review of Transfer Facility: Site Selection**
1. At or adjacent to a major destination(s)
2. Excellent access for buses (avoid need for left turns; good clearances)
3. Safe and inviting location (including some level of security)
4. Accessible/safe pathways for pedestrians
5. Adequate space for future expansion
6. Centrally located to each route
7. Public/private partnership potential (possible leasing space waiting for growth to occur)
**Site Selection – Land/Site Costs**
- Both sites are in the downtown area
- Size of properties: Site A is larger and the City of Farmington owns part of this property, but demolition and reconstruction will cost more overall ($1,610,550) for Site A; Site B is the lot behind the Wells Fargo drive-in bank ($1,109,220).
Facility
- Public functions, office functions, additional uses:
- Spare office for expansion;
- Potential to lease space;
- Space for other bus systems (NCRTD, Navajo Transit, Roadrunner);
- Bus bays (spare bay for expansion)
- Analysis shows less space needed (7,315 sq. ft. compared to previous study, which showed 9,000 sq. ft.) even with the spare offices and room for expansion.
Facility Costs
- Estimated construction costs: $1,280,125 vs. previous estimate of $1,703,100
- Total cost estimate for facility and land is $423,650.
System Redesign
- The existing service has not had a full route review for quite some time;
- Loop routes require excessive travel time (up to two hours/round trip);
- Excessive timing points, which slow down service;
- Bus stops should be (for the most part) one-quarter mile apart;
- Slow speed – transit systems similar to Red Apple typically average 18 mph; Red Apple buses average 12 mph.
Route Changes
- Changes to yellow and red routes to ensure all routes meet at the transit hub;
- Consider rebranding to, maybe, Red Apple Express: implies faster service and would avoid the acronym “RAT”;
- Operating costs associated with service will not change.
Summary
- A facility with room to expand is recommended;
- As the crossroads of the Four Corners, have the facility open to Navajo Transit, NCRTD, and service from the Durango area;
- Need a full route review (needed every five to ten years) to:
- Provide faster and more direct service (avoid loop routes);
- Provide timed connections;
- Incur no additional operating costs.
Mayor Duckett asked if growth in the public transit sector is seen in other communities. Mr. Hosen replied that it depends on the community. For instance, when the NCRTD routes were changed so people could get to and from work, ridership skyrocketed. Cities like Flagstaff and Show Low are filling their buses. Many millennials do not want to bother with a vehicle so are using public transit and riding bicycles to get around. With the increase in Uber, Lyft and other ride-sharing companies, transit use in larger cities has dropped, but in the smaller communities ridership has increased.
Mayor Duckett also asked about a “call for service” system used in Arlington, Texas. Mr. Hosen said that this system does not have any fixed routes, and he did not believe it was adequate for the general public. These types of routes are mainly for the elderly and the disabled.
4. FTA REQUIREMENT FOR THE MPO TO ANNUALLY ADOPT TRANSIT PERFORMANCE TARGETS FOR RED APPLE AND POLICY COMMITTEE RESOLUTION 2018-2 TO ADOPT THOSE PERFORMANCE TARGETS
Subject: FTA Requirement for the MPO to Annually Adopt Transit Performance Targets for Red Apple and Policy Committee Resolution 2018-2 to Adopt those Performance Targets
Prepared by: Mary L Holton, AICP, MPO Officer
Date: June 28, 2018
BACKGROUND
- In January 2017, the Federal Transit Administration and David Harris/NMDOT informed FMPO of requirements of the Transit Asset Management (TAM) Final Rule and the Metropolitan and Statewide and Nonmetropolitan Transportation Planning Final Rule published in 2016.
- The TAM Final Rule required transit providers to set performance targets for state of good repair by January 1, 2017. The Planning Rule required each MPO to establish targets no later than 180 days (June 30, 2017) after the date on which the relevant state or provider of public transportation established its performance targets. The FTA requires MPOs to adopt Performance Targets annually.
- The FMPO Policy Committee adopted the original TAM Performance Measures on June 14, 2017.
- The Technical Committee will consider recommending approval of proposed PC Resolution 2018-2 on June 13, 2018.
- Andrew Montoya/Red Apple advises that there were no changes to the targets from last year. You are referred to him regarding Red Apple’s current Transit Asset Management (TAM) Plan.
- The Technical Committee recommended approval on June 13, 2018.
ACTION ITEM
- Hold a public hearing on proposed PC Resolution 2018-2.
- The Technical Committee considered this item at their meeting, and voted to forward a recommendation of approval to the Policy Committee for the targets and PC Resolution 2018-2.
- The adoption of the Transit Performance Targets is due to NMDOT on June 30, 2018.
APPLICABLE CITATIONS
- 49 CFR Parts 625 and 630.
DISCUSSION: Ms. Holton reported that Red Apple Transit’s asset plan was prepared and adopted by Red Apple last year. The FHWA and FTA require that the MPO adopt Red Apple Transit’s plan performance measurements and targets. The MPO took this same action last year at this time.
Ms. Holton referred to the Policy Committee Resolution 2018-2 and Exhibit A on Pages 4 and 5 of the Agenda. The targets shown on Exhibit A are the same TAM targets that were adopted by the Policy Committee last year. MPO Staff worked closely with Andrew Montoya, the Transit Manager, to develop the Resolution and the targets. Staff recommends approval of Policy Committee Resolution 2018-2.
Mayor Duckett opened the public hearing. There were no comments from the audience. Ms. Holton also stated that no comments were received by Staff in the mail or via e-mail. Mayor Duckett closed the public hearing.
**ACTION:** Councilor Sharer moved to adopt Policy Committee Resolution 2018-2 and the Transit Performance Targets. Mr. Hathaway seconded the motion. The motion was passed unanimously.
5. **FFY2019-2020 PROPOSED UNIFIED PLANNING WORK PROGRAM (UPWP)**
| Subject: | FFY2019-2020 Proposed Unified Planning Work Program (UPWP) |
| Prepared by: | Mary Holton, AICP, MPO Officer |
| Date: | June 28, 2018 |
**BACKGROUND**
- The Unified Planning Work Program (UPWP) is the MPO’s work plan for two federal fiscal years. The UPWP pairs the MPO’s required work tasks/products with the MPO’s available funding.
- The FFY2019-2020 UPWP will cover planning activities and work products to be completed from October 1, 2018 to September 30, 2020.
- Based on the Planning Procedures Manual (PPM), the MPO is required to submit the adopted FFY2019-2020 UPWP to NMDOT before July 1, 2018 (NMDOT advised that they had no comments on the draft prior to their June 1 deadline).
- Both the Committees reviewed the proposed FFY2019-2020 UPWP during their May 2018 meetings.
- A 30-day public comment period was noticed from April 22, 2018 to May 21, 2018, which was extended to June 26, 2018.
- On page 9 of the document, there are five (5) major work program tasks listed as headings. These headings are pretty much standardized amongst MPOs. Subtasks are listed below. You should be aware that the same numbering system is utilized in the MPO’s invoicing system and in our financial reports, including the Annual Performance & Expenditure Report (APER), which we submit at the end of the FFY.
- The Technical Committee voted on June 13 to forward a recommendation of approval of the proposed UPWP and PC Resolution 2018-3 to the Policy Committee.
**CURRENT WORK**
- Annual activities will include administering the MPO’s programs, TIP development and management, development of performance measures, GIS activities, Safe Routes to School activities, transit data collection and mapping.
Major activities will include the preparation and completion of the 2045 MTP Update, completion of the Bike & Ped Update, land-use and transportation scenario planning activities, and travel demand modeling updates.
Per NMDOT direction, staff projects that PL funds of $228,637 and FTA 5303 funds of $72,856 (both including local matches) for each of the two (2) federal fiscal years will be available. As these amounts are currently placeholders, the exact funding amounts – once known – may prompt the need to amend the UPWP.
The proposed UPWP anticipates the transition from the City of Farmington to the NWNMCOG for the management of the MPO, expected changes in personnel, as well as changes currently proposed in the JPA to the local matches.
**ACTION ITEM**
- MPO Staff recommends approval of both the proposed FFY2019-2020 UPWP and PC Resolution 2018-3.
**DISCUSSION:** Ms. Holton stated that Policy Committee Resolution 2018-3 to consider approval of the proposed FFY2019-2020 Unified Planning Work Program has been reviewed during the last several Policy Committee meetings.
Ms. Holton noted several recent revisions to the proposed UPWP:
- MPO Officer is noted as “TBD”: This proposed UPWP will not take effect until October 1, at which time the NWNMCOG is expected to be staffing the MPO and will be hiring the MPO Officer;
- NMDOT reviewed the draft and had no additional comments.
The Technical Committee recommended approval at their meeting on June 13, 2018; Staff also recommends approval.
Mayor Duckett opened the public hearing on proposed Policy Committee 2018-3 and the FFY2019-2020 UPWP. He noted that no mailed or e-mailed comments were received by Staff on this item. There were no public comments made by the audience. Mayor Duckett closed the public hearing.
**ACTION:** Mr. Hathaway moved to approve Policy Committee Resolution 2018-3 and the FFY2019-2020 UPWP with the changes noted. Councilor Sharer seconded the motion. The motion was passed unanimously.
**6. REVIEW & CONSIDER APPROVAL OF THE JOINT POWERS AGREEMENT (JPA) AND COMMITTEE BYLAWS & OPERATING PROCEDURES PROPOSALS**
Subject: Review and approval of the Joint Powers Agreement (JPA) and Committee Bylaws and Operating Procedures proposals, which have been revised to add the Town of Kirtland to the MPO
BACKGROUND
- The cities of Aztec, Bloomfield, and Farmington, and San Juan County formed and have participated in the Metropolitan Planning Organization through the Joint Powers Agreement (JPA) since 2003.
- As discussed previously, MPO Staff have been coordinating the addition of the Town of Kirtland to the MPO since last year.
- The Kirtland Board of Trustees voted on December 12, 2017, to join the MPO.
- Proposed revisions to the JPA and Committee Bylaws documents to include Kirtland were prepared starting in July 2017.
- The proposed documents were finalized by the Policy Committee in May 2018 after considering them at the following meetings: In 2017: Aug; Sept; and, Oct. In 2018: Feb and May.
- Note that many of the proposed revisions in both documents, including changes to the local match funding formula and the increase in representation on the MPO committees, were included in the proposals reviewed by the Policy Committee since August 2017.
- NMDOT Staff have continually reviewed the proposed documents and have provided MPO Staff with input.
CURRENT WORK
- The Town of Kirtland is being invited to attend all MPO Meetings, and is included in all MPO emails.
- However, a Designated Kirtland representative cannot be seated on the MPO committees until the adoption/approval process is complete, which is currently anticipated in late July or early August.
- The 30-day public review period has been noticed.
ACTION ITEM
- The Tech Committee reviewed the proposed JPA and Committee Bylaws documents at their meeting on June 13, and voted to forward a recommendation of approval to the Policy Committee, along with one (1) spelling correction (“through” not “though” in the fifth line of the first WHEREAS). Staff also notes that additional text was added to the JPA prior to Tech Committee consideration to clarify that both the Fiscal Agent and the MPO Officer must co-sign NMDOT cooperative agreements. The proclamation language to be added to the Bylaws, as directed by the Policy Committee, is located on page 7.
- MPO Staff recommends approval of the documents.
- If approved by the Policy Committee on June 28, consideration of the proposed JPA by the boards, councils, or commissions of the individual member entities will be scheduled in July. Upon completion of the adoption
process, the JPA will be sent to the NM Department of Finance and Administration for State Review/Approval.
**APPLICABLE CITATIONS**
- 23 U.S. Code § 134 - Metropolitan transportation planning
- 23 CFR 450.310 - Metropolitan planning organization designation and redesignation
- 23 CFR 450.314 - Metropolitan planning agreements
- Joint Powers Agreement Act, Sections 11-1-1 et seq., NMSA 1978, as amended.
- NMDOT Planning Procedures Manual, Metropolitan Planning Organizations, Internal Structure, pages 46-48
**DISCUSSION:** Ms. Holton stated that the proposed revisions to the Joint Powers Agreement (JPA) and Committee Bylaws have been discussed by the Policy Committee since July 2017. At the May 2018 meeting, the Policy Committee was asked to finalize both documents. These documents were presented to the Technical Committee on June 13, and the Committee recommended approval of the proposed JPA and Committee Bylaws.
Ms. Holton explained that following the MPO Quarterly meeting, several changes were recommended to the JPA:
- Page 12 of the Agenda: Section Three: Fiscal Agent
- “…all funds necessary to operate the MPO, including co-signing the MPO’s cooperative agreements with the NMDOT”;
- Page 18 of the Agenda: E1e added: Co-signing the MPO’s cooperative agreements with the NMDOT;
And included in the Committee Bylaws:
- Page 34 of the Agenda: top paragraph: “The Policy Committee reserves the right to issue a proclamation upon a majority vote of that committee and after the proposal has been placed on the agenda as an action item in accordance with the New Mexico Open Meetings Act.”
Mayor Duckett opened the public hearing noting that no mailed in or e-mailed comments were received by Staff prior to the meeting. There were no comments from the audience. Mayor Duckett closed the public hearing.
**ACTION:** Councilor Sharer moved to approve the Joint Powers Agreement (JPA) and Committee Bylaws. Mr. Hathaway seconded the motion. The motion to approve was unanimous.
**7. REPORTS FROM NMDOT**
**District 5 – Lawrence Lopez**
Mr. Lopez reported that the final two phases of US 64 were going to final production this week with letting expected on August 24. He introduced Monroe Maestas who is the NMDOT project development engineer for these two projects. Mr. Maestas is also
involved with the Local Government Program. He works at the North Region Design Center and is assisting Brad Fisher with various aspects of the program.
Mr. Lopez stated that NMDOT has elected to extend the deadline for the City of Aztec to achieve the obligations necessary to get the project completed in 2018. NMDOT is working with their Finance Director, Kathy Lamb, and is very hopeful that the project can be completed before the deadline for FY2018 funding.
**Planning Bureau – Shannon Glendenning**
Ms. Glendenning reported that NMDOT will be hosting a project feasibility form training. This will correspond with the opening of the TAP/RTP call for projects starting in FFY2020 and beyond. The training will be in preparation for the first step in the process when an entity applies for funding. Brad Fisher has requested to be involved in the PFF meetings to help review project estimates and scopes to better ensure project success.
The Transportation Alternative Program (TAP) and Recreational Trails Program (RTP) call for projects was issued on June 1, 2018. All applications must be received by NMDOT by November 30.
Ms. Glendenning commented on the Volkswagen Settlement Fund, which gave New Mexico approximately $18,000,000. This settlement was the result of Volkswagen’s violation of clean air emissions standards and their use of defeat devices on their 2.0- and 3.0-liter diesel passenger vehicles. The New Mexico Environment Division is accepting applications for vehicle replacements this year and then for infrastructure funding next year. More information: [www.env.nm.gov/vw-settlement/](http://www.env.nm.gov/vw-settlement/), or call the NMED Air Quality Bureau at (505) 476-4300.
### 8. COMMITTEE MEMBER DISCUSSION ITEMS
| Subject: | Committee Member Discussion Items |
|----------|-----------------------------------|
| Date: | June 28, 2018 |
**DISCUSSION ITEMS**
Mayor Duckett recognized and welcomed the City of Aztec’s Mayor, Victor Snover, to the Policy Committee meeting. He also welcomed and thanked Mr. Maestas for his participation in today’s meeting.
### 9. INFORMATION ITEMS
| Subject: | Information Items |
|----------|-------------------|
| Prepared by: | Mary L Holton, AICP, MPO Officer |
| Date: | June 28, 2018 |
a. **Update on the FMPO Bike/Ped Plan.** The Bike & Ped Plan Update project was kicked off on April 11. The Update is expected to be completed in January 2019. The interactive Project Website, including several on-line maps and an online survey, is up and live at [https://bikewalkfmpo.com/](https://bikewalkfmpo.com/). The consultant also developed a flyer for distribution. Links have been set up on the MPO’s website and Facebook page, member entities’ websites and social media sites, and ads are to be published in area periodicals. Staff attended two (2) Public Outreach events in May (RiverFest) and in June (Downtown Art Walk). One invoice ($7,559.75) to the consultant (Russell Planning & Engineering) was processed in June.
b. **MPO Quarterly Meeting.** The Farmington MPO hosted the MPO Quarterly on June 4 and 5, 2018. A copy of the agenda is attached.
c. **New MPO Banner for Outdoor Public Events.** MPO Staff purchased a new banner for use during public outreach events. It will be displayed in the MPO office when not being used for events.
d. **FFY2020-21 Transportation Alternatives Program (TAP) & Recreation Trails Program (RTP) Call for Applications.**
Monday June 11, 2018 – Release of Call for Projects/Applications
Wednesday August 01, 2018 – Deadline to Submit Project Feasibility Form Electronically to MPO Staff (5:00PM)
Monday August 13 - Friday August 17, 2018 – Technical Review & Support of submitted PFFs with MPO & NMDOT staff – Meeting will be set up and scheduled by the MPO.
Friday October 26, 2018 – Deadline for Agencies to Submit TAP/RTP Project Application Electronically to MPO Staff (5:00PM)
Friday November 30, 2018 – Deadline for the Farmington MPO to Submit TAP/RTP Project Applications to the NMDOT TAP/RTP Coordinator (5:00PM)
e. **Upcoming Performance Measures.** DRAFT from NMDOT: The USDOT establishes national performance measures via final rules and State DOTs, MPOs, and Providers of Public Transportation establish performance targets based on those rules.
The final rules outline the performance requirements for States and MPOs under the Transportation Performance Management (TPM) program. The NMDOT, MPOs and public transportation providers must jointly agree upon and develop specific written provisions for cooperatively developing and sharing information related to transportation performance data, including:
- Gathering data for national performance measures;
- Performance target setting at state and MPO level;
• Coordination between States and MPOs;
• Reporting on performance at regular intervals; and
• Collecting data for the State asset management plan for the National Highway System (NHS).
State DOTs have one year from the effective date of each final rule to set two- and four-year performance targets (450.206(c)(2)), for 2020 and 2022.
The deadline for MPOs to set performance targets is no later than 180 days after the State DOT or Public Transportation Provider establishes performance targets (450.306(d)(3)).
| Performance Area | CFR | FHWA/FTA | Final Rule Publication Date | Final Rule Effective Date | State DOT Target-Setting Deadline | MPO Target-Setting Deadline* |
|------------------------------------------------------|--------------|----------|-----------------------------|---------------------------|-----------------------------------|-------------------------------|
| Transit Asset Management – TAM Plan | 49 CFR 625 and 630 | FTA | July 26, 2016 | October 1, 2016 | Transit Providers: January 1, 2017 | June 30, 2017 |
| PM1 - Safety Performance Measures | 23 CFR 490 | FHWA | March 15, 2016 | April 14, 2016 | August 31, 2017 | February 27, 2018 |
| Statewide & Metropolitan Planning; Non-Metropolitan Planning | 23 CFR 450 | FHWA & FTA | May 27, 2016 | June 27, 2016 | June 27, 2017 | December 27, 2017 |
| Highway Asset Management Plan | 23 CFR 515 and 667 | FHWA | October 24, 2016 | October 2, 2016; Part 667 effective November 23, 2016 | April 30, 2018 (draft); June 30, 2019 (final) | N/A |
| PM2 - Pavement & Bridge Perf. Measures | 23 CFR 490 | FHWA | January 18, 2017 | May 20, 2017 | May 20, 2018 | November 20, 2018 |
| PM3 - NHS, Freight, & CMAQ Performance Measures | 23 CFR 490 | FHWA | January 18, 2017 | May 20, 2017 | May 20, 2018 | November 20, 2018 |
*MPO must set targets within 180 days of NMDOT setting its targets; these dates represent the latest possible date for establishment.
Additional information related to the performance measures final rule can be found at https://www.fhwa.dot.gov/tpm/rule.cfm.
f. **MPO Interim Plan.** As many of you are aware, Derrick Garcia’s last day with the MPO was June 21. The position will not be filled until after the ISA between the City of Farmington and the NWNMCOG has been executed and the new MPO Officer has started with the MPO.
We wanted you to be aware that during the interim:
• Russell Planning & Engineering will cover, at additional cost to the contract, the Bike & Ped Plan public outreach events that MPO Staff had planned to cover.
• Any MPO GIS needs will be referred to the GIS Division with the City of Farmington.
• MPO website maintenance will be handled by June Markle, with assistance from City of Farmington employees
• MPO Staff Attendance and Coordination of Project Prioritization Subcommittee Meetings, and similar meetings, will be placed on hold until the new MPO Officer is on board.
• Further calibration of CommunityViz is on hold until the new MPO Officer is on board.
The following tasks previously assigned to Derrick have been assigned to Mary Holton until the new MPO Officer is on board:
- TIP Management, including eSTIP maintenance and calls for TIP amendments
- Coordinating the MPO’s Traffic Count Program with NMDOT
- Coordinating Performance Measures with NMDOT
- Coordinating the FFY2020-21 TAP/RTP project submittals to NMDOT, including attending the Project Feasibility Meetings August 13-17 in Santa Fe
- Attending required NMDOT trainings and meetings
- Committee Member Orientation trainings
- Maintaining the Red Apple Transit Database, including inputting boarding and alighting counts and locations provided by Red Apple Transit
**DISCUSSION:** a. Ms. Holton stated that the MPO Bike/Ped Plan Update is ongoing. The consultant is working with the Technical Committee, which serves as the steering committee for the Update. The consultant met with members of the Technical Committee on June 19 to get some ideas about public outreach events, and a brief report from that meeting was provided to the Policy Committee. Ms. Holton also provided a copy of a new flyer focusing more on making walking and biking safer in the region. Several newspaper ads will run in the near future to encourage people to get involved with the process.
The MPO hosted the MPO Quarterly on June 4-5, 2018. A copy of the agenda for that meeting is on Page 48 of the Agenda.
The MPO purchased a new banner for the outdoor public events to help focus in on the bike/ped plan update. Kirtland has been included on the banner in anticipation of their joining the MPO. However, this will not be official until all the entities have approved and signed the Joint Powers Agreement and it has also been reviewed and approved by the DFA. This process is projected to be completed sometime in August.
The TAP/RTP call for application deadlines (shown under 9d above) are internal MPO deadlines, but are necessary for the MPO to stay on track.
A major topic of discussion during the MPO Quarterly was performance measures. Ms. Holton referred to Page 46 of the Agenda, which provided some performance area information. Highlighted in red are PM2 (Pavement & Bridge Performance Measures) and PM3 (NHS, Freight & CMAQ Performance Measures), which will come up for adoption in November. PM1 (Safety Performance Measures) was adopted in January 2018. Specific information for Policy Committee consideration will begin to be provided in September.
Ms. Holton stated that with Derrick Garcia’s resignation and before the NWNMCOG has the MPO Officer on board, the following are the interim plans:
- Russell Planning & Engineering will cover, at additional cost to the contract, the Bike & Ped Plan public outreach events that MPO Staff had planned to cover.
• Any MPO GIS needs will be referred to the GIS Division with the City of Farmington.
• MPO website maintenance will be handled by June Markle, with assistance from City of Farmington employees
• MPO Staff Attendance and Coordination of Project Prioritization Subcommittee Meetings, and similar meetings, will be placed on hold until the new MPO Officer is on board.
• Further calibration of CommunityViz is on hold until the new MPO Officer is on board.
Additionally during the interim, Ms. Holton will be handling:
• TIP Management, including eSTIP maintenance and calls for TIP amendments (TIP Amendment #4 call for projects has been issued)
• Coordinating the MPO’s Traffic Count Program with NMDOT
• Coordinating Performance Measures with NMDOT
• Coordinating the FFY2020-21 TAP/RTP project submittals to NMDOT, including attending the Project Feasibility Meetings August 13-17 in Santa Fe
• Attending required NMDOT trainings and meetings
• Trainings for Committee Member Orientation
• Maintaining the Red Apple Transit Database, including inputting boarding and alighting counts and locations provided by Red Apple Transit
Ms. Holton also commented on separate documents provided to the Policy Committee on the Transportation Performance Management (TPM) in Title 23 of the U.S. Code which is an overview of the performance measures in MAP-21 and the FAST Act. The second sheet shows the progression of the performance measures beginning with the national goals as defined in MAP-21 and then continued under the FAST Act.
10. BUSINESS FROM THE CHAIRMAN, MEMBERS AND STAFF
There was no business from the Chairman, Members and Staff.
11. PUBLIC COMMENT ON ANY ISSUES NOT ON THE AGENDA
There was no public comment on any issues not on the agenda.
12. ADJOURNMENT
Councilor Sharer moved to adjourn the meeting. Mr. Hathaway seconded the motion. The motion was approved unanimously. Chair Duckett adjourned the meeting at 2:21 p.m.
Nate Duckett, Policy Committee Chair
June Markle, Administrative Assistant
Recommended Citation
Miller v. Perry: Further Complications in Determining Diversity Jurisdiction, 30 Wash. & Lee L. Rev. 282 (1973), https://scholarlycommons.law.wlu.edu/wlulr/vol30/iss2/6
MILLER v. PERRY: FURTHER COMPLICATIONS IN DETERMINING DIVERSITY JURISDICTION
In a small number of jurisdictions, statutes provide that only a resident administrator may bring a wrongful death action on behalf of his decedent.\(^1\) Such statutes prevent an out-of-state administrator from bringing a wrongful death action in the state courts without the benefit of a resident ancillary administrator. Since an ancillary administrator would be necessary to sue resident defendants in state courts, federal diversity jurisdiction would be seemingly impossible to obtain.\(^2\) With the exception of statutory interpleader cases,\(^3\) the rule of "complete diversity" has been followed by federal courts since it was first laid down by Chief Justice Marshall in *Strawbridge v. Curtiss*.\(^4\) After citing to the jurisdictional statute,\(^5\) Marshall explained: ". . . where the interest is joint, each of the persons concerned in that interest must be competent to sue, or liable to be sued in those courts."\(^6\) If complete diversity is to obtain, the statute, 28 U.S.C. § 1332,\(^7\) requires that no defendant be of the same citizenship as any of the plaintiffs.
\(^1\)E.g., GA. CODE ANN. § 113-1203 (1959), § 105-1309 (1968), which in certain circumstances may operate to exclude foreign administrators from pursuing wrongful death actions in courts sitting in Georgia; N.C. GEN. STAT. §§ 28-8(2), 28-173 (Repl. vol. 1966); VA. CODE ANN. §§ 26-59, 8-634 (Repl. vol. 1969), construed in Holt v. Middlebrook, 214 F.2d 187 (4th Cir. 1954); W. VA. CODE ANN. §§ 44-5-3, 55-7-6 (1966), construed in Rybolt v. Jarrett, 112 F.2d 642 (4th Cir. 1940).
Kentucky law formerly provided that a foreign representative could not maintain a wrongful death action in Kentucky courts without securing the appointment of an ancillary administrator. Vassill's Adm'r v. Scarsella, 292 Ky. 153, 166 S.W.2d 64 (1942); see Seymour v. Johnson, 235 F.2d 181 (6th Cir. 1956). However, a Kentucky statute now allows a nonresident administrator to qualify in the state under most circumstances. KY. REV. STAT. ANN. § 395.005 (1972). As a consequence, a wrongful death action may now be maintained in a Kentucky court by a nonresident administrator who has qualified in Kentucky, or in a federal district court in Kentucky with diversity of citizenship as the jurisdictional basis. Nonresident administrators who have also qualified in the state of the defendant's residence are not deemed to be residents of that state for diversity purposes. See Mason v. Helms, 97 F. Supp. 312 (E.D.S.C. 1951) (applying a similar South Carolina statute).
Statutes in Idaho and Oregon formerly excluded nonresident administrators from bringing wrongful death actions without securing ancillary administrators. Cf. Elliott v. Day, 218 F. Supp. 90, 93 (D. Ore. 1962).
\(^2\)Since this action is based upon diversity of citizenship, a federal court, in dealing with a right of recovery created by a state, will follow the state law. Guaranty Trust Co. v. York, 326 U.S. 99, 108-109 (1945), Erie R.R. v. Tompkins, 304 U.S. 64, 78 (1938).
\(^3\)28 U.S.C. § 1335 (1970), the statutory interpleader section, requires "two or more adverse claimants of diverse citizenship." This provision has been interpreted as demanding only minimal, as opposed to complete, diversity. State Farm Fire & Cas. Co. v. Tashire, 386 U.S. 523, 531 (1967).
\(^4\)7 U.S. (3 Cranch) 267 (1806).
\(^5\)Judiciary Act of 1789, ch. 20, § 11, 1 Stat. 78. This was the precursor of the modern diversity statute, 28 U.S.C. § 1332 (1970).
\(^6\)7 U.S. (3 Cranch) at 267.
However, in \textit{Miller v. Perry},\textsuperscript{8} the Fourth Circuit held that diversity was present in a case involving North Carolina statutes which appeared to be destructive of diversity jurisdiction. The \textit{Miller} court said that the citizenship of an ancillary administrator is of no consequence in the determination of diversity; it held that the beneficiaries should be looked to in making such a determination.\textsuperscript{9}
\textit{Miller v. Perry} originated with the death, in a North Carolina automobile accident, of a minor Florida citizen, allegedly through the fault of the Perrys, residents of North Carolina. The decedent's father, after qualifying as his administrator in Florida, instituted an action against the Perrys under the North Carolina wrongful death statute\textsuperscript{10} in the United States District Court for the Eastern District of North Carolina. This action was dismissed by the district court on its own motion, the court noting that the father was not qualified to bring the action since he was not a North Carolina citizen, as required by a statute which allows only a North Carolinian to qualify as an administrator of an intestate's estate.\textsuperscript{11}
\textsuperscript{7}28 U.S.C. § 1332(a) (1970) provides:
The district courts shall have original jurisdiction of all civil actions where the matter in controversy exceeds the sum or value of $10,000, exclusive of interests and costs, and is between—
(1) citizens of different States;
(2) citizens of a State, and foreign states or citizens or subjects thereof; and
(3) citizens of different States and in which foreign states or citizens or subjects thereof are additional parties.
\textsuperscript{8}456 F.2d 63 (4th Cir. 1972).
\textsuperscript{9}\textit{Id.} at 67.
\textsuperscript{10}N.C. Gen. Stat. § 28-173 (Repl. vol. 1966). This section requires the wrongful death action to be brought by the administrator or executor of the decedent's estate.
\textsuperscript{11}456 F.2d at 64. N.C. Gen. Stat. § 28-8 (Repl. vol. 1966) provides in part:
The clerk shall not issue letters of administration or letters testamentary to any person who, at the time of appearing to qualify—
(2) Is a non-resident of this state; but a non-resident may qualify as executor.
The residency requirement of § 28-8 extends to plaintiff administrators in wrongful death suits. \textit{See} Monfils v. Hazlewood, 218 N.C. 215, 10 S.E.2d 673 (1940), \textit{cert. denied}, 312 U.S. 684 (1941). It has been held that a local administrator is needed only when suit is filed in a court sitting in North Carolina. General Steel Tank Co. v. Conner, 387 F.2d 372 (5th Cir. 1967). In \textit{Conner}, the Fifth Circuit held that the North Carolina law had no extra-territorial effect; therefore, when a district court in Georgia, following Georgia's conflict rule applied North Carolina's wrongful death statute, the resident administrator requirement [of § 28-8(2)] had no effect. \textit{Id.} at 373.
Still within the statutory period, the decedent's grandfather, a resident of North Carolina, was qualified as ancillary administrator by a North Carolina court. A second action was brought, in the name of the grandfather, in which the father joined. This action was also dismissed for lack of complete diversity between the parties.\textsuperscript{12} Since the ancillary administrator, as the only party vested with the statutory right to sue,\textsuperscript{13} was the real party in interest, the district court held that diversity did not exist.\textsuperscript{14} On appeal, the Fourth Circuit reversed, holding that for diversity purposes the beneficiaries\textsuperscript{15} were the real parties in interest in cases where a particular state law required the appointment of an ancillary administrator.\textsuperscript{16}
The facts of \textit{Miller} presented the Fourth Circuit with two obvious alternatives. It could have granted diversity jurisdiction by striking down N.C. Gen. Stat. § 28-8(2) as unconstitutional on the ground that a state statute exercising dominion over federal jurisdiction violates the Supremacy Clause.\textsuperscript{17} To have chosen this avenue of approach would have required an extension of prior Supreme Court decisions\textsuperscript{18} and would also have put the Fourth Circuit in conflict with another circuit;\textsuperscript{19} however, only a limited number of jurisdictions would be affected by such a precedent.\textsuperscript{20}
\textsuperscript{12}Miller v. Perry, 307 F. Supp. 633 (E.D.N.C. 1969).
\textsuperscript{13}The right to sue for wrongful death is purely a creature of statute; the right did not exist at common law. See Horney v. Meredith Swimming Pool Co., 267 N.C. 521, 148 S.E.2d 554 (1966).
\textsuperscript{14}307 F. Supp. at 637.
\textsuperscript{15}In this factual configuration, the decedent's father was at once the Florida principal administrator and the beneficiary of the estate. Under either Florida or North Carolina law, the parents would share equally, or the surviving parent would take the estate completely, in the absence of a spouse or lineal heirs of the decedent. Fla. Stat. Ann. § 731.23(4) (1964); N.C. Gen. Stat. § 29-15(3) (Repl. vol. 1966). The decedent was nineteen years old and unmarried.
\textsuperscript{16}456 F.2d at 67.
\textsuperscript{17}U.S. Const. art. VI. The portion of article VI, known as the Supremacy Clause, provides:
This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.
\textsuperscript{18}See text accompanying notes 75-80 infra.
\textsuperscript{19}See Seymour v. Johnson, 235 F.2d 181 (6th Cir. 1956); Ockerman v. Wise, 202 F.2d 144 (6th Cir. 1953). In both \textit{Seymour} and \textit{Ockerman}, the Sixth Circuit felt compelled to deny diversity jurisdiction in a wrongful death suit where the appointment of a resident ancillary administrator was required by Kentucky law. The Kentucky legislature has since changed its qualifications for administrators, allowing personal representatives of intestates to be non-residents in certain cases. Ky. Rev. Stat. Ann. § 395.005 (1972).
Alternatively, the court could have affirmed the district court by following existing case law as exemplified by \textit{Mecom v. Fitzsimmons Drilling Co.}\textsuperscript{21} In \textit{Mecom}, the Supreme Court decided that in cases involving executors and administrators, the citizenship of such representatives controls for diversity purposes when they have the power to bring suits or be sued.\textsuperscript{22} To have chosen this second alternative would not have resulted in any conflict with other circuits and would have followed precedent\textsuperscript{23} which developed after \textit{Mecom}.
However, neither alternative was apparently palatable to the \textit{Miller} court; it was unwilling to deal with the constitutional issue, yet it was desirous of reaching an equitable result based on the congressional purpose underlying diversity jurisdiction.\textsuperscript{24} Consequently, the Fourth Circuit decided to allow jurisdiction, distinguishing substantial precedent by obscure reasoning.\textsuperscript{25}
The \textit{Miller} court initiated its analysis by acknowledging that there was a potential constitutional question involved.\textsuperscript{26} The Supreme Court has held state statutes which explicitly limited the availability of wrongful death actions to their own courts\textsuperscript{27} unconstitutional as violative of the Supremacy Clause of the Constitution.\textsuperscript{28} However, the Fourth Circuit contended that the North Carolina statute presented no such problem, in spite of the Supreme Court's \textit{Mecom} holding.\textsuperscript{29} The court reasoned that unless \textit{Mecom}'s rule was "constitutionally required," and an "inflexible, essential ingredient,"\textsuperscript{30} there would be no need to label the North Carolina statute unconstitutional, and it could be given "full force and recognition . . . without attribution . . . of impermissible dominion and control over federal jurisdiction."\textsuperscript{31}
\begin{itemize}
\item \textsuperscript{20}E.g., Georgia, North Carolina, Virginia, and West Virginia. See note 1 \textit{supra}.
\item \textsuperscript{21}284 U.S. 183 (1931).
\item \textsuperscript{22}Id. at 186-87.
\item \textsuperscript{23}See Hot Oil Service, Inc. v. Hall, 366 F.2d 295 (9th Cir. 1966); Seymour v. Johnson, 235 F.2d 181 (6th Cir. 1956); Ockerman v. Wise, 202 F.2d 144 (6th Cir. 1953). See also Grady v. Irvine, 254 F.2d 224 (4th Cir.), \textit{cert. denied}, 358 U.S. 819 (1958).
\item \textsuperscript{24}456 F.2d at 67. The Fourth Circuit stated: "Diversity jurisdiction exists for the protection of the noncitizen who is obliged to sue or to be sued in the state of his adversary." That this was the purpose of the drafters of the Constitution in establishing diversity jurisdiction was first asserted by Chief Justice Marshall in \textit{Bank of the United States v. Deveaux}, 9 U.S. (5 Cranch) 61, 87 (1809). Not all scholars accept this as the underlying purpose of diversity. See Friendly, \textit{The Historic Basis of Diversity Jurisdiction}, 41 HARV. L. REV. 483 (1928).
\item \textsuperscript{25}456 F.2d at 68.
\item \textsuperscript{26}Id. at 64.
\item \textsuperscript{27}See Railway Co. v. Whitton's Adm'r, 80 U.S. (13 Wall.) 270, 286 (1871); cf. First Nat'l Bank v. United Air Lines, Inc., 342 U.S. 396 (1952).
\item \textsuperscript{28}Note 17 \textit{supra}.
\item \textsuperscript{29}456 F.2d at 65.
\item \textsuperscript{30}Id.
\end{itemize}
over federal jurisdiction."\textsuperscript{31} With this declaration of intent, the court proceeded to its decision by limiting the \textit{Mecom} rule, using as its source the Supreme Court's decision in \textit{Kramer v. Caribbean Mills, Inc.}\textsuperscript{32}
The court was quick to point out that it considered \textit{Mecom} to be a case of an "extraordinary" nature;\textsuperscript{33} certainly the facts of the case were complex. An Oklahoma widow was the administratrix of her husband's estate. She commenced a wrongful death action against the defendant, a Louisiana corporation, in an Oklahoma state court. The defendant then removed to federal court, whereupon the plaintiff secured a voluntary dismissal. These same events occurred twice more, each subsequent action being cut short by a voluntary dismissal. Finally, the plaintiff administratrix resigned and upon her request the Oklahoma probate court appointed Mecom, a Louisiana attorney, in her place. He then filed another wrongful death action in the Oklahoma court, and again the defendant removed to federal court; a motion to remand to the state court was denied and a trial on the merits followed. On appeal, the district court's retention of jurisdiction was affirmed by the Tenth Circuit\textsuperscript{34} because the former administratrix was held to be the "real party in interest" and the Louisiana attorney but a "nominal" party.\textsuperscript{35}
The Supreme Court came to the opposite result, finding a relationship of trust on the part of the administrator: "The applicable statutes make the administrator the trustee of an express trust and require the suit to be brought and controlled by him."\textsuperscript{36} Since the administrator had such responsibilities, his citizenship was determinative for jurisdictional purposes. Even though he did not sign his own bond, did not go to Oklahoma to be appointed, and upon appointment named the former administratrix his Oklahoma agent, he was not the nominal party the Tenth Circuit thought him to be.\textsuperscript{37} Since his citizenship was the same as that of the defendant corporation, there was no right to removal, regardless of the motive behind his appointment.\textsuperscript{38}
\textsuperscript{31}Id.
\textsuperscript{32}394 U.S. 823 (1969).
\textsuperscript{33}456 F.2d at 65.
\textsuperscript{34}Mecom v. Fitzsimmons Drilling Co., 47 F.2d 28 (10th Cir. 1931).
\textsuperscript{35}Id. at 30. Since this decision was prior to the adoption of the Federal Rules of Civil Procedure, the notion of real party in interest was not as clearly defined as it is today under Rule 17(a). The Tenth Circuit based its holding on the \textit{cestui que use} concept of parties, and consequently looked to the widow-beneficiary as the \textit{cestui que use}. The Fourth Circuit in \textit{Miller} did much the same without the use of the common law language. For a further discussion of nominal parties generally and in the \textit{Miller} context, see note 66 and text accompanying notes 66-71 \textit{infra}.
\textsuperscript{36}284 U.S. at 186-87.
\textsuperscript{37}Id. at 188.
\textsuperscript{38}Id. at 190.
This factual configuration hardly made \textit{Mecom} an exceptional case; rather, \textit{Mecom} represented a continuance of earlier Supreme Court decisions.\textsuperscript{39} Yet the \textit{Miller} court felt that citizenship of the representative was not "constitutionally or inflexibly the criterion for ultimate determination of diversity,"\textsuperscript{40} as \textit{Mecom} had previously been read.\textsuperscript{41} This would seem to be a correct evaluation of \textit{Mecom}; what made the citizenship of the Louisiana administrator important in that case was his power to bring suit and his responsibility to the beneficiaries. In such a trustee capacity, he was deemed the real party in interest under federal standards.\textsuperscript{42}
However, the \textit{Miller} court's use of \textit{Kramer} to give flexibility to \textit{Mecom} was not appropriate; \textit{Kramer} did not go to the same issues as did \textit{Mecom}. \textit{Kramer} concerned a Panamanian corporation's assignment to a Texas attorney of its rights under a contract made with respondent Caribbean Mills, a Haitian corporation.\textsuperscript{43} Caribbean Mills breached the contract, at which point the Panamanian corporation assigned its interest in the contract to Kramer for the stated consideration of one dollar. Thereafter, Kramer, by separate agreement of the same day, agreed to give to the Panamanian corporation ninety-five percent of any amount recovered by an action on the contract. The Supreme Court saw \textit{Kramer} as concerned with one issue: "whether Kramer was 'improperly or collusively made' a party 'to invoke the jurisdiction' of the District Court, within the meaning of 28 U.S.C. § 1359."\textsuperscript{44} The Court found it
\textsuperscript{39}See Rice v. Houston, 80 U.S. (13 Wall.) 66 (1871); Childress v. Emory, 21 U.S. (8 Wheat.) 642 (1823); Chappedelaine v. Dechenaux, 8 U.S. (4 Cranch) 306 (1808). In \textit{Childress} and \textit{Chappedelaine}, the Supreme Court asserted that the citizenship of the person having the legal right to sue and to represent those having a beneficial interest in recovery, rather than the citizenship of those whom he represents, is controlling for diversity purposes, provided that the representative has actual power to control the suit. Chief Justice Marshall felt the rule of \textit{Childress} and \textit{Chappedelaine} to be "the universally received construction" that "jurisdiction is neither given nor ousted by the relative situation of the parties concerned in interest, but by the relative situation of the parties named on the record." Osborn v. Bank of the United States, 22 U.S. (9 Wheat.) 738, 856 (1824).
\textsuperscript{40}456 F.2d at 65.
\textsuperscript{41}Bush v. Carpenter Brothers, Inc., 447 F.2d 707, 711 (5th Cir. 1971). See also Missouri v. Homesteaders Life Ass'n, 90 F.2d 543, 549 (8th Cir. 1937); Worcester County Trust v. Long, 14 F. Supp. 754, 757 (D. Mass. 1936).
\textsuperscript{42}284 U.S. at 186-87.
\textsuperscript{43}394 U.S. 823 (1969).
\textsuperscript{44}Id. at 825. 28 U.S.C. § 1359 (1970) provides:
A district court shall not have jurisdiction of a civil action in which any party, by assignment or otherwise, has been improperly or collusively made or joined to invoke the jurisdiction of such court.
This section has created, in effect, an exception to the principle that the citizenship of the real party in interest is controlling in diversity determinations. See 3A J. MOORE, FEDERAL PRACTICE ¶ 17.06[2], at 206-07 (2d ed. 1970).
unnecessary to consider whether motive in the creation of diversity in cases required to be brought by an administrator was improper or collusive under § 1359.\textsuperscript{45} The Court also distinguished the positions of administrators from assignees to buttress its unwillingness to consider administrators and other personal representatives.\textsuperscript{46}
The Fourth Circuit, however, found that
\begin{quote}
[a]t the very least, Kramer authorizes attention to the substantive relation of the administrator, the beneficiaries and others to the controversy before an undiscriminating decision that the citizenship of a representative controls the determination of diversity jurisdiction.\textsuperscript{47}
\end{quote}
Such a construction of \textit{Kramer} seems at best questionable. The \textit{Kramer} decision might indeed have been relevant if there were any inference of collusion to create diversity in \textit{Miller}, but such was clearly not the case. \textit{Miller}'s ancillary administrator was appointed pursuant to statute, not to obtain federal jurisdiction, but in order that the action could be brought.\textsuperscript{48} Additionally, the statutorily required appointment of the ancillary administrator in \textit{Miller} had the effect of destroying diversity rather than creating it. The Fourth Circuit treated the distinction between creation and destruction as "insignificant."\textsuperscript{49} However, an examination of situations involving § 1359 reveals that such a distinction may indeed be significant.\textsuperscript{50}
Section 1359 is not applicable to situations involving the destruction of diversity, whether collusive or not;\textsuperscript{51} rather it controls only as to the
\textsuperscript{45}394 U.S. at 828 n.9.
\textsuperscript{46}Id.
\textsuperscript{47}456 F.2d at 66.
\textsuperscript{48}The first dismissal was \textit{sua sponte} by Judge Larkins because the Florida administrator was unable to bring a wrongful death action. Motive was not at issue; while motive is important in § 1359 cases, it was not in \textit{Miller}, since without the appointment of an ancillary administrator, no action was possible. It can be reasonably inferred that the "motive" behind the ancillary administrator's appointment was to allow suit to be brought.
\textsuperscript{49}456 F.2d at 66. But see Lester v. McFaddon, 415 F.2d 1101, 1104 (4th Cir. 1969), in which the Fourth Circuit gave greater significance to the distinction between the creation and the destruction of diversity. See also McSparran v. Weist, 402 F.2d 867, 875 (3d Cir. 1968), cert. denied, 395 U.S. 903 (1969).
\textsuperscript{50}See Bass v. Texas Power & Light Co., 432 F.2d 763 (5th Cir. 1970), cert. denied, 401 U.S. 975 (1971); O’Brien v. Avco Corp., 425 F.2d 1030 (2d Cir. 1969); Lester v. McFaddon, 415 F.2d 1101 (4th Cir. 1969); McSparran v. Weist, 402 F.2d 867 (3d Cir. 1968), cert. denied, 395 U.S. 903 (1969).
\textsuperscript{51}See Oakley v. Goodnow, 118 U.S. 43 (1886). Oakley involved Ch. 137, § 5 of the Act of March 3, 1875, 18 Stat. 470, the predecessor of 28 U.S.C. § 1359. The Supreme Court stated:
While, therefore, the courts of the United States have under the act of
creation of diversity. Therefore \textit{Kramer}, which involved the collusive creation of diversity and turned on an examination of the meaning of § 1359, could have little to do with \textit{Mecom}, a case involving the destruction of diversity, with which § 1359 does not deal at all. Rather than view \textit{Kramer} in this light, however, the \textit{Miller} court attributed to it substantially more import than an analysis of the application of § 1359. To the Fourth Circuit, the significance of \textit{Kramer} lay in a grant of authority to examine the duties and responsibilities of a representative before determining the existence of diversity jurisdiction. To be sure, such an examination was necessary in \textit{Kramer} because of the mandate of the statute,\textsuperscript{52} but § 1359 does not require any such determination when diversity destruction is involved. It is therefore difficult to imagine how \textit{Kramer}, dealing as it did with § 1359, can be construed to give any sort of blanket authority to consider representative duties and responsibilities when the actions proscribed by § 1359 are not at all involved.
The Fourth Circuit subsequently supported its contention that \textit{Kramer} modified \textit{Mecom} by pointing to prior circuit court decisions.\textsuperscript{53} Consequently, the court felt justified in examining the parties named on the record before it. It occurred to the court that this was really a case where federal diversity jurisdiction was present:
In every real sense, this is a diversity case. Had the young Floridian survived, he clearly could have held the North Carolina defendants accountable in a federal court. Since his beneficiaries are Floridians, the controversy is no less interstate after his death than before.\textsuperscript{54}
As a result, the Miller court felt that looking to the beneficiaries was justifiable where a resident ancillary administrator was required to repre-
1875 the power to dismiss or remand a case, if it appears that a colorable assignment has been made for the purpose of imposing on their jurisdiction, no authority has as yet been given them to take jurisdiction of a case by removal from a State court when a colorable assignment has been made to prevent such a removal.
\textit{Id.} at 45. There is no statute similar to § 1359 requiring that diversity jurisdiction, which has been artificially defeated, be sustained. 3A J. MOORE, \textsc{Federal Practice} ¶ 17.05[2], at 152 (2d ed. 1970).
\textsuperscript{52}See text of 28 U.S.C. § 1359 at note 44 \textit{supra}.
\textsuperscript{53}456 F.2d at 66. The Fourth Circuit drew support from Green v. Hale, 433 F.2d 324 (5th Cir. 1970); Bass v. Texas Power & Light Co., 432 F.2d 763 (5th Cir. 1970), \textit{cert. denied}, 401 U.S. 975 (1971); O'Brien v. Avco Corp., 425 F.2d 1030 (2d Cir. 1969); and McSparran v. Weist, 402 F.2d 867 (3d Cir. 1968), \textit{cert. denied}, 395 U.S. 903 (1969). Because all these cases dealt with § 1359 and the collusive creation of diversity, it is difficult to see how they can be used to support the proposition that \textit{Kramer}, a § 1359 case, expanded the analysis required by \textit{Mecom}, a destruction of diversity case.
\textsuperscript{54}456 F.2d at 67.
sent the interests of noncitizen beneficiaries as a consequence of the laws of the state in which the claim arose.\textsuperscript{55} To do so, the court contended, was in fact what the Supreme Court of North Carolina had done. To support its decision, the court looked to North Carolina decisions defining real party in interest.\textsuperscript{56} Yet the notion of real party in interest is far from concrete in North Carolina, and it is questionable to what extent North Carolina's concept of the term has any relationship to that implicit in the Federal Rules of Civil Procedure.\textsuperscript{57}
Real party in interest is a procedural matter,\textsuperscript{58} and therefore federal law controls, even in diversity cases.\textsuperscript{59} If the Fourth Circuit was going to make a true real party in interest analysis, it should have looked first to federal law for the definition of the term,\textsuperscript{60} because most authorities agree that the real party in interest under federal law is “the party who, by substantive law, possesses the right sought to be enforced, and not necessarily the person who might finally benefit from any action.”\textsuperscript{61} Then the
\textsuperscript{55}Id.
\textsuperscript{56}Id.
\textsuperscript{57}Fed. R. Civ. P. 17(a) provides:
Every action shall be prosecuted in the name of the real party in interest. An executor, administrator, guardian, bailee, trustee of an express trust, a party with whom or in whose name a contract has been made for the benefit of another, or a party authorized by statute may sue in his own name without joining with him the party for whose benefit the action is brought; and when a statute of the United States so provides, an action for the use or benefit of another shall be brought in the name of the United States. No action shall be dismissed on the ground that it is not prosecuted in the name of the real party in interest until a reasonable time has been allowed after objection for ratification of commencement of the action by, or joinder or substitution of, the real party in interest; and such ratification, joinder, or substitution shall have the same effect as if the action had been commenced in the name of the real party in interest.
For an excellent explanation of real party in interest in federal courts, see Allen v. Baker, 327 F. Supp. 706, 710 (N.D. Miss. 1968).
\textsuperscript{58}Montgomery Ward & Co. v. Callahan, 127 F.2d 32, 36 (10th Cir. 1942); Hughey v. Aetna Cas. & Sur. Co., 32 F.R.D. 340, 341 (D. Del. 1963); DuVaul v. Miller, 13 F.R.D. 197, 198 (W.D. Mo. 1952).
\textsuperscript{59}Hanna v. Plumer, 380 U.S. 460, 469-70 (1965); see Erie R.R. v. Tompkins, 304 U.S. 64, 78, 92 (1938). As Professor Wright observes, “. . . there is no longer an \textit{Erie} problem on matters covered by the Civil Rules. If the rule is valid, and if it applies to the case it is controlling, and no regard need be paid to contrary state provisions.” C. WRIGHT, LAW OF FEDERAL COURTS § 59, at 245 (2d ed. 1970) [hereinafter cited as WRIGHT].
\textsuperscript{60}American Fidelity & Cas. Co. v. All American Bus Lines, 179 F.2d 7, 10 (10th Cir. 1949); Silvius v. Helmick, 291 F. Supp. 716, 717 (N.D. W. Va. 1968); McNeil Constr. Co. v. Livingston State Bank, 185 F. Supp. 197, 200-01 (D. Mont. 1960), \textit{aff’d}, 330 F.2d 88 (9th Cir. 1962); cf. Horton v. Liberty Mut. Ins. Co., 367 U.S. 348, 352-53 (1961).
\textsuperscript{61}WRIGHT § 70, at 293. See Gagliano ex rel. Gagliano v. Bernsen, 243 F.2d 880 (5th Cir. 1957); Dixey v. Federal Compress & Warehouse Co., 132 F.2d 275 (8th Cir. 1942);
court should have looked to state law to determine the party who has such a right, which would have led in turn to the North Carolina wrongful death statute.\textsuperscript{62} As Professor Wright points out, the definition of real party in interest of the forum state is not applicable because it governs only a party's right to sue in state courts.\textsuperscript{63} Since the federal courts, under Rule 17(a),\textsuperscript{64} look only to that part of state law which grants the right to sue, the Fourth Circuit's consideration of North Carolina definitions of the real party in interest and illustrative cases such as \textit{Broadfoot v. Everett}\textsuperscript{65} for support of its position is an exercise in futility; it makes no difference whom North Carolina law defines as the real party in interest.
Likewise, it is difficult to see the ancillary administrator, required by the North Carolina statute, as a nominal party under the Federal Rules as the Fourth Circuit seems to have done.\textsuperscript{66} The properly appointed administrator must exist for any action to be prosecuted.\textsuperscript{67} He is the proper party plaintiff in a wrongful death action\textsuperscript{68} and has authority and responsibility; he is not a mere figurehead.\textsuperscript{69} Such an administrator must exist;
\begin{itemize}
\item Clark & Moore, \textit{A New Federal Civil Procedure II: Pleadings and Parties}, 44 Yale L.J. 1291 (1935); F. James, \textit{Civil Procedure} § 9.2 (1965).
\item \textsuperscript{62}Cases cited at note 60 \textit{supra}.
\item \textsuperscript{63}6 C. Wright & A. Miller, \textit{Federal Practice and Procedure: Civil} § 1544, at 647-48 (1971).
\item \textsuperscript{64}Text of Rule 17(a) appears at note 57 \textit{supra}.
\item \textsuperscript{65}270 N.C. 429, 154 S.E.2d 522 (1967). The language in \textit{Broadfoot} which the Fourth Circuit found "illustrative" was apparently: "His intestate's widow and two surviving children, not he [the administrator], are the real parties in interest." 154 S.E.2d at 525.
\item \textsuperscript{66}456 F.2d at 67. Since the \textit{Miller} court did not consider the ancillary administrator the real party in interest, it could only have meant "nominal" party, since only nominal parties may be disregarded in the determination of diversity jurisdiction. Salem Trust Co. v. Manufacturers' Fin. Co., 264 U.S. 182 (1924). In order to be disregarded as a nominal party, however, the party cannot possess actual powers with regard to the litigation. Only "[w]here the representative cannot prevent the institution or prosecution of actions, or exercise any control over them . . . ." may he be treated as a nominal party. WRIGHT § 29, at 94. See Susquehanna & Wyo. Valley Ry. & Coal Co. v. Blatchford, 78 U.S. (11 Wall.) 172, 177 (1870). \textit{See also} Howard v. United States, 184 U.S. 676 (1902) (formal obligee of a bond as a nominal party); Boon's Heirs v. Chiles, 33 U.S. (8 Pet.) 532 (1834) (dry passive trustee as nominal party).
\item \textsuperscript{67}Young v. Marshburn, 10 N.C. App. 729, 180 S.E.2d 43, 45 (1971).
\item \textsuperscript{68}Brendle v. Gen. Tire and Rubber Co., 408 F.2d 116, 118 (4th Cir. 1969) (applying North Carolina law).
\item \textsuperscript{69}First Union Nat'l Bank v. Hackney, 266 N.C. 17, 145 S.E.2d 352 (1965). The \textit{Miller} court distinguished the result of this case, saying that for the North Carolina Supreme Court to have done otherwise would have been irrational, and indicated that \textit{Hackney} posed no problem to \textit{Broadfoot}. 456 F.2d at 67. \textit{Broadfoot}, however, was concerned with a conflicts problem, and the ancillary administrator in question was a Pennsylvania citizen. 154 S.E.2d at 523. North Carolina cases vacillate considerably, holding that beneficiaries are the real parties in interest for purposes of North Carolina law, \textit{but} the administrators have substan-
\end{itemize}
if no one has applied for or been issued letters of administration within six months, a North Carolina statute provides that a public administrator shall be issued letters,\textsuperscript{70} even though only a right of action for wrongful death may exist.\textsuperscript{71} Therefore, even if an examination of North Carolina law was necessary, it would seem that an administrator would have been appointed whether or not the decedent's father, the Florida administrator, had wanted one appointed. This casts doubt upon any claim of "nominal" status for the ancillary administrator, regardless of whether or not he is considered the real party in interest. Describing the ancillary administrator as a nominal party would be equivalent to discarding the analysis required by Rule 17(a) in the event of a conflict with the policy behind diversity.
While the \textit{Miller} court said that the North Carolina cases it cited were illustrative,\textsuperscript{72} the use of the word "illustrate" is misleading. The court confirmed its real objective in its closing remarks which made reference to the American Law Institute's \textit{Study of the Division of Jurisdiction Between State and Federal Courts}.\textsuperscript{73} The court's language indicates a desire to follow that proposal and that its \textit{Miller} holding was the nearest thing possible, since the proposal is not yet law.\textsuperscript{74}
The Fourth Circuit had, however, another alternative,\textsuperscript{75} striking down
\begin{itemize}
\item \textit{See}, e.g., In re Ives' Estate, 248 N.C. 176, 102 S.E.2d 807 (1958); McCoy v. Atlantic Coast Line R.R., 229 N.C. 57, 47 S.E.2d 532 (1948); Davenport v. Patrick, 227 N.C. 686, 44 S.E.2d 203 (1947).
\item \textsuperscript{70}N.C. Gen. Stat. § 28-20 (Repl. vol. 1966).
\item \textsuperscript{71}\textit{See} N.C. Gen. Stat. § 28-2.3 (Repl. vol. 1966). Even though no assets exist within the state except a right of action for wrongful death, an administrator can be appointed. The right of action is itself an asset. In re Scarbourough, 261 N.C. 565, 135 S.E.2d 529, 531 (1964).
\item \textsuperscript{72}456 F.2d at 68.
\item \textsuperscript{73}ALI Study of the Division of Jurisdiction Between State and Federal Courts § 1301 (1969 Official Draft).
\item \textsuperscript{74}456 F.2d at 68.
\item \textsuperscript{75}It might be suggested that the Fourth Circuit could have granted diversity jurisdiction without finding the North Carolina statute unconstitutional or torturing \textit{Kramer}. If the first dismissal could be viewed as a result of a determination that the doctrine of Erie R.R. v. Tompkins, 304 U.S. 64 (1938), required dismissal because of noncompliance with section 28-8(2), the Fourth Circuit might have analyzed \textit{Miller} on the basis of Byrd v. Blue Ridge Rural Elec. Cooperative, Inc., 356 U.S. 525 (1958). In \textit{Byrd} the Supreme Court directed that federal policy as to how federal courts should be run be added as an affirmative countervailing consideration to the balance when testing the control of state law. Using the \textit{Byrd} analysis, the Fourth Circuit might have viewed the policy underlying diversity as an affirmative countervailing consideration outweighing North Carolina's interest in statutorily requiring resident administrators when the statute works to destroy diversity. Such a result was approached in Szantay v. Beech Aircraft Corp., 349 F.2d 60 (4th Cir. 1965), in which the Fourth Circuit held that the federal policy behind diversity, among other federal policy considerations, overrode a South Carolina door-closing statute.
\end{itemize}
the North Carolina statute requiring a resident administrator as unconstitutional. Such a result seems to arise from looking, at least initially, to \textit{Railway Co. v. Whitton's Administrator}.\textsuperscript{76} There the Supreme Court held:
Whenever a general rule as to property or personal rights, or injuries to either, is established by State legislation, its enforcement by a Federal court in a case between proper parties is a matter of course, and the jurisdiction of the court, in such case, is not subject to State limitation.\textsuperscript{77}
However, an analysis using \textit{Whitton} may not be completely satisfactory because it can be distinguished from \textit{Miller} in that \textit{Whitton} involved a statute which \textit{explicitly} limited a wrongful death action to the state courts, while the North Carolina statute is but an \textit{implicit} limitation. \textit{Mexican Central Railway v. Pinkney}\textsuperscript{78} is more on point in a situation involving an implicit limitation. In \textit{Pinkney} it was argued that a Texas statute which provided that jurisdictional immunity would be waived by a representative appearance was binding on the federal courts. The Supreme Court held to the contrary:
[T]he jurisdiction of the Circuit Courts of the United States has been defined and limited by the acts of Congress, and can be neither restricted nor enlarged by the statutes of a State.\textsuperscript{79}
Seemingly, then, a permutation of federal jurisdiction by the states, be it direct or indirect, has been rejected by the Supreme Court. Unless a state closes its judicial doors to a particular class of persons in other than a purely procedural matter, states cannot indirectly limit federal jurisdiction.\textsuperscript{80}
To have found the statute unconstitutional would have allowed the Millers a federal forum without complicating diversity determination
\textsuperscript{76}80 U.S. (13 Wall.) 270 (1871).
\textsuperscript{77}Id. at 286.
\textsuperscript{78}149 U.S. 194 (1893).
\textsuperscript{79}Id. at 206.
\textsuperscript{80}The North Carolina statute does not "close the doors" of state courts to claims of nonresident administrators, since the claims can be prosecuted using the ancillary administrator device. See generally Wright § 46, at 174-77. See also Stewart, The Federal "Door Closing" Doctrine, 11 Wash. & Lee L. Rev. 154 (1954).
procedure. To have done so would also have eliminated potential equal protection objections to the North Carolina statute. While § 28-8(2) disqualifies anyone who is not a North Carolina resident from being an administrator of an intestate's estate, it does not disqualify a non-resident executor of a testate's estate,\textsuperscript{81} albeit the powers and responsibilities of executors and resident administrators are substantially the same insofar as powers to institute or defend actions are concerned.\textsuperscript{82} All a non-resident executor need do is appoint a resident process agent,\textsuperscript{83} while an administrator must himself be a resident of the state.\textsuperscript{84} This would seem to be not only an unnecessary complication but also an effective penalizing of the heirs of those who die intestate by virtue of allowing more limited representation than for those who die testate. The striking down of the North Carolina statute as unconstitutional would have admittedly put the Fourth Circuit in conflict with the Sixth Circuit.\textsuperscript{85} However, to look to the beneficiaries still puts the Fourth Circuit in conflict with other circuits, as well as with the Supreme Court and "conventional wisdom."\textsuperscript{86}
Seemingly overlooked by the \textit{Miller} court is another ramification of its decision. If followed, it will lead to extensive procedural complications, substantially delaying trial by necessitating an in-depth party examination. The simplicity of \textit{Mecom} and earlier holdings\textsuperscript{87} allows an instant determination of diversity; only in § 1359 situations does more extensive examination become necessary, that is, only in cases involving allegedly collusively created diversity. However, the \textit{Miller} decision will lead to mandatory, presumably pretrial, party exploration, perhaps even going substantially beyond the pleadings. This would be going further than the requirements of Rule 17(a), which limits its examination to the party who is entitled, under substantive law, to bring suit.\textsuperscript{88} While this could perhaps be rectified by including in the pleadings a reference to the existence of a beneficiary/administrator conflict, this too would go beyond the Federal Rules, which make no provision for such a complicated procedure. If in fact one of the primary purposes of the Federal Rules is to simplify pleading,\textsuperscript{89} any complication going at cross-purposes to such an aim would seem to be hard to justify.
The Fourth Circuit felt that the federal forum should be allowed the
\begin{itemize}
\item \textsuperscript{81}See note 11 \textit{supra}.
\item \textsuperscript{82}N.C. Gen. Stat. §§ 28-8(2), 28-182 (Repl. vol. 1966).
\item \textsuperscript{83}N.C. Gen. Stat. § 28-186 (Repl. vol. 1966).
\item \textsuperscript{84}See note 11 \textit{supra}.
\item \textsuperscript{85}See note 19 \textit{supra}.
\item \textsuperscript{86}WRIGHT § 29 n.19 (Supp. 1972).
\item \textsuperscript{87}Note 41 \textit{supra}.
\item \textsuperscript{88}See text accompanying note 57 \textit{supra}.
\item \textsuperscript{89}Conley v. Gibson, 355 U.S. 41, 47-48 (1957).
\end{itemize}
Polk County Transportation System: Winter Haven Area Transit - System Safety Review Report
Lisa Staes
*University of South Florida*
Amber Reep
*University of South Florida*
**Scholar Commons Citation**
Staes, Lisa and Reep, Amber, "Polk County Transportation System: Winter Haven Area Transit - System Safety Review Report" (2002). *CUTR Research Reports*. 104.
[https://scholarcommons.usf.edu/cutr_reports/104](https://scholarcommons.usf.edu/cutr_reports/104)
POLK COUNTY TRANSPORTATION SYSTEM
WINTER HAVEN AREA TRANSIT
SYSTEM SAFETY REVIEW REPORT
SEPTEMBER 16, 2002
PREPARED FOR:
FLORIDA DEPARTMENT OF TRANSPORTATION
DISTRICT 1 - PUBLIC TRANSIT OFFICE
PREPARED BY:
Lisa Staes and Amber Reep
CENTER FOR URBAN TRANSPORTATION RESEARCH
UNIVERSITY OF SOUTH FLORIDA
4202 E. FOWLER AVENUE
TAMPA, FL 33620
Bus Transit System Safety Review
Polk County Transportation System/
Winter Haven Area Transit
Conducted For: FDOT District 1, Public Transit Office
Review Dates: August 22 – 23, and 27, 2002
Report Date: September 16, 2002
FDOT Manager: Jan Parham, District Headquarters – Bartow
Reviewer(s): Amber Reep, Lisa Staes
Contractor: Center for Urban Transportation Research
Address: 4202 E. Fowler Avenue, Tampa, Florida 33620
Phone Number: 813-974-9787
I. INTRODUCTION
On August 22 – 23, 2002, the Center for Urban Transportation Research (CUTR) conducted an on-site Bus Transit System Safety Review of the Polk County Transportation System, including the Winter Haven Area Transit, at its facilities located at 1290 Golfview Avenue in Bartow, Florida.
The reviewers from CUTR included:
Amber Reep, Research Associate
Lisa Staes, Program Director, Transit Technical Assistance and Training
The agency representatives who participated in the review included:
Paul Simmons, Program Supervisor II
Diane Slaybaugh, Planner II
Howard McMillan, Transit Supervisor
Orrin Schaal, Quality Assurance Manager
Larry Tullis, Vehicle Maintenance Coordinator
Doug Bloom, Vehicle Maintenance Technician
Bonnie Ewing, Transit/Paratransit Supervisor
Bobbie Hinde, Assistant Director Human Services Division
Steve Githens, Director, Lakeland Area Mass Transit District (LAMTD)
Charles Sperry, Accountant, LAMTD
The purpose of this review was to determine Polk County Transportation System's compliance with the provisions of Chapter 14-90, Florida Administrative Code, as amended on November 10, 1992 and August 2, 1994. The provisions include the development of and compliance with a locally developed and adopted System Safety Program Plan (SSPP), performance of safety inspections of all operational buses, documentation of compliance with equipment and operational safety standards, and safety monitoring of covered contractors. This review was conducted in accordance with the Florida Department of Transportation (FDOT) "Bus Transit System Safety Program" Procedure, Topic Number 725-030-009.
II. SAFETY REVIEW CHECKLIST ITEMS
The Reviewers examined the following items during the Bus Transit System Safety Program review:
(1) System description/general information
(2) Adoption, retention, and compliance with a SSPP
(3) Bus system safety inspection program
(4) Record of valid driver licenses
(5) Driver training program and requirements
(6) Written operational and safety procedures
(7) Driving hours and work periods
(8) Pre-employment medical examinations
(9) Biennial medical examinations
(10) Bus maintenance program
(11) Bus accident records
(12) Drivers' daily bus inspection procedures
(13) Bus emergency and safety equipment
(14) Adoption of safety standards and safety monitoring of contractors
(15) Drug-Free Workplace Act compliance
III. DEFINITIONS
Area of Concern: A weakness in the adoption and implementation of the SSPP, or in addressing and complying with FDOT safety standards and guidelines. Recommended practices or a recommended corrective action may be provided to address an area of concern or improve the effectiveness of the transit system safety program.
Deficiency: Area in which the bus transit system is found to be non-compliant, deficient or inadequate in complying with its SSPP or FDOT's safety standards and guidelines. Corrective action(s) and an implementation schedule(s) will be provided for any deficiency.
Corrective Action: An action or requirement that must be prepared and implemented to minimize, control, warn of, or eliminate a finding of deficiency or area of concern identified by the review, and that must be completed within a time specified by the FDOT.
Observation: An offered suggestion, view, or comment regarding safety performance. An observation may address or refer to information obtained during the review.
IV. BUS SYSTEM SAFETY REVIEW FINDINGS
The following section describes the specific findings derived from the inspection of each of the 15 system safety areas covered by this review. Findings shall consist of actual information obtained during the review and will be identified as an "Area of Concern," "Deficiency," or "Observation," as applicable. Observations do not reflect a finding of "non-compliance."
(1) General Information:
The Polk County Transportation System is the designated Community Transportation Coordinator for Polk County. Transportation Disadvantaged services are provided directly by Polk County Transportation and contracted service providers. Polk County Transportation also operates three Intercity Routes, an express route running between Winter Haven and Bartow, and the Winter Haven Area Transit. All transportation providers are given the opportunity to develop their own SSPP or subscribe to the SSPP adopted by Polk County Transportation.
| | Polk | WHAT |
|---------------|------|------|
| Total Number of Drivers | 44 | 10 |
| Full-time | 40 | 4 |
| Part-time | 4 | 0 |
| Backup Drivers | | 6 |
| Number of Operational Buses | 50 (5 sedans) | 4 |
| Type I | 4 | 0 |
| Type II | 46 | 4 |
| W/C Accessible | 32 | 4 |
Winter Haven Area Transit service is currently provided through the use of four 35-foot Orion low-floor buses owned by the Lakeland Area Mass Transit District. In October 2002, the Lakeland Area Mass Transit District will be leasing four 30-foot 2002 Gillig coaches from Polk County to provide the transportation services in the Winter Haven system.
Maintenance Locations – PCTS
Polk County Fleet Management, Golfview Avenue, Bartow, Florida
Eloise Resource Center, Eloise, Florida (management offices)
Maintenance Location – LAMTD
Lakeland Area Mass Transit District, 1212 George Jenkins Boulevard, Lakeland.
The following transportation providers are under contract with PCTS:
- Lakeland Area Mass Transit District (LAMTD), 1212 George Jenkins Boulevard, Lakeland – Maintenance performed on site
- Independent Community Transport, Inc., 2020 Combee Road, Lakeland
- Southeast Christian Assembly Transportation
The following agencies are under a coordination agreement with PCTS:
- Polk County Association for Handicapped Citizens, 1038 Sunshine Drive, E., Lakeland
- Polk Training Center, 111 Creek Road, Lake Alfred
- Winter Haven Hospital – Esteem, 200 Avenue F, N.E., Winter Haven
- Sunrise Communities (currently in process of acquiring a formal agreement)
- Peace River Center for Personal Development, Inc., 1745 U.S. Highway 17 South, Bartow
(2) Adoption, retention, compliance with and minimum annual update of SSPP
Polk County has a very comprehensive, well-developed System Safety Program Plan that was updated in July 2002. The Bus Transit System Safety Annual Safety Certification, signed by Sandra Winegar, Director, Polk County Transit Services Division, on February 8, 2002, is included as an exhibit to the SSPP.
Polk County Transportation’s SSPP addresses all required elements contained in Section 14-90.004(1)(a), including safety considerations and standards for the following areas: management; vehicles and equipment; operational functions; driving requirements; maintenance; equipment for transporting wheel chairs; training; Federal, state, and local regulations, ordinances, and laws; and private contract bus operators.
Evidence was presented through the examination of the annual safety reviews of PCTS's contracted providers that most had developed their own SSPPs. In most cases, it appeared the SSPPs prepared by these agencies were comprehensive, containing the minimum elements required by Chapter 14-90, Florida Administrative Code.
Areas of Concern: NONE
Deficiencies: NONE
Observations: NONE
(3) Records of minimum annual safety inspections of all vehicles
Larry Tullis, Vehicle Maintenance Coordinator, performs annual safety inspections during the months of June and July each year. In addition, safety inspections are completed monthly during preventative maintenance activities and are performed by Polk County Fleet Services. Records of both annual and monthly safety inspections are kept in each vehicle file in the office of the Vehicle Maintenance Coordinator in Eloise. Polk County certified to its compliance with
this requirement via the Bus System Safety Annual Safety Certifications signed by the Transit Director on February 8, 2002.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** NONE
(4) **Photostatic copy of valid driver's license in each driver's file**
Reviewers examined driver files for all Polk County Transportation System and Winter Haven Area Transit drivers, as well as for employees eligible to operate Polk County vehicles, including the Transit Supervisor, the Vehicle Maintenance Coordinator, the Assistant Vehicle Maintenance Coordinator, the Transit/Paratransit Supervisor, and the Program Supervisor. A photostatic copy of a current driver's license for each of these individuals was contained in their respective personnel files. In accordance with Polk County Transportation System policy, all PCTS drivers, supervisors, and maintenance coordinators must have, at a minimum, a Class C CDL with the Passenger endorsement.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** NONE
(5) **Records or documentation of driver training performed for type(s) of equipment operated**
**Polk County Transportation System**
The Polk County Transportation System utilizes an information spreadsheet to track hire date, date of pre-employment drug test, biennial physicals, date(s) of road training, first aid training, CPR, passenger assistance training, defensive driving training, sexual harassment training, the 15-passenger van comprehensive safety awareness program, and drug/alcohol training.
The review team examined 44 (100%) Polk County Transportation System drivers' files to determine if required documentation was included and if that documentation was current. One driver appeared to be overdue for the biennial CPR training; it was later noted that she had completed the class but had not yet received her certificate. One driver had receipts for the Drug Free Workplace policy and the Employee Personnel Handbook that were not completed or signed. This was corrected during the visit. There were no other discrepancies noted in the examination.
Winter Haven Area Transit
Lakeland Area Mass Transit District drivers provide the Winter Haven Area Transit services. LAMTD drivers, including those providing WHAT services, are required to complete an extensive driver training program, including, but not limited to the following training: defensive driving, fire safety, wheelchair securement procedures, ADA proficiency, passenger relations, employee/drug material awareness, the bus system safety program plan, bio-hazard cleanup procedures, radio procedures, over-the-road training, and training on various LAMTD/WHAT routes.
The review team examined 10 (100%) LAMTD/WHAT drivers' files, including four full-time drivers and six WHAT back-up drivers', to determine if required documentation was included and if that documentation was current. In all instances, the drivers' files contained certificates and other documentation supporting their compliance with Chapter 14-90, Florida Administrative Code and driver training requirements identified in the LAMTD SSPP and the Driver's Handbook.
Areas of Concern: NONE
Deficiencies: NONE
Observations: NONE
(6) Records of written operational and safety procedures provided to drivers driving without supervision
Polk County Transportation System
Polk County Transportation Services drivers are required to read and certify that they have received the Polk County Transportation System Employee Personnel Handbook and the PCTS Employee Handbook. All operational and driving requirements are contained within these documents. Polk County Transportation System drivers are required to sign a receipt that they have received the PCTS Employee Handbook. Drivers do not receive a copy of the SSPP. All pertinent information pertaining to the drivers is covered in the PCTS Employee Handbook of which they do receive a copy. Additional information is covered during the initial training process and then on an as needed basis. Each driver's file reviewed contained a signed certification that they had received the PCTS Employee Handbook.
Winter Haven Area Transit
LAMTD has an Operational and Procedures Policy Book that is distributed to all LAMTD drivers. Each driver is required to sign a "Receipt Form" that provides a
certification that the driver has read and understood the policies identified in the manual. In addition, a "Department of Transportation and Citrus Connection Pre-Employment and Student Control Sheet," which is included in each driver's personnel folder, notes the specific dates on which drivers were given policies, manuals, and procedures; each line of notation is signed and dated by both the employee and the supervisor.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** NONE
(7) Documentation of compliance with Section 14-90.006 by records of each driver's work period to include documentation of:
- Total days worked
- On-duty hours
- Driving hours
- On and off reporting times
**Polk County Transportation System**
The reviewers examined the time sheet files for all Polk County Transportation System drivers for the period of March 3, 2002 through August 4, 2002. During the initial observations of the time sheets, it appeared there were a few drivers who had driven for more than 12 hours during a 24-hour period. However, a subsequent review of the Trapeze database, with the assistance of Paul Simmons, provided additional information related specifically to driving times. It was determined, based on this additional research, that there were no instances of drivers driving more than 12 hours per day. In addition, there were no instances of drivers being on duty for 16 or more hours per day, or for more than 70 hours during any seven consecutive calendar days.
**Winter Haven Area Transit**
The reviewers examined the time sheet files for all WHAT dedicated and back-up drivers for the period of July 1, 2002 through August 10, 2002. In no instances were there drivers driving for over 12 hours during a 24-hour period, nor were there any instances of drivers being on-duty for 16 hours during a 24-hour period.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** NONE
(8) **Records of each driver's pre-employment examinations**
The reviewers examined all PCTS and WHAT drivers' files to document this requirement. In all cases, an examination of the driver files confirmed that pre-employment medical examinations had been conducted. The PCTS driver information spreadsheet was used to track the pre-employment and biennial medical examination dates for each PCTS driver. The LAMTD Driver Identification File is used to track pre-employment and biennial medical examination dates for WHAT drivers.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** NONE
(9) **Records of biennial driver medical examinations**
The reviewers examined all PCTS and WHAT drivers' files to document this requirement. In all cases, the drivers' files confirmed that biennial medical examinations had been conducted. A copy of the physician signed examination form is contained within each driver's file, both at PCTS and LAMTD. The driver information spreadsheet was used to track the pre-employment and biennial medical examination dates for each PCTS driver. The LAMTD Driver Identification File is used to track pre-employment and biennial medical examination dates for WHAT drivers.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** NONE
(10) **Records of vehicle maintenance including:**
- Types of maintenance and inspection
- Maintenance intervals
- Dates of preventative maintenance for each vehicle
- Documentation of any contracted maintenance activities
- All other information required by Section 14-90.004(d), FAC
Polk County Transportation Services and the County's Fleet Management Department coordinate all daily repairs and standard preventative maintenance activities. Preventative maintenance activities for PCTS vehicles occur at 7,000-mile intervals for gasoline powered vehicles and at 5,000-mile intervals for diesel
powered vehicles. Vehicle inspections are conducted on a monthly basis, with annual inspections occurring in June and July of each year. The Vehicle Maintenance Coordinator conducts all annual vehicle inspections. The Lakeland Area Mass Transit District conducts its standard preventative maintenance activities, or "Type A inspections," at 7,000-mile intervals. Additionally, PCTS contracts maintenance with several companies: Wikert Ford in Lake Wales and Shifters in Mulberry perform transmission repairs, and Budget Glass in Winter Haven performs glass repairs on PCTS vehicles. Body work repair sites are selected through the low-bid procurement process.
Reviewers examined the maintenance records for all Polk County Transit System vehicles and those LAMTD vehicles utilized in the WHAT system. In every instance, proper documentation was included confirming the accomplishment of preventative maintenance activities in a timely manner.
In addition, reviewers pulled driver's vehicle inspection reports for the month of August for each PCTS vehicle and vehicle inspection reports for the two weeks prior to the review for LAMTD vehicles. Any vehicle defects noted by the drivers were reviewed by maintenance personnel and repaired, if warranted. Evidence was provided that these repairs had been made.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** NONE
(11) **Accident reporting, evaluation, and record maintenance system**
PCTS maintains a comprehensive accident reporting system that includes accident evaluation and record maintenance activities. All incident reports are kept on file at PCTS and all vehicle accident reports are filed in the office of Risk Management. All accidents are reviewed by the Transportation Program Supervisor II to determine the nature and cause(s) and to determine what actions or procedures should be implemented to prevent any recurrence. PCTS operates under the policies established by the Polk County Board of County Commissioners. These procedures are provided as an exhibit to the SSPP. The PCTS Transportation Supervisor and the Polk County Safety Officer investigate all incidents/accidents. In addition, the Polk County Crash Review Committee, established under the purview of the County Safety Committee and the County Safety Manager, reviews all accidents/collisions. Disciplinary actions are determined by both the number of occurrences by driver and the number of points assessed to a driver's license.
(12) Documentation of vehicle daily inspections in accordance with Section 14-90.006(7)
Polk County Transportation System
Polk County Transportation System and LAMTD drivers are required to conduct pre-trip inspections prior to going into service. The reviewers randomly selected pre-trip inspection reports for the Polk County Transportation System at their Maintenance Department office in Eloise. The period of time chosen for the review was July 24, 2002 through August 20, 2002. Many of the documented defects were minor in nature or resulted from a driver not being completely familiar with his or her equipment (for example, dark exhaust noted in diesel vehicles when the parking brake is deployed, a common occurrence in diesel vehicles); many of these did not require maintenance service. Examples include notations of tag lights being out (often a result of dirt gathering in the recessed area where the light is contained) or other light fixtures needing to be replaced (generally corrected without being sent to Polk County Fleet Management). Other items noted were repaired in a timely manner, with documentation available supporting the completion of the repairs.
Winter Haven Area Transit
The reviewers randomly selected pre-trip inspection reports at LAMTD for the period of August 6, 2002 through August 21, 2002 for the four vehicles utilized by LAMTD for the WHAT service (1040-1044). In each case, drivers had completed all daily pre-trip inspections. There were few defects noted. Evidence was produced indicating that all defects had been repaired in a timely manner.
Areas of Concern: NONE
Deficiencies: NONE
Observations: NONE
(13) Existence of required vehicle emergency and safety equipment including:
- Standee line warning
- Identification of emergency exits
- Driver's and passenger's seat belts
- Fire extinguisher
- Portable reflectors
- Manufacturer's wheelchair lift certification
Reviewers examined 23 vehicles, including the four LAMTD vehicles utilized in the WHAT service. In all instances, vehicles were clean and well maintained. Standee line and emergency exit markings were clearly visible, driver and
passenger seat belts were present and in excellent condition, portable reflectors were present, and the manufacturer's lift certification was visible on each vehicle. In addition, restraint harnesses and belts were properly secured and available. Tread wear on tires was acceptable. The fire extinguishers in 22 of the vehicles had recently been charged and/or inspected. In vehicle number T-37, the last inspection had occurred in July 2001.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** The fire extinguisher in vehicle number T-37 should be examined to ensure that it is properly charged and has a current inspection sticker.
(14) **Adoption and monitoring of safety standards for contracted operators**
Polk County currently contracts with the following subcontractors: Lakeland Area Mass Transit District (for the Winter Haven Area Transit services); Independent Community Transport, Inc.; Peace River Center for Personal Development, Inc.; Polk County Association for Handicapped Citizens, Inc.; Southeast Christian Assemblies of God, Inc.; Polk Training Center for Handicapped Citizens, Inc.; and the Fellowship Dining Transportation Program (housed within the Polk County Human Services Department). These operators provide transportation services for their own clients as well as transportation disadvantaged and Medicaid transportation. In the case of LAMTD, fixed route services are provided for Polk County to serve the transportation needs of the City of Winter Haven. Evidence provided through the examination of the annual safety reviews conducted at these agencies by the PCTS Quality Assurance Unit indicated that each subscribes to its own SSPP.
Reviewers examined the most recent monitoring reports conducted by the PCTS Quality Assurance Unit for each of these providers. The reviews that had been conducted were exceptionally thorough. Where findings were noted, agencies were given the opportunity to correct the deficiencies within a specified timeframe. Documentation of follow-up visits by PCTS Quality Assurance Unit personnel was provided to the reviewers.
**Areas of Concern:** NONE
**Deficiencies:** NONE
**Observations:** NONE
(15) Compliance with "Drug Free Workplace Act"
PCTS has an established Drug Free Workplace Act policy that was adopted by the Polk County Board of County Commissioners and is included as an attachment to the PCTS SSPP.
Areas of Concern: NONE
Deficiencies: NONE
Observations: NONE
V. COMPLIANCE TIMETABLE
There were no items of non-compliance found during this review.
VI. SUMMARY OF REVIEW AND ADDITIONAL COMMENTS
The review conducted at PCTS revealed a very well run, efficient and effective organization. The courtesy and responsiveness of the staff involved in the review was exceptional.
Development and Verification of a Series Car Modelica/Dymola Multi-Body Model to Investigate Vehicle Dynamics Systems
Christian Knobel\textsuperscript{a} Gabriel Janin\textsuperscript{b} Andrew Woodruff\textsuperscript{c}
\textsuperscript{a}BMW Group Research and Technology, Munich, Germany, firstname.lastname@example.org
\textsuperscript{b}École Nationale Supérieure de Techniques Avancées, Paris, France, email@example.com
\textsuperscript{c}Modelon AB, Lund, Sweden, firstname.lastname@example.org
Abstract
The development and verification of a multi-body model of a series production vehicle in Modelica/Dymola are presented. The model is used to investigate and compare any possible configuration of actuators to control vehicle dynamics, with a general control approach based on model inversion and a non-linear online optimization.
Keywords: Multi-body Vehicle Model, Vehicle Dynamics, Model Verification, Model Validation, Active Vehicle Dynamics Systems, Tire Model, Suspension Kinematics, Suspension Compliance
1 Introduction
Systems for the control of vehicle dynamics first went to series production in 1978, with the limitation of brake pressure to avoid locking the wheels and thereby ensure the ability to corner under all braking conditions. Braking individual wheels (independently of the driver's commands) to stabilize the vehicle at the driving limit went to series production in 1995. In the last few years, control systems for vehicle dynamics with additional actuators to control steering, drive torque distribution and wheel load distribution have entered the market.
All of these systems act on the allocation of forces from the center of gravity (CG) to the four tire contact patches (TCP) and on the force transfer at the TCPs. This strong interdependence between the systems\textsuperscript{1} is the reason why independent operation of more than one of them is possible only at a loss of potential, if critical interferences are to be prevented. In [2] (cf. also [3], [4], [5], [6] and [7]) a general approach was introduced to investigate the reference behavior (the best possible allocation and transfer of the forces acting on the vehicle) for any configuration of actuators controlling vehicle dynamics, including all steering angles $\delta = \begin{bmatrix} \delta_1 & \delta_2 & \delta_3 & \delta_4 \end{bmatrix}^T$, brake/drive torques $M = \begin{bmatrix} M_1 & M_2 & M_3 & M_4 \end{bmatrix}^T$, wheel loads $F_z = \begin{bmatrix} F_{z1} & F_{z2} & F_{z3} & F_{z4} \end{bmatrix}^T$ and even camber angles $\gamma = \begin{bmatrix} \gamma_1 & \gamma_2 & \gamma_3 & \gamma_4 \end{bmatrix}^T$. The comparison of the reference operation of different configurations may support decisions in the future development of vehicle dynamics. The investigation of the reference behavior also supports controller development and the dynamic specification of the actuators, and makes it possible to quantify the loss of potential if the applied actuators provide less dynamic performance than ideally required. The reconfigurable nature of the general approach further allows the impact of actuator failures on vehicle dynamics to be studied for reliability investigations.
Figure 1: Planar vehicle model with influencing variables and vehicle motion $y$
Because the approach presented in [2] is based on model inversion of a planar vehicle model with the plane motion $y = \begin{bmatrix} \dot{\psi} & \beta & v \end{bmatrix}^T$ described by the yaw rate $\dot{\psi}$, the body slip angle $\beta$ and the velocity $v$ (cf. Figure 1), a verified multi-body model is needed as the
\textsuperscript{1}cf. [1] for an overview and a detailed classification of systems for vehicle motion control.
vehicle model to ensure that the control approach also works with all the effects neglected during controller design. Using the verified Modelica/Dymola multi-body vehicle model presented in this paper as the vehicle model for the control approach allows all possible configurations of available actuators to be investigated and compared quickly and easily.
The possibility to model multi-body suspension assemblies, controllers, and hydraulic and mechatronic actuators in one and the same environment was the reason to choose Dymola as the modeling and simulation tool. Development and verification of the model were done bottom-up. The multi-body front and rear suspensions, tires, steering system, power train and body were modeled and then verified separately against test rig results, as shown in Section 2. Creating the multi-body vehicle model by connecting those subsystems together is presented in Section 3, as is the verification of the full vehicle through objective test maneuvers with a series car equipped with additional measurement technology. Finally, an application example is presented in Section 4.
2 Development and Verification of Subsystems
Using the Multi-Body Library [8] and the Vehicle Dynamics Library [9], the tires, suspensions, body and environment for the multi-body model were constructed. Simple models of the steering and power train systems were developed, since they are needed only for the verification of the multi-body vehicle model against a conventional series production vehicle.
2.1 Tire
Pacejka's Magic Formula [10] is used to model the planar transfer behavior of the tire. It first calculates the pure forces
\[
F'_{x0i} = D \sin \left( C \arctan \left( B\kappa - E(B\kappa - \arctan B\kappa) \right) \right) \\
F'_{y0i} = D \sin \left( C \arctan \left( B\alpha - E(B\alpha - \arctan B\alpha) \right) \right) \tag{1}
\]
for each wheel $i \in \{1, \ldots, 4\}$, where the prime ($'$) indicates representation in the wheel coordinate system, from the inputs longitudinal slip $\kappa$, tire side slip angle $\alpha$, wheel load $F_z$ and camber angle $\gamma$. Secondly, via
\[
F'_{xi} = F'_{x0i} \cos \left( C \arctan \left( B\alpha - E(B\alpha - \arctan B\alpha) \right) \right) \\
F'_{yi} = F'_{y0i} \cos \left( C \arctan \left( B\kappa - E(B\kappa - \arctan B\kappa) \right) \right) \tag{2}
\]
the interdependence between the longitudinal and lateral tire forces is considered, where the peak parameter $D(F_z, \gamma)$, the shape parameter $C$, the stiffness parameter $B(F_z, \gamma)$ and the curvature parameter $E(F_z, \gamma)$ differ between (1) and (2) and between the longitudinal and lateral directions. For the identification of these parameters, an error minimization is used to fit the model result to the test rig results of the tire used (cf. Figure 2 and Figure 3). Because braking (negative longitudinal tire forces) is more important for vehicle dynamics control than accelerating (positive longitudinal tire forces), different weighting factors are used to obtain a better correlation for the negative longitudinal tire forces.
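To make the fitting procedure concrete, the following minimal Python sketch (our illustration, not the authors' identification code; all coefficient values and the synthetic "test rig" data are assumed) implements the pure-slip formula (1), a combined-slip weighting in the spirit of (2), and a weighted least-squares fit that emphasizes braking:

```python
import numpy as np
from scipy.optimize import least_squares

def magic_formula_pure(s, B, C, D, E):
    # Pure-slip Magic Formula, eq. (1): s is the longitudinal slip kappa
    # (or, for the lateral force, the tire side slip angle alpha).
    return D * np.sin(C * np.arctan(B * s - E * (B * s - np.arctan(B * s))))

def combined_longitudinal(kappa, alpha, p_x, p_w):
    # Combined-slip longitudinal force in the spirit of eq. (2): the pure
    # force F'_x0 is scaled by a cosine weighting term driven by alpha.
    Bx, Cx, Dx, Ex = p_x                     # longitudinal parameter set
    Bw, Cw, Ew = p_w                         # weighting parameters (assumed form)
    fx0 = magic_formula_pure(kappa, Bx, Cx, Dx, Ex)
    return fx0 * np.cos(Cw * np.arctan(Bw * alpha - Ew * (Bw * alpha - np.arctan(Bw * alpha))))

rng = np.random.default_rng(0)
kappa = np.linspace(-0.3, 0.3, 61)           # longitudinal slip range
true_p = (12.0, 1.65, 4000.0, 0.95)          # illustrative B, C, D [N], E
fx_meas = magic_formula_pure(kappa, *true_p) + rng.normal(0.0, 50.0, kappa.size)

weights = np.where(kappa < 0.0, 3.0, 1.0)    # weight braking more, as in the paper

def residual(p):
    return np.sqrt(weights) * (magic_formula_pure(kappa, *p) - fx_meas)

fit = least_squares(residual, x0=(10.0, 1.5, 3500.0, 0.9))
print("identified B, C, D, E:", np.round(fit.x, 3))
print("combined F_x at kappa=-0.1, alpha=3 deg:",
      round(float(combined_longitudinal(-0.1, np.deg2rad(3.0), fit.x, (8.0, 1.2, 1.0))), 1), "N")
```

The factor of 3 on braking points is a stand-in for whatever weighting the authors actually used; the design choice it illustrates is simply that the residuals for negative slip count more in the fit.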
2.2 Suspension
For the front and rear suspensions, ADAMS models could be used as the source for modeling the McPherson front suspension (cf. Figure 4) and the semi-trailing arm rear suspension (cf. Figure 5) in Modelica/Dymola.
The default models for these suspension designs in the Vehicle Dynamics Library [9] could not be used without customizing and modifying the design, the joint locations and the kinematic relationships to match the behavior of the source models in ADAMS.
The McPherson front suspension uses independent lower rods instead of the conventional control arm of [9]. The rear suspension uses a trailing arm design with two guiding links, with the body spring and anti-roll subsystems attached to the upper guiding link (cf. [11]). Non-linear bump stops were added to the damper tubes of the front and rear suspensions.
The hard points for all joint locations of the model needed to be adjusted to agree with the ADAMS source model.
The kinematics of the suspension models are verified using a vertical travel sweep test rig, which had to be modeled in Modelica/Dymola as well. Camber and toe changes are plotted to compare the kinematic behavior of the suspension models with the source model and the real suspension of the series vehicle used (cf. Figure 6).
Figure 4: McPherson front suspension design
Figure 5: Semi-trailing arm rear suspension design
Figure 6: Kinematic analysis: camber in degrees and toe in angular minutes for front (left) and rear suspension (right)
Simulation of the rigid suspension model (without any bushings) was impossible and caused singularity errors. After the implementation of bushings, simulation of the multi-body front and rear suspensions was possible. However, a pure investigation of the rigid kinematics was only possible using very stiff bushings, which explains the differences between the source model and the Modelica/Dymola model in Figure 6.
The kinematics of the real suspension could only be verified with its bushings in place, which is again the main reason for the differences between the model results and the real test rig results shown in Figure 6.
2.3 Further Subsystems of the Vehicle
After modeling the tires and suspensions, body, power train and steering system models, as well as the vehicle's environment, must be added to complete the multi-body vehicle model.
The vehicle's body is considered rigid and its mass is distributed as follows: one lumped sprung mass, including the driver, one passenger and fuel, is assigned to the body, whereas the unsprung mass of the wheels, including brake calipers, rotors and their links, is distributed to the four wheels. The vehicle's inertias at the CG were identified on a pendulum test rig.
The complexity and accuracy of the power train and steering models are rather low because they are only used to obtain reasonable connections between the driver's inputs and the brake/drive torques $M$ and the wheel steer angles $\delta$. The former uses a speed controller and a differential gear to distribute the torques to the four wheels similarly to the series production vehicle. The latter consists of a rack-and-pinion steering system including a rotational spring in the steering shaft.
The study of a full-vehicle model requires the modeling of its environment: the equations of motion can only be solved with a complete description of the physical system, so the interaction between the vehicle and the world must be taken into account. This comprises the interactions between vehicle and driver, vehicle and air, and tire and road. The road is modeled as a flat surface with a road friction coefficient $\mu$. The aerodynamic drag force $F_{\text{drag}}^x = -\frac{1}{2} A \rho C_d v^2$, applied at the center of gravity, approximates the interaction between air and vehicle. These environment models come from the Vehicle Dynamics Library [9]. Only driver models needed to be built up to simulate the objective test maneuvers for the verification in Section 3.
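As a quick numeric check of the drag expression (with assumed mid-size-car values, since the paper does not list $A$, $\rho$ or $C_d$):

```python
# Illustrative evaluation of F_drag = -1/2 * A * rho * C_d * v^2.
# A, rho and C_d below are assumed values, not parameters from the paper.
A, rho, C_d = 2.2, 1.204, 0.30      # frontal area [m^2], air density [kg/m^3], drag coeff.
v = 100.0 / 3.6                     # 100 km/h expressed in m/s
F_drag = -0.5 * A * rho * C_d * v**2
print(f"F_drag = {F_drag:.0f} N")   # about -307 N opposing the motion
```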
The active control actuators are modeled by ideal revolute joints inserted at the rigid connection between the suspension and wheel subsystems. This means the wheels can be manipulated directly and without altering the suspension geometry. Passive systems are represented by constant values as inputs to the ideal actuators.
3 Development and Verification of the Multi-body Vehicle Model
Connecting the subsystems from Section 2 creates the multi-body vehicle model.
Important steps are the definition, design and implementation of the model. An even more important step is to check whether the model matches real vehicle behavior. Therefore, the vehicle and model behavior are compared using objective test maneuvers such as steady state cornering (steady state behavior), steering steps (dynamic behavior in the time domain) and sine sweeps (dynamic behavior in the frequency domain). Body slip angle $\beta$ and velocity $v$ are measured using an optical Correvit sensor; roll $\varphi$ and pitch $\theta$ are measured indirectly by suspension travel sensors at all four wheels; and all translational accelerations $a_x, a_y, a_z$ as well as the rotational rates $\dot{\psi}, \dot{\varphi}, \dot{\theta}$ are measured by a sensor cluster located at the CG. The steering wheel angle $\delta_{sw}$ and the steering wheel torque $M_{sw}$ are measured using an instrumented steering wheel.
The results of the verification, without any fitting of uncertain parameters such as the stiffness of the bushings or the friction coefficient $\mu$ of the road, are presented in Figure 8 and Figure 9.
The main reason for the higher yaw rate generated by the model in both maneuvers is the uncertain road friction coefficient $\mu$. The higher roll in both maneuvers is caused by differing body-spring stiffnesses between the model and the real vehicle.
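Although the paper compares measured and simulated signals graphically, such a comparison can also be scripted. The sketch below (an assumed workflow, not the authors' tooling) resamples a simulated yaw-rate trace onto the measurement time base and reports RMS and peak deviation as simple objective measures of model fit:

```python
import numpy as np

def compare_signals(t_meas, y_meas, t_sim, y_sim):
    # Resample the simulation onto the measurement time base, then return
    # (RMS error, peak absolute error) between the two traces.
    y_sim_resampled = np.interp(t_meas, t_sim, y_sim)
    err = y_sim_resampled - y_meas
    return np.sqrt(np.mean(err**2)), np.max(np.abs(err))

# Illustrative use with synthetic traces (a steering-step-like response):
t = np.linspace(0.0, 5.0, 501)
measured = 0.25 * (1 - np.exp(-3 * np.clip(t - 1.0, 0, None)))   # yaw rate [rad/s]
simulated = 1.08 * measured + 0.005                              # model slightly "too agile"
rmse, peak = compare_signals(t, measured, t, simulated)
print(f"RMSE = {rmse:.4f} rad/s, peak error = {peak:.4f} rad/s")
```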
4 Application of the Model
Exchanging the conventional steering and power train systems of the verified vehicle model and using the ideal actuators described in 2.3 leads to a generic configuration for vehicle dynamics control. All steering angles $\delta$, drive/brake torques $M$, wheel loads $F_z$ and camber angles $\gamma$ (cf. Figure 1) can be used passively or controlled actively by the general allocation approach presented in [2]. A non-linear online optimization calculates the arbitrary parameters of the underdetermined inverses of the over-actuated\textsuperscript{2} plane-motion vehicle model (cf. Figure 1). The number of arbitrary parameters depends on the actuators available for the influencing variables. These arbitrary parameters are used by the non-linear online optimization to minimize the maximum adhesion potential utilization
$$\eta_i^2 = \left( \frac{F_{xi}}{F_{xi\text{max}}} \right)^2 + \left( \frac{F_{yi}}{F_{yi\text{max}}} \right)^2 \tag{3}$$
of the four tires $i \in \{1, \ldots, 4\}$ $(0 \leq \eta_i \leq 1)$, which is approximated by this elliptic relation. The maximum forces $F_{xi\text{max}}$ and $F_{yi\text{max}}$ depend on the wheel load $F_z$ and the camber angle $\gamma$. The control commands from this optimization are used as inputs for the multi-body vehicle model (cf. Figure 10). Inputs for the optimization are the torque and forces $u = [M_{zCG}, F_{xCG}, F_{yCG}]^T$ acting on the center of gravity. The allocation of these forces to the TCPs and the force transfer at the TCPs are optimized with the optimization objective
$$\min \max_i \eta_i \tag{4}$$
This setup facilitates investigation of the reference behavior (best possible allocation and transfer of forces acting on the vehicle) for every configuration of available actuators influencing vehicle dynamics, using the verified multi-body vehicle model.
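To illustrate how the min-max objective (4) can be posed, here is a toy Python sketch (our illustration under assumed geometry, friction limits and demands — not the authors' MATLAB/Simulink implementation). It uses the standard epigraph reformulation: minimize a slack variable $t$ subject to $\eta_i \leq t$ and the planar force/moment balance at the CG:

```python
import numpy as np
from scipy.optimize import minimize

# TCP positions relative to the CG (x forward, y left), metres -- assumed values.
pos = np.array([[1.4, 0.75], [1.4, -0.75], [-1.4, 0.75], [-1.4, -0.75]])
Fx_max = np.full(4, 4000.0)        # per-tire force limits [N]; stand-ins for
Fy_max = np.full(4, 4500.0)        # F_ximax(F_z, gamma) and F_yimax(F_z, gamma)
u = np.array([1500.0, 1000.0, 6000.0])   # demanded M_zCG [Nm], F_xCG [N], F_yCG [N]

def eta(z):                        # z = [Fx1..Fx4, Fy1..Fy4], eq. (3) per tire
    Fx, Fy = z[:4], z[4:]
    return np.sqrt((Fx / Fx_max)**2 + (Fy / Fy_max)**2)

def objective(x):                  # x = [z, t]; epigraph trick: minimize t
    return x[-1]

def balance(x):                    # planar force and yaw-moment balance at the CG
    Fx, Fy = x[:4], x[4:8]
    Mz = np.sum(pos[:, 0] * Fy - pos[:, 1] * Fx)
    return np.array([Mz - u[0], Fx.sum() - u[1], Fy.sum() - u[2]])

cons = [{"type": "eq", "fun": balance},
        {"type": "ineq", "fun": lambda x: x[-1] - eta(x[:8])}]   # eta_i <= t
x0 = np.concatenate([np.full(8, 100.0), [0.5]])
sol = minimize(objective, x0, constraints=cons, method="SLSQP")
print("max eta_i =", round(float(sol.x[-1]), 3))
print("per-tire eta:", eta(sol.x[:8]).round(3))
```

An online implementation would additionally enforce the actuator limits and actuator dynamics mentioned below, and would be solved at controller rate rather than once.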
Changing the number of available actuators during a driving maneuver allows investigation of the impact of actuator failures on vehicle dynamics, which may support reliability investigations of active vehicle dynamics systems.
Such an investigation is presented as an exemplary application of the multi-body vehicle model (cf. Figure 11, Figure 12 and Figure 13). The front right steering actuator of a vehicle (equipped with four steering actuators, four drive torque actuators and four actuators to control the wheel load distribution) fails. The failure occurs after one second of an open-loop single lane change driven at a constant speed of $v = 70$ km/h (cf. Figure 13). All actuators and actuator dynamics are limited in the optimization (4) to the values of actually available actuators. The actuator fails in the worst-case situation, at the maximum steering angle of $\delta = 1.8$ degrees, and is assumed to be self-locked after the failure has occurred. To compensate for this fixed steering error, the vehicle exhibits the same amount of body slip angle, as can be seen in Figure 13. When steering back toward straight-line driving, the failed wheel becomes the outer wheel (with respect to the center of the corner) and carries more wheel load. Therefore, the optimization reduces the wheel load at this wheel as much as possible. However, the steering angles and drive torques at the other wheels are much higher than in the usual driving condition without failure (right side of Figures 11, 12 and 13). The maneuver presented is close to the physical driving limit, since two tires have already reached their maximum possible adhesion potential utilization (cf. Figure 12). The lateral
\textsuperscript{2}cf. [12] for definition and examples
acceleration $a_y$ reaches 4 m/s² at the minimum and maximum of the yaw rate $\dot{\psi}$. This performance meets the actual specification of run-flat tires.
5 Conclusion and Outlook
The development and verification of a multi-body model of a series production vehicle has been presented. This vehicle model was used to investigate and compare any possible configuration of actuators to control vehicle dynamics. In this context, Modelica/Dymola has proven to be a practical environment for future development of vehicle models including mechatronic and hydraulic actuators, multi-body suspensions and controllers. To develop, verify and use Modelica/Dymola models efficiently, however, interfaces to CAD systems for importing CAD model data and interfaces to real-time environments are desirable. In this project in particular, a library for non-linear optimization was missing, which is why that part of the presented control approach was realized in MATLAB/Simulink. The presented Modelica/Dymola multi-body model was embedded into MATLAB/Simulink using the Simulink Interface of Dymola.
The outlook for the presented project is to improve the match between model and vehicle by means of an error minimization. Likely parameters for this error minimization are the uncertain stiffnesses of the bushings and the road friction coefficient $\mu$.
The presented example of a steering actuator failure could be improved by adding a strategy of actively controlled camber (assuming the availability of such a system) that counteracts the failure force generated by the tire side slip angle $\alpha$ of the failed wheel.
References
[1] J. Andreasson, C. Knobel, and T. Bünte. On Road Vehicle Motion Control - striving towards synergy. In Proc. of 8th International Symposium on Advanced Vehicle Control AVEC, 2006.
[2] C. Knobel, A. Pruckner, and T. Bünte. Optimized Control Allocation - A General Approach to Control and to Investigate the Motion of Overactuated Vehicles. Submitted paper for 4th IFAC Symposium on Mechatronic Systems, September 2006.
[3] Y. Hattori, K. Koibuchi, and T. Yokoyama. Force and Moment Control with Nonlinear Optimum Distribution for Vehicle Dynamics. In Proc. of 6th International Symposium on Advanced Vehicle Control AVEC, 2002.
[4] E. Ono, Y. Hattori, Y. Muragishi, and K. Koibuchi. Vehicle Dynamics Control Based on Tire Grip Margin. In Proc. of 7th International Symposium on Advanced Vehicle Control AVEC, 2004.
[5] R. Orend. Steuerung der Fahrzeugbewegung mit minimaler Kraftschlussausnutzung an allen vier Rädern. In Proc. of Steuerung und Regelung von Fahrzeugen und Motoren - Autoreg. VDI, 2004.
[6] J. Andreasson and T. Bünte. Global Chassis Control Based on Inverse Vehicle Dynamics Models. Presented at XIX IAVSD World Congress, October 2005.
[7] T. Bünte and J. Andreasson. Integrierte Fahrwerkregelung mit minimierter Kraftschlussausnutzung auf der Basis dynamischer Inversion. In Proc. of Steuerung und Regelung von Fahrzeugen und Motoren - Autoreg, Wiesloch, 2006.
[8] M. Otter, H. Elmqvist, and S. E. Mattsson. The New Modelica MultiBody Library. In Proc. of the 3rd International Modelica Conference, pages 311–330, 2003.
[9] J. Andreasson. Vehicle dynamics library. In Proc. of the 3rd International Modelica Conference. Modelica Association, 2003.
[10] H. B. Pacejka. Tyre and Vehicle Dynamics. Butterworth-Heinemann, Oxford, 2002.
[11] A. Woodruff. Camber Prevention Methods using a Modelica/Dymola Multi-body Vehicle Model. Master’s thesis, Mechanical and Materials Engineering Department, Queen’s University, Kingston, ON, Canada, 2006.
[12] M. Valášek. Design and Control of Under-Actuated and Over-Actuated Mechanical Systems - Challenges of Mechanics and Mechatronics. Vehicle System Dynamics, 40:37–50, 2003.
|
Liebling from the Health and Human Services Finance Division to which was referred:
H. F. No. 4, A bill for an act relating to health; prohibiting a manufacturer or wholesale drug distributor from charging unconscionable prices for prescription drugs; requiring the Board of Pharmacy, the commissioner of human services, and health plan companies to notify the attorney general of certain prescription drug price increases; authorizing the attorney general to take action against drug manufacturers and wholesalers related to certain price increases; imposing civil penalties; amending Minnesota Statutes 2018, sections 8.31, subdivision 1; 151.071, subdivisions 1, 2; proposing coding for new law in Minnesota Statutes, chapter 151.
Reported the same back with the following amendments:
Page 2, delete section 3 and insert:
"Sec. 3. Minnesota Statutes 2019 Supplement, section 151.071, subdivision 2, is amended to read:
Subd. 2. **Grounds for disciplinary action**. The following conduct is prohibited and is grounds for disciplinary action:
(1) failure to demonstrate the qualifications or satisfy the requirements for a license or registration contained in this chapter or the rules of the board. The burden of proof is on the applicant to demonstrate such qualifications or satisfaction of such requirements;
(2) obtaining a license by fraud or by misleading the board in any way during the application process or obtaining a license by cheating, or attempting to subvert the licensing examination process. Conduct that subverts or attempts to subvert the licensing examination process includes, but is not limited to: (i) conduct that violates the security of the examination materials, such as removing examination materials from the examination room or having unauthorized possession of any portion of a future, current, or previously administered licensing examination; (ii) conduct that violates the standard of test administration, such as communicating with another examinee during administration of the examination, copying another examinee's answers, permitting another examinee to copy one's answers, or
possessing unauthorized materials; or (iii) impersonating an examinee or permitting an impersonator to take the examination on one's own behalf;
(3) for a pharmacist, pharmacy technician, pharmacist intern, applicant for a pharmacist or pharmacy license, or applicant for a pharmacy technician or pharmacist intern registration, conviction of a felony reasonably related to the practice of pharmacy. Conviction as used in this subdivision includes a conviction of an offense that if committed in this state would be deemed a felony without regard to its designation elsewhere, or a criminal proceeding where a finding or verdict of guilt is made or returned but the adjudication of guilt is either withheld or not entered thereon. The board may delay the issuance of a new license or registration if the applicant has been charged with a felony until the matter has been adjudicated;
(4) for a facility, other than a pharmacy, licensed or registered by the board, if an owner or applicant is convicted of a felony reasonably related to the operation of the facility. The board may delay the issuance of a new license or registration if the owner or applicant has been charged with a felony until the matter has been adjudicated;
(5) for a controlled substance researcher, conviction of a felony reasonably related to controlled substances or to the practice of the researcher's profession. The board may delay the issuance of a registration if the applicant has been charged with a felony until the matter has been adjudicated;
(6) disciplinary action taken by another state or by one of this state's health licensing agencies:
(i) revocation, suspension, restriction, limitation, or other disciplinary action against a license or registration in another state or jurisdiction, failure to report to the board that charges or allegations regarding the person's license or registration have been brought in another state or jurisdiction, or having been refused a license or registration by any other state or jurisdiction. The board may delay the issuance of a new license or registration if an investigation or disciplinary action is pending in another state or jurisdiction until the investigation or action has been dismissed or otherwise resolved; and
(ii) revocation, suspension, restriction, limitation, or other disciplinary action against a license or registration issued by another of this state's health licensing agencies, failure to report to the board that charges regarding the person's license or registration have been brought by another of this state's health licensing agencies, or having been refused a license or registration by another of this state's health licensing agencies. The board may delay the issuance of a new license or registration if a disciplinary action is pending before another
of this state's health licensing agencies until the action has been dismissed or otherwise resolved;
(7) for a pharmacist, pharmacy, pharmacy technician, or pharmacist intern, violation of any order of the board, of any of the provisions of this chapter or any rules of the board or violation of any federal, state, or local law or rule reasonably pertaining to the practice of pharmacy;
(8) for a facility, other than a pharmacy, licensed by the board, violations of any order of the board, of any of the provisions of this chapter or the rules of the board or violation of any federal, state, or local law relating to the operation of the facility;
(9) engaging in any unethical conduct; conduct likely to deceive, defraud, or harm the public, or demonstrating a willful or careless disregard for the health, welfare, or safety of a patient; or pharmacy practice that is professionally incompetent, in that it may create unnecessary danger to any patient's life, health, or safety, in any of which cases, proof of actual injury need not be established;
(10) aiding or abetting an unlicensed person in the practice of pharmacy, except that it is not a violation of this clause for a pharmacist to supervise a properly registered pharmacy technician or pharmacist intern if that person is performing duties allowed by this chapter or the rules of the board;
(11) for an individual licensed or registered by the board, adjudication as mentally ill or developmentally disabled, or as a chemically dependent person, a person dangerous to the public, a sexually dangerous person, or a person who has a sexual psychopathic personality, by a court of competent jurisdiction, within or without this state. Such adjudication shall automatically suspend a license for the duration thereof unless the board orders otherwise;
(12) for a pharmacist or pharmacy intern, engaging in unprofessional conduct as specified in the board's rules. In the case of a pharmacy technician, engaging in conduct specified in board rules that would be unprofessional if it were engaged in by a pharmacist or pharmacist intern or performing duties specifically reserved for pharmacists under this chapter or the rules of the board;
(13) for a pharmacy, operation of the pharmacy without a pharmacist present and on duty except as allowed by a variance approved by the board;
(14) for a pharmacist, the inability to practice pharmacy with reasonable skill and safety to patients by reason of illness, use of alcohol, drugs, narcotics, chemicals, or any other type of material or as a result of any mental or physical condition, including deterioration through the aging process or loss of motor skills. In the case of registered pharmacy technicians, pharmacist interns, or controlled substance researchers, the inability to carry out duties allowed under this chapter or the rules of the board with reasonable skill and safety to patients by reason of illness, use of alcohol, drugs, narcotics, chemicals, or any other type of material or as a result of any mental or physical condition, including deterioration through the aging process or loss of motor skills;
(15) for a pharmacist, pharmacy, pharmacist intern, pharmacy technician, medical gas distributor, or controlled substance researcher, revealing a privileged communication from or relating to a patient except when otherwise required or permitted by law;
(16) for a pharmacist or pharmacy, improper management of patient records, including failure to maintain adequate patient records, to comply with a patient's request made pursuant to sections 144.291 to 144.298, or to furnish a patient record or report required by law;
(17) fee splitting, including without limitation:
(i) paying, offering to pay, receiving, or agreeing to receive, a commission, rebate, kickback, or other form of remuneration, directly or indirectly, for the referral of patients;
(ii) referring a patient to any health care provider as defined in sections 144.291 to 144.298 in which the licensee or registrant has a financial or economic interest as defined in section 144.6521, subdivision 3, unless the licensee or registrant has disclosed the licensee's or registrant's financial or economic interest in accordance with section 144.6521; and
(iii) any arrangement through which a pharmacy, in which the prescribing practitioner does not have a significant ownership interest, fills a prescription drug order and the prescribing practitioner is involved in any manner, directly or indirectly, in setting the price for the filled prescription that is charged to the patient, the patient's insurer or pharmacy benefit manager, or other person paying for the prescription or, in the case of veterinary patients, the price for the filled prescription that is charged to the client or other person paying for the prescription, except that a veterinarian and a pharmacy may enter into such an arrangement provided that the client or other person paying for the prescription is notified, in writing and with each prescription dispensed, about the arrangement, unless such arrangement involves pharmacy services provided for livestock, poultry, and agricultural production systems, in which case client notification would not be required;
(18) engaging in abusive or fraudulent billing practices, including violations of the federal Medicare and Medicaid laws or state medical assistance laws or rules;
(19) engaging in conduct with a patient that is sexual or may reasonably be interpreted by the patient as sexual, or in any verbal behavior that is seductive or sexually demeaning to a patient;
(20) failure to make reports as required by section 151.072 or to cooperate with an investigation of the board as required by section 151.074;
(21) knowingly providing false or misleading information that is directly related to the care of a patient unless done for an accepted therapeutic purpose such as the dispensing and administration of a placebo;
(22) aiding suicide or aiding attempted suicide in violation of section 609.215 as established by any of the following:
(i) a copy of the record of criminal conviction or plea of guilty for a felony in violation of section 609.215, subdivision 1 or 2;
(ii) a copy of the record of a judgment of contempt of court for violating an injunction issued under section 609.215, subdivision 4;
(iii) a copy of the record of a judgment assessing damages under section 609.215, subdivision 5; or
(iv) a finding by the board that the person violated section 609.215, subdivision 1 or 2.
The board shall investigate any complaint of a violation of section 609.215, subdivision 1 or 2;
(23) for a pharmacist, practice of pharmacy under a lapsed or nonrenewed license. For a pharmacist intern, pharmacy technician, or controlled substance researcher, performing duties permitted to such individuals by this chapter or the rules of the board under a lapsed or nonrenewed registration. For a facility required to be licensed under this chapter, operation of the facility under a lapsed or nonrenewed license or registration;
(24) for a pharmacist, pharmacist intern, or pharmacy technician, termination or discharge from the health professionals services program for reasons other than the satisfactory completion of the program; and
(25) for a manufacturer or wholesale drug distributor, a violation of section 151.462."
"Sec. 5. APPROPRIATION.
$46,000 in fiscal year 2021 is appropriated from the general fund to the commissioner of human services to implement Minnesota Statutes, section 151.462. The base for this appropriation is $52,000 in fiscal year 2022 and $52,000 in fiscal year 2023. There is federal financial participation of $15,000 in fiscal year 2021 and $17,000 per year thereafter."
Amend the title as follows:
Page 1, line 7, after the second semicolon, insert "appropriating money;"
Correct the title numbers accordingly
With the recommendation that when so amended the bill be returned to the Committee on Ways and Means.
This Division action taken February 27, 2020
[Signature]
Chair
|
Late-Holocene Environmental History in the Northeastern Caribbean: Multi-proxy Evidence From Two Small Lakes on the Southern Slope of the Cordillera Central, Dominican Republic
Chad Steven Lane
University of Tennessee, Knoxville
Recommended Citation
Lane, Chad Steven, "Late-Holocene Environmental History in the Northeastern Caribbean: Multi-proxy Evidence From Two Small Lakes on the Southern Slope of the Cordillera Central, Dominican Republic." PhD diss., University of Tennessee, 2007.
https://trace.tennessee.edu/utk_graddiss/4248
To the Graduate Council:
I am submitting herewith a dissertation written by Chad Steven Lane entitled "Late-Holocene Environmental History in the Northeastern Caribbean: Multi-proxy Evidence From Two Small Lakes on the Southern Slope of the Cordillera Central, Dominican Republic." I have examined the final electronic copy of this dissertation for form and content and recommend that it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, with a major in Geography.
Sally P. Horn, Claudia I. Mora, Major Professor
We have read this dissertation and recommend its acceptance:
Henri D. Grissino-Mayer, Kenneth H. Orvis
Accepted for the Council:
Carolyn R. Hodges
Vice Provost and Dean of the Graduate School
(Original signatures are on file with official student records.)
To the Graduate Council:
We are submitting herewith a dissertation written by Chad Steven Lane entitled "Late-Holocene environmental history in the northeastern Caribbean: Multi-proxy evidence from two small lakes on the southern slope of the Cordillera Central, Dominican Republic." We have examined the final electronic copy of the dissertation for form and content and recommend that it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, with a major in Geography.
Sally P. Horn, Major Professor
Claudia I. Mora, Major Professor
We have read this dissertation and recommend its acceptance:
Dr. Henri D. Grissino-Mayer
Dr. Kenneth H. Orvis
Acceptance for the Council:
Vice Provost and Dean of the Graduate School
LATE HOLOCENE ENVIRONMENTAL HISTORY IN THE NORTHEASTERN CARIBBEAN: MULTI-PROXY EVIDENCE FROM TWO SMALL LAKES ON THE SOUTHERN SLOPE OF THE CORDILLERA CENTRAL, DOMINICAN REPUBLIC
A Dissertation Presented for the Doctor of Philosophy Degree University of Tennessee, Knoxville
Chad Steven Lane May 2007
The following is a list of the most important and frequently used terms in the field of computer science:
1. Algorithm: A step-by-step procedure for solving a problem or performing a task.
2. Data Structure: A way of organizing data that allows efficient access, modification, and manipulation.
3. Database: An organized collection of data stored in a computer system.
4. Database Management System (DBMS): Software that manages databases and provides an interface for users to interact with them.
5. Encryption: The process of converting information into a coded form so that it can be securely transmitted or stored.
6. Hashing: A technique for mapping data of arbitrary size to fixed-size values (see the short sketch after this list).
7. Interface: A way for two systems to communicate with each other.
8. Object-Oriented Programming (OOP): A programming paradigm that emphasizes the use of objects to represent real-world entities and their interactions.
9. Protocol: A set of rules that govern how data is transmitted between two systems.
10. Query: A request for information from a database.
11. Security: The protection of data from unauthorized access, modification, or destruction.
12. Software: A set of instructions that tell a computer what to do.
13. System: A collection of hardware and software components that work together to perform a specific task.
14. User Interface (UI): The part of a computer program that interacts with the user.
15. Virtual Machine (VM): A software implementation of a computer system that runs on top of another computer system.
16. Web Application: A software application that runs on a web server and is accessed through a web browser.
17. XML: eXtensible Markup Language, a markup language used to structure and organize data in a web application.
18. API: Application Programming Interface, a set of rules and protocols for building software applications.
19. Cloud Computing: The delivery of computing resources over the internet.
20. Big Data: Large volumes of data that require specialized techniques for analysis and management.
21. Machine Learning: A subset of artificial intelligence that focuses on developing algorithms that can learn from data and make predictions or decisions without being explicitly programmed.
22. Natural Language Processing (NLP): A field of artificial intelligence that focuses on enabling computers to understand and generate human language.
23. Robotics: The design, construction, and operation of robots.
24. Internet of Things (IoT): The interconnection of physical devices, vehicles, appliances, and other items with the internet, allowing them to exchange data and perform actions autonomously.
25. Quantum Computing: A type of computing that uses quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data.
26. Blockchain: A decentralized digital ledger that records transactions across many computers in such a way that the registered transactions cannot be altered retroactively.
27. Artificial Intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems.
28. Deep Learning: A subset of machine learning that uses neural networks to learn from large amounts of data.
29. Computer Vision: The ability of a computer to interpret and understand visual information from the world around it.
30. Natural Language Generation (NLG): The creation of natural language text from data or other sources.
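As a concrete illustration of term 6 (hashing), the short Python sketch below maps inputs of arbitrary size to a fixed-size value; SHA-256 via the standard hashlib module is just one common choice of hash function:

```python
# Hashing demo: inputs of any length map to a fixed-size (256-bit) digest.
import hashlib

for message in (b"a", b"a much longer message of arbitrary size"):
    digest = hashlib.sha256(message).hexdigest()
    print(len(message), "->", len(digest), "hex chars:", digest[:16] + "...")
```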
ACKNOWLEDGEMENTS
I have many people to thank for their guidance and support during my dissertation research. I wish to thank my co-advisors, Drs. Sally Horn and Claudia Mora, and my dissertation committee members, Drs. Ken Orvis and Henri Grissino-Mayer, for advice and guidance in conducting my dissertation research.
I feel that my graduate program was unique and more effective because of the co-advising I received from Drs. Horn and Mora. The lines drawn between disciplines in the natural sciences are rapidly dissipating, and my multidisciplinary Ph.D. experience, guided by faculty from two departments, has better prepared me for my future in academia. I want to also thank Drs. Horn and Mora for making excellent facilities available and for other support for my dissertation research.
I want to thank Dr. Sally Horn for her willingness to always go well above and beyond the call of duty to help her students. Dr. Horn is no stranger to late-night editing marathons that are great for meeting deadlines but also incredibly accurate, assuring her students end up with the best documents possible. Dr. Horn is also always on the lookout for good student opportunities outside of the university and is responsible for introducing me to the world of grant writing.
Dr. Claudia Mora opened my eyes to the world of stable isotope geochemistry, which I plan to continue to explore for the rest of my professional career. More importantly, Dr. Mora has proven to be an incredibly supportive and productive advisor who is always pushing me to meet my potential. She is very
aware of the research needs of her students and is more than willing to help her students obtain any necessary knowledge or materials that might be required. I look forward to continuing collaborations with Dr. Mora, as well as Dr. Horn, well into the future.
I want to thank Dr. Ken Orvis for his endless support in the laboratory, classroom, and especially in the field. Dr. Orvis' wide-ranging knowledge of all facets of paleoenvironmental research, and of a wide range of other topics, is incredible. Dr. Orvis has proved to be incredibly patient in the classroom and the field. He doesn't even get upset when you dump a bucket of muddy, smelly water on his head in an unbearably hot, methane-rich swamp in the middle of the Dominican Republic.
Dr. Grissino-Mayer provided helpful editorial assistance that strengthened this dissertation. He also introduced me to techniques of tree-ring analyses that I will use in future research. Dr. Grissino-Mayer’s passion for his profession is largely unmatched and it shows in his research and his students.
I also want to express my utmost appreciation to Dr. Zheng-Hua Li. As the research associate in the stable isotope geochemistry laboratory, Dr. Li directly supervised my stable isotope analyses. Dr. Li is an excellent researcher and lab manager who keeps the stable isotope geochemistry laboratory running smoothly despite the fact that there are many projects, including his own, continually in progress. Going well above and beyond the call of duty, Dr. Li was more than willing to help me analyze and interpret isotopic data.
I was supported by a Hilton-Smith Ph.D. Fellowship and a Yates Dissertation Fellowship from the University of Tennessee during my tenure as a doctoral student, along with appointments as a teaching assistant and associate (lecturer). My dissertation research was part of a larger study, funded by grants to K. Orvis and S. Horn from the National Geographic Society, and to S. Horn, K. Orvis, and C. Mora from the National Science Foundation (BCS-0550382). The latter grant provided a graduate research assistantship for the latter part of my Ph.D. program. Isotopic analyses (Chapters 3 and 4) were also supported by a grant to C. Mora from the National Science Foundation (EAR-0004104). Some laboratory analyses and equipment were supported by research grants from the Association of American Geographers (AAG) and the Biogeography Specialty Group (BSG) of the AAG. Two undergraduate students who assisted me with laboratory analyses, Katie Milam and John Thomasson, were supported by a future faculty grant that I received from the Academic Keys Foundation.
Travel to national meetings to present my dissertation research was partially funded by NSF grant BCS-0550382, as well as by grants from the Graduate School, the College of Arts and Sciences, the Department of Geography, and the Carden Fund in the Department of Earth and Planetary Sciences, all at the University of Tennessee. Further travel support to national meetings was provided by the BSG. My attendance at a stable isotope ecology short course at the University of Utah was made possible by C. Mora and the Carden Fund in the Department of Earth and Planetary Sciences. Partial support for travel and tuition to attend a Natural Environment Research Council short course on ostracod
analyses at University College London was provided by the Stewart K. McCroskey Memorial Fund in the Department of Geography, and by S. Horn, K. Orvis, and C. Mora.
Many Dominicans provided assistance and logistical support for this project and related projects in the Dominican Republic. Andrés Ferrer (former director of the Moscoso Puello Foundation; currently the Country Director for The Nature Conservancy in the Dominican Republic) and the Moscoso Puello Foundation, a non-profit conservation group in the Dominican Republic, were instrumental in helping Drs. Horn and Orvis obtain research permits and in providing the necessary infrastructure for research in the Dominican Republic. Ricardo Garcia of the National Herbarium identified a small collection of plant specimens made to help identify pollen in the lake sediments. Felipe Garcia and his family kindly assisted us with field work at Las Lagunas and also allowed us to camp on their property for extended lengths of time.
S. Horn and K. Orvis led expeditions to the Dominican Republic in the summer of 2002, summer of 2003, and winter of 2004 to conduct reconnaissance and collect sediment cores from Laguna Castilla, Laguna de Salvador, and other lakes of the Las Lagunas region. Field assistance was provided by my graduate student colleagues Duane Cozadd in 2002 and Jeff Dahoda in 2004.
Laboratory assistance, helpful discussion, and moral support was also provided by my fellow graduate and undergraduate students Zachary Taylor, Kyle Schlachter, Martin Arford, Katie Milam, John Thomasson, Duane Cozadd, Jason Graham, Dana Miller, Whitney Kocis, Dave West, Allison Stork, Brock Remus,
and Joe Burgess. I am especially grateful to Martin Arford and Duane Cozadd, who were never hesitant to help me out with pollen identifications, and Katie Milam, John Thomasson, and Jason Graham, who spent endless hours as undergraduate research assistants helping me pick out and analyze ostracods and charophyte oospores. I was assisted in ostracod identifications by Dr. Jonathon Holmes (University College London) and am grateful to him for his time and his willingness to share his expertise. Dr. Lee Newsom (Pennsylvania State University) was kind enough to share her knowledge of Caribbean archaeology, which added greatly to this dissertation. I must also thank the Departments of Geography and Earth and Planetary Sciences for providing outstanding programs and supportive environments that promote academic and scholastic excellence.
Last, but certainly not least, I am extremely grateful for the love and support I have received from my family. My parents, Steven and Martha, are the two most supportive and caring parents any child could ever hope for and have made numerous avenues of success available to me throughout my life. Without their support and confidence I would have never even imagined that I might one day be getting a Ph.D. Finally, I owe a special thanks to my wife Gretchen whose love, support, and patience has kept me level-headed and motivated over the last four years of my graduate career, and who has also always made sure that I remembered to have fun.
ABSTRACT
This dissertation presents multi-proxy evidence of paleoenvironmental change preserved in sediment records recovered from two lakes on the southern (Caribbean) slope of the Cordillera Central in the Dominican Republic: Laguna Castilla (18°47'51" N, 70°52'33" W, 976 m) and Laguna de Salvador (18°47'45" N, 70°53'13" W, 990 m).
The Castilla and Salvador sediment records contain evidence of prehistoric forest clearance and agriculture, including abundant maize pollen, dating back to around A.D. 1060. These pollen grains constitute the earliest evidence of maize agriculture from the interior of Hispaniola, and represent some of the earliest evidence of maize agriculture from the Caribbean as a whole. This finding is geographically significant because it suggests that the prehistoric people who occupied the interior of the island may have relied more on maize than their coastal counterparts.
The abundance of maize pollen in the sediment records, and the high rates of sediment accumulation in the lakes, provide an ideal situation for testing the sensitivity of stable carbon isotope signatures of total organic carbon ($\delta^{13}$C$_{\text{TOC}}$) in lake sediments to variations in the spatial scale or intensity of agricultural activities. Close correspondence between $\delta^{13}$C$_{\text{TOC}}$ values and maize pollen concentrations in the Castilla sediment record indicates a close relationship between $\delta^{13}$C$_{\text{TOC}}$ signatures and the scale of maize cultivation. Correlations between $\delta^{13}$C$_{\text{TOC}}$ signatures and mineral influx also highlight the sensitivity of the $\delta^{13}$C$_{\text{TOC}}$ record to variations in allochthonous carbon delivery.
More detailed multi-proxy analyses of the Castilla and Salvador sediment records indicate extreme shifts in hydrology, vegetation, and disturbance regimes in response to climate change and human activity in the watersheds over the last ~3000 years. Close correspondence between the hydrological history of Castilla, Salvador, and other circum-Caribbean study sites indicates that much of the hydrologic variability was associated with variations in the mean boreal summer position of the Intertropical Convergence Zone. Human occupation of the Castilla and Salvador watersheds appears to be closely linked to severe drought events and may indicate larger-scale cultural responses to severe precipitation variability on the island of Hispaniola.
# TABLE OF CONTENTS
| CHAPTER | PAGE |
|------------------------------------------------------------------------|------|
| 1. INTRODUCTION AND RESEARCH SETTING | 1 |
| Larger Framework of Dissertation | 3 |
| Dissertation Organization | 7 |
| Environmental Setting | 9 |
| Late Holocene Paleoclimates of the Circum-Caribbean | 23 |
| Prehistoric Human Occupation and Agriculture on the Island of Hispaniola| 59 |
| 2. THE EARLIEST EVIDENCE OF MAIZE AGRICULTURE FROM | |
| THE INTERIOR OF HISPANIOLA | 63 |
| Introduction | 63 |
| Methods | 68 |
| Results | 70 |
| Discussion and Conclusions | 72 |
| 3. SENSITIVITY OF SEDIMENTARY STABLE CARBON | |
| ISOTOPES IN A SMALL NEOTROPICAL LAKE TO PREHISTORIC | |
| FOREST CLEARANCE AND MAIZE AGRICULTURE | 79 |
| Introduction | 79 |
| Study Site | 83 |
| Methods | 85 |
| Results | 89 |
| Discussion | 99 |
| Conclusions | 106 |
| 4. MULTI-PROXY ANALYSIS OF LATE-HOLOCENE | |
| PALEOENVIRONMENTAL CHANGE IN THE MID-ELEVATIONS | |
| OF THE CORDILLERA CENTRAL, DOMINICAN REPUBLIC | 109 |
| Introduction | 109 |
| Study Area | 110 |
| Methods | 114 |
| Results | 118 |
| Discussion | 142 |
| Summary and Conclusions | 172 |
| 5. CONCLUSIONS AND SUMMARY | 179 |
| LIST OF REFERENCES | 187 |
| APPENDIX A | 219 |
| VITA | 225 |
# LIST OF TABLES
| TABLE | PAGE |
|----------------------------------------------------------------------|------|
| **CHAPTER 2: THE EARLIEST EVIDENCE OF MAIZE AGRICULTURE FROM THE INTERIOR OF HISPANIOLA** | |
| 2.1. Stratigraphic position, abundance, and dimensions of maize pollen grains from the Laguna Castilla pre-modern maize interval | 71 |
| 2.2. Radiocarbon determinations and calibrations for Laguna Castilla | 74 |
| 2.3. Stratigraphic position, abundance, and dimensions of maize pollen grains from Laguna de Salvador | 75 |
| 2.4. Radiocarbon determinations and calibrations for Laguna de Salvador | 76 |
| **CHAPTER 3: SENSITIVITY OF SEDIMENTARY STABLE CARBON ISOTOPES IN A SMALL NEOTROPICAL LAKE TO PREHISTORIC FOREST CLEARANCE AND MAIZE AGRICULTURE** | |
| 3.1. Radiocarbon determinations and calibrations for Laguna Castilla | 93 |
| **CHAPTER 4: MULTI-PROXY ANALYSIS OF LATE-HOLOCENE PALEOENVIRONMENTAL CHANGE IN THE MID-ELEVATIONS OF THE CORDILLERA CENTRAL, DOMINICAN REPUBLIC** | |
| 4.1. Radiocarbon determinations and calibrations for Laguna Castilla | 123 |
| 4.2. Radiocarbon determinations and calibrations for Laguna de Salvador | 124 |
| 4.3. Biogenic carbonate isotope sampling information | 141 |
| 4.4. Selected limnological data for the lakes of Las Lagunas | 147 |
| 4.5. Climate summary for the Las Lagunas area | 173 |
# LIST OF FIGURES
| FIGURE | PAGE |
|------------------------------------------------------------------------|------|
| **CHAPTER 1: INTRODUCTION AND RESEARCH SETTING** | |
| 1.1. The locations of the Laguna Saladilla and Las Lagunas study sites on the island of Hispaniola and dominant moisture sources for the island | 4 |
| 1.2. The island of Hispaniola with sites mentioned in text | 10 |
| 1.3. Map of Las Lagunas | 12 |
| 1.4. Qualitative summary diagram of centennial-scale climate variability in the circum-Caribbean during the Holocene | 27 |
| **CHAPTER 2: THE EARLIEST EVIDENCE OF MAIZE AGRICULTURE FROM THE INTERIOR OF HISPANIOLA** | |
| 2.1. The locations of Hispaniolan study sites containing macrofossil or microfossil evidence of maize agriculture prior to A.D. 1500 | 65 |
| 2.2. Stratigraphy of the Laguna Castilla sediment core and the stratigraphic position of pollen samples within the pre-modern maize interval | 73 |
| **CHAPTER 3: SENSITIVITY OF SEDIMENTARY STABLE CARBON ISOTOPES IN A SMALL NEOTROPICAL LAKE TO PREHISTORIC FOREST CLEARANCE AND MAIZE AGRICULTURE** | |
| 3.1. Location of the Dominican Republic and Laguna Castilla | 84 |
| 3.2. Photograph of Laguna Castilla and the surrounding landscape | 86 |
| 3.3. Stratigraphy and radiocarbon chronology of the Laguna Castilla sediment core | 90 |
| 3.4. Age-depth graph for the Laguna Castilla sediment core based on weighted means of the probability distributions for radiocarbon dates | 95 |
| 3.5. Summary diagram of Laguna Castilla sedimentary $\delta^{13}$C$_{TOC}$ values, maize pollen concentrations, and mineral influx variation | 96 |
| 3.6. Comparison of Laguna Castilla sedimentary $\delta^{13}$C$_{\text{TOC}}$ values and maize pollen concentrations | 101 |
| **CHAPTER 4: MULTI-PROXY ANALYSIS OF LATE-HOLOCENE PALEOENVIRONMENTAL CHANGE IN THE MID-ELEVATIONS OF THE CORDILLERA CENTRAL, DOMINICAN REPUBLIC** | |
| 4.1. The location of the island of Hispaniola; the Las Lagunas study site within the Dominican Republic, nearby city of Azua, and capital city of Santo Domingo; and a topographic map of the Las Lagunas area | 112 |
| 4.2. Sediment stratigraphy and chronology of the Laguna Castilla and Laguna de Salvador sediment cores | 119 |
| 4.3. Diagram showing sediment bulk density (g/cm$^3$), organic content (% dry mass), carbonate content (% dry mass), water content (% wet mass), mineral influx (mg/cm$^2$/yr), and organic carbon influx (mg/cm$^2$/yr) for the Laguna Castilla and Laguna de Salvador sediment cores | 120 |
| 4.4. The weighted mean of the calibrated radiocarbon ages (cal yr B.P.) plotted against depth for the Laguna Castilla and Laguna de Salvador sediment cores | 125 |
| 4.5. Diagram showing pollen and spore concentrations, influx, and indeterminate pollen percentages for the Laguna Castilla and Laguna de Salvador sediment records | 128 |
| 4.6. Pollen percentage diagram for arborescent, herbaceous, and aquatic taxa in the Laguna Castilla sediment core | 130 |
| 4.7. Pollen percentage diagram for arborescent, herbaceous, and aquatic taxa of the Laguna de Salvador sediment core | 131 |
| 4.8. Stable carbon isotope composition of bulk sediments from Laguna Castilla and Laguna de Salvador plotted against depth and plotted against calibrated age | 133 |
| 4.9. Concentration (valves per cm$^3$ wet sediment) of *Cythridella boldii* ostracod valves and the carbon and oxygen isotope composition of *C. boldii* valves in the Laguna Castilla sediment core | 136 |
| 4.10. Concentration (valves per cm$^3$ wet sediment) of *Cythridella boldii* and *Candona* sp. ostracod valves and the carbon and oxygen isotope composition of *C. boldii* and *Candona* sp. valves in the Laguna de Salvador sediment core | 139 |
| 4.11. Comparison of selected Laguna Castilla and Laguna de Salvador proxy data with titanium concentrations from the Cariaco Basin | 153 |
| 4.12. Comparison of mineral influx and biogenic carbonate concentrations for Laguna Castilla and Laguna de Salvador | 160 |
CHAPTER 1
Introduction and Research Setting
Hispaniola (17°30′–19°50′ N, 68°20′–74°30′ W) is the second largest island in the Caribbean, after Cuba, and has the greatest relief and climatic and biological diversity of all Caribbean islands. Elevations range from sea level to the high mountain peaks of the Cordillera Central, which reach over 3000 m elevation (Orvis, 2003). Precipitation totals range from a maximum of ~2500 mm/yr in the northeastern portion of the island to a minimum of ~500 mm/yr in the western portion. The wide range of microclimates and habitats and the geographic isolation of Hispaniola have fostered the development of an incredibly diverse assemblage of organisms and a high level of endemism (Bolay, 1997). In addition, Hispaniola has a compelling human history as it is the geographic epicenter of European contact with the “new world.” It was the only Caribbean island visited on all four of Christopher Columbus’ voyages. In A.D. 1493, Columbus founded the first European settlement in the Americas at La Isabela, which became a springboard for further exploration and settlement throughout the region.
Despite the compelling physical geography, ecology, and human history of Hispaniola, very little is known about the environmental history of the island. Continuous high-resolution records of Holocene paleoenvironmental change on Hispaniola are geographically limited. One focus of research has been Lake Miragoane, a coastal lake on the southern coast of Haiti. A series of
paleoenvironmental analyses have been conducted on a single sediment core recovered from the lake in 1985 (Brenner and Binford, 1988; Hodell et al., 1991; Curtis and Hodell, 1993; Higuera-Gundy, 1991; 1999). A second focus of paleoenvironmental research has been the highlands of the Cordillera Central, where S. Horn, K. Orvis, and collaborators have examined a series of sediment cores from lakes and bogs, as well as soil and geomorphic indicators of paleoenvironmental change and modern pollen-vegetation relationships (Orvis et al., 1997; 2005; Horn et al., 2000; Clark et al., 2002; Kennedy, 2003; Kennedy et al., 2005; 2006). Also in the highlands, J. Speer and H. Grissino-Mayer have joined Orvis, Horn, and Kennedy in investigating the dendrochronological potential of the native pine, *Pinus occidentalis* (Speer et al., 2004). While these records have provided new avenues of research and insights into the impacts of climate variability and shifting disturbance regimes on the island, they could not provide much insight into prehistoric human-environment interactions on the landscape of Hispaniola.
Knowledge of the interrelationships between paleoclimate variability, human populations, and the ecosystems of Hispaniola will only improve with an increase in the number of study sites and areas investigated. Unfortunately, there are only a limited number of natural lakes or other sources of continuous archives of paleoenvironmental change and prehistoric human activity on the island. In this study, I have conducted an in-depth investigation of two mid-elevation lakes in the Cordillera Central in an effort to better understand the interrelationships between climate, ecosystems, and prehistoric human occupants throughout the late Holocene on the island of Hispaniola.
**Larger Framework of Dissertation**
This dissertation is part of a larger study, funded by a grant to S. Horn, K. Orvis, and C. Mora from the National Science Foundation (BCS-0550382) and an earlier award to K. Orvis and S. Horn from the National Geographic Society. The goal of the National Science Foundation study is to use sediment records from two sites, Las Lagunas and Laguna Saladilla, and the unique topography of Hispaniola to reconstruct Holocene atmospheric dynamics of the region and impacts of prehistoric human populations on the island (Figure 1.1).
The island of Hispaniola is influenced by three primary climate influences: (1) trade wind strength and moisture content, (2) polar outbreaks, and (3) the migration of the mean boreal summer position of the Intertropical Convergence Zone (ITCZ). The northeasterly trade winds are the dominant component of the island’s climate. Hispaniola’s trade-wind-related precipitation is heaviest in the northeastern portions of the island and on the windward slopes of the multiple mountain ranges on the island. However, the WNW-ESE trending mountain ranges of the island are very effective barriers to the trade winds (Figure 1.1).
Laguna Saladilla is located along the leeward slope of one of these mountain ranges, the Cordillera Septentrional (Figure 1.1). The barrier formed by the Cordillera Septentrional creates persistent zones of atmospheric subsidence along the leeward slopes of the range and very strong rain shadow conditions locally, hence the desert conditions around Laguna Saladilla.
Figure 1.1. The locations of the Laguna Saladilla and Las Lagunas study sites on the island of Hispaniola and dominant moisture sources for the island. Precipitation delivery to most of the island comes from the northeasterly trade winds. The Laguna Saladilla and Las Lagunas study sites are shielded from trade wind precipitation by the Cordillera Septentrional and Cordillera Central, respectively. Precipitation at Laguna Saladilla is primarily associated with polar air masses, known as “nortes” migrating from North America during the boreal winter. Precipitation at Las Lagunas is primarily associated with intensified sea breezes and increased convective activity during the boreal summer when ITCZ-proximal doldrum conditions dominate. Relief is based on Shuttle Radar Topography Mission 1 elevation data. Lighter shades represent higher elevations. Map provided by K. Orvis.
However, the topography around Laguna Saladilla is open to the WNW, the direction from which polar fronts from North America (nortes) arrive in the boreal winter (Figure 1.1). These polar air outbreaks and fronts are the primary source of precipitation delivery to the Laguna Saladilla area.
The second study area, Las Lagunas, is located on the leeward slope of the largest mountain range on the island of Hispaniola, the Cordillera Central. The Cordillera Central represents an unbroken barrier to the northeast trade winds, and the modern precipitation regime in the Las Lagunas area is dominated by convection fed by sea breeze moisture during the boreal summer when ITCZ-proximal doldrum conditions dominate (Figure 1.1; see further discussion in the Environmental Setting section below).
By analyzing and comparing proxy records of paleoprecipitation recovered from Laguna Saladilla and multiple lakes at the Las Lagunas study site it should be possible to reconstruct variations in two distinct classes of weather and the related atmospheric dynamics driving these weather systems. The Laguna Saladilla sediments should hypothetically provide a record of polar outbreak events while the sediments of lakes around Las Lagunas should hypothetically provide a record of ITCZ migration. By comparing reconstructed paleoprecipitation records from both sites it should be possible to reconstruct variations in polar front intensity and ITCZ migration, along with the interrelationships of these atmospheric dynamics, over extended periods of time.
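To make the intended comparison concrete, the sketch below shows one simple way two such paleoprecipitation proxy records could be aligned and compared. It is a minimal illustration only, assuming hypothetical, already-calibrated proxy series on ascending age scales; the function name, variable names, and window choices are placeholders and are not part of the project’s methods.

```python
# Illustrative sketch only: compares two hypothetical paleoprecipitation proxy
# series by interpolating them onto a common timescale and computing a
# windowed (rolling) Pearson correlation. All names and data are placeholders.
import numpy as np

def windowed_correlation(ages_a, proxy_a, ages_b, proxy_b,
                         step=50, window=500):
    """Interpolate two unevenly sampled proxy records onto a common age grid
    (cal yr B.P., ages must be ascending) and return rolling correlations."""
    start = max(ages_a.min(), ages_b.min())
    stop = min(ages_a.max(), ages_b.max())
    grid = np.arange(start, stop, step)          # common age grid
    a = np.interp(grid, ages_a, proxy_a)         # resample record A
    b = np.interp(grid, ages_b, proxy_b)         # resample record B
    half = int(window / step) // 2               # half-window in grid steps
    corrs = []
    for i in range(half, len(grid) - half):
        seg_a = a[i - half:i + half + 1]
        seg_b = b[i - half:i + half + 1]
        corrs.append(np.corrcoef(seg_a, seg_b)[0, 1])
    return grid[half:len(grid) - half], np.array(corrs)
```

Sustained positive windowed correlations would suggest the two sites were responding to a shared control, while intervals of weak or negative correlation would be consistent with the two distinct classes of weather systems varying independently.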
This dissertation is one part of this much larger study. It focuses on the sediment records of two of four lakes under investigation at the Las Lagunas study site and has these specific goals:
1. Search for and examine any palynological evidence of maize agriculture in the sediments of two small lakes in the mid-elevations of the Cordillera Central (Laguna Castilla and Laguna de Salvador) to develop a more in-depth understanding of the introduction, distribution, and importance of maize agriculture in Hispaniola (Chapter 2).
2. Assess the potential of using sedimentary stable carbon isotopes as high-resolution indicators of prehistoric maize agriculture intensity and forest disturbance in a small watershed (Laguna Castilla) in the mid-elevations of the Dominican Republic (Chapter 3).
3. Develop a comprehensive late Holocene record of paleoenvironmental change in the mid-elevations of the Cordillera Central, based on multi-proxy analyses of sediments from Laguna Castilla and Laguna de Salvador, that allows assessments of climate change, vegetation change, prehistoric human impacts, disturbance regimes, and the inter-relationships of all of these variables (Chapter 4).
**Dissertation Organization**
This dissertation contains five chapters. Chapter 1 introduces the dissertation, describes the environmental setting, and reviews prior research on regional paleoclimate and archaeology. Chapters 2, 3, and 4 are presented as stand-alone manuscripts. They are slightly modified versions of manuscripts that have been submitted for publication or are in preparation for submission.
Chapter 2 is an analysis of prehistoric maize (*Zea mays* subsp. *mays*) pollen preserved in the sediment records of Laguna Castilla and Laguna de Salvador. I examine the timing of maize pollen deposition in relation to the archaeological record of maize on Hispaniola and in the circum-Caribbean region, and consider the archaeological relevance of my findings. This manuscript has been submitted to the *Caribbean Journal of Science*.
Chapter 3 presents an analysis of the sensitivity of the sedimentary stable carbon isotope record to variations in the abundance of maize being cultivated in the Laguna Castilla watershed. I compare variations in the bulk sedimentary stable carbon isotope record to sedimentary proxies of the intensity and/or spatial extent of maize cultivation and of allochthonous sediment delivery. This manuscript is in preparation for submission to the *Journal of Paleolimnology*.
In Chapter 4, I present a multi-proxy record of paleoenvironmental change from Laguna Castilla and Laguna de Salvador. I investigate evidence of climate change, vegetation change, prehistoric human activity, and shifting disturbance regimes, and analyze the interrelationships between all of these variables. In addition, I discuss the significance of these paleoenvironmental changes in the context of the region as a whole. This manuscript is in preparation for submission to the journal *Quaternary Science Reviews*.
Finally, Chapter 5 is a summary of the major conclusions of this dissertation.
**Environmental Setting**
The Cordillera Central extends from northwestern Haiti to the south-central portions of the Dominican Republic (Figure 1.2). The Cordillera Central is the oldest mountain chain on the island of Hispaniola; uplift occurred during the Plio-Pleistocene (Pubellier et al., 1991). The lithology of the Cordillera Central dates back some 60 million years and includes Cretaceous volcanic, metamorphic, and plutonic rocks (Bolay, 1997). In the province of Azua, which includes the study site, much of the plutonic core of the Cordillera Central is covered with soft marine sediments. These ancient marine sediments have been deeply incised by numerous streams and are highly susceptible to slope failure. The small town of Las Lagunas (18°47′00″ N, 70°53′00″ W; Figure 1.2) is located at the site of some of the most spectacular of these slope failure events. The large slide(s) that formed the area now occupied by the town of Las Lagunas also formed several lake basins (Figure 1.3). Laguna Castilla (18°47′51″ N, 70°52′33″ W, 976 m) and Laguna de Salvador (18°47′45″ N, 70°53′13″ W, 990 m) are the focus of this dissertation.
**Climate**
The discussion that follows is partially based upon interpretations of Caribbean climate dynamics provided by K. Orvis (pers. comm.). The location of Hispaniola along the northern margin of the tropics means it is susceptible to tropical, subtropical, and extratropical climate dynamics. Tropical influences include the trade winds, atmospheric instability and convergence associated with doldrum conditions, and atmospheric disturbance-related influences such as
Figure 1.2. The island of Hispaniola with sites mentioned in text. Relief is based on Shuttle Radar Topography Mission 1 elevation data. Lighter shades represent higher elevations. Map provided by K. Orvis.
Figure 1.3. Map of the Las Lagunas area. The town center is marked by an “X”. Laguna Castilla and Laguna de Salvador are the focus of this dissertation. Map based on the 1:50000 topographic sheet published by the National Geospatial-Intelligence Agency. Lake positions were determined from GPS measurements by K. Orvis.
easterly waves, tropical storms, and other smaller disturbances to the atmospheric system. The northeasterly trade winds are the dominant feature of Hispaniolan climate. The northeasterly trades deliver tropical Atlantic moisture to the eastern shores of the island as well as the windward slopes of the major mountain ranges, but their constancy of direction means that severe leeward rainshadowing occurs. An increase in the intensity of the northeast trades or their moisture content, for example as a result of higher sea surface temperatures in the tropical Atlantic, can yield increased precipitation to those areas of Hispaniola that receive trade wind moisture.
During the boreal summer, when the ITCZ migrates to its northernmost position somewhat south of the island of Hispaniola, air pressures and trade wind intensities decrease regionally. The decreased air pressures promote convective activity over the island, and the decrease in northeasterly trade wind intensity also decreases vertical shear, enhances instability, and promotes deeper atmospheric convection. These ITCZ-proximal doldrum conditions are especially important along the leeward slopes of the southern Cordillera on the island of Hispaniola, including the Las Lagunas area, where such activity dominates local background precipitation. Atmospheric disturbances such as easterly waves and tropical storms also play a significant role in the climate of Hispaniola, particularly in the late boreal summer and fall when tropical Atlantic and Caribbean sea surface temperatures are peaking, but this source varies on several time scales.
Subtropical climate influences on the island of Hispaniola primarily consist of the strength and duration of atmospheric subsidence (high pressure) over the Caribbean region, especially during the boreal winter. Sustained high pressure that extends south and west into the region can significantly decrease trade wind intensity and convective activity, leading to overall drier conditions on the island of Hispaniola.
Extratropical climate influences on the island of Hispaniola are primarily constrained to the northwestern portions of the island. When polar fronts are intense enough to reach Hispaniola, the uplift associated with frontal convergence can yield limited precipitation to the parts of the island exposed to the front or able to enhance it orographically.
Precipitation on the island ranges from as much as 2500 mm annually in the northeastern part of the island, where the trade winds are unobstructed, to as low as 500 mm annually in the rainshadowed northwestern and southwestern portions of the island (Horst, 1992; Bolay, 1997). The majority of the island experiences at least one relatively dry period during the year, with two relatively dry seasons the norm for many localities.
Temperatures on Hispaniola are typical for a tropical island, with average annual sea level temperatures between 26 °C and 29 °C, and daily temperature variation that exceeds the annual variation in monthly mean temperatures (Schubert and Medina, 1982; Orvis et al., 1997). Mean annual temperatures are considerably lower in the high elevations of the Cordillera Central. Orvis et al. (1997) calculated the mid-elevation lapse rate for the island to be around –8.5 °C km\(^{-1}\). Applying this lapse rate upslope and taking into account the effects of the trade wind inversion, it is plausible that the highest slopes of the Cordillera Central have mean annual temperatures at or below 7 °C (Orvis et al., 1997).
Dependable meteorological records are rare in the less populated areas of the Dominican Republic, including the area around Las Lagunas. Limited meteorological data from the nearby town of Padre Las Casas indicate a mean annual temperature of 24 °C (K. Orvis, pers. comm.). The mean annual temperature for Las Lagunas is likely to be about 3.8 °C lower as it is about 450 m higher in elevation than Padre Las Casas, yielding a mean annual temperature for Las Lagunas somewhere around 20 °C.
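For clarity, the arithmetic behind this estimate follows directly from the Orvis et al. (1997) lapse rate applied over the ~450 m elevation difference:

$$\Delta T \approx \Gamma\,\Delta z = \left(-8.5\ {}^{\circ}\mathrm{C\ km^{-1}}\right)\left(0.45\ \mathrm{km}\right) \approx -3.8\ {}^{\circ}\mathrm{C}, \qquad T_{\text{Las Lagunas}} \approx 24\ {}^{\circ}\mathrm{C} - 3.8\ {}^{\circ}\mathrm{C} \approx 20\ {}^{\circ}\mathrm{C}.$$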
Estimates of the mean annual precipitation for the area are more difficult to make, as no precipitation data are available from the Padre Las Casas meteorological station. The nearest available precipitation data are from the more distant city of Azua, which is both lower in elevation and subject to a greater rainshadow effect (Figure 1.2). Because the Las Lagunas area is higher and less strongly rainshadowed than Azua, it should receive more precipitation than the Azua mean of ~700 mm (K. Orvis, pers. comm.); it is therefore reasonable to assume that mean annual precipitation values for the area around Laguna Castilla and Laguna de Salvador are somewhere around 900–1000 mm.
As outlined above, precipitation on the southern slope of the Cordillera Central is primarily the result of convective uplift fed by sea breeze moisture during the boreal summer when the ITCZ is in its northerly position and proximal-doldrum conditions dominate. This results in a rather seasonal precipitation regime, with a distinct dry season in the late winter and early spring (Bolay, 1997). In the Holdridge life zone classification, the Las Lagunas area
falls within the lower montane moist forest zone. The Holdridge system is a climatic classification, but zones are named after expected mature vegetation (Holdridge et al., 1971).
In addition to the multiple climate controls outlined above, the interannual precipitation regime of Hispaniola is also sensitive to several climate oscillations. Two of the most pervasive climate cycles affecting the Caribbean as a whole are the El Niño Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO), both of which significantly affect Caribbean sea surface temperatures (SSTs) and sea level pressures (SLPs; Enfield, 1996; Enfield and Mayer, 1997; Enfield and Alfaro, 1999; Giannini et al., 2000; 2001a; 2001b; Taylor et al., 2002). The interactions between these two climate cycles are complex and coherent patterns are difficult to isolate, especially with the relatively sparse and limited meteorological records of the Caribbean region (Giannini et al., 2000), but some patterns have emerged.
Giannini et al. (2000; 2001a; 2001b) documented and summarized the impacts of ENSO variations on the climate of the tropical Atlantic and Caribbean. In general, data provided by Giannini and collaborators indicate drier than average conditions in the Caribbean region during the boreal summer of an El Niño event, followed by wetter than average conditions during the spring of the following year. The drier than average conditions during the boreal summer of an El Niño year are related to decreased SLPs in the equatorial and tropical Pacific, and higher than average SLPs over the tropical Atlantic (Hastenrath and Heller, 1977; Covey and Hastenrath, 1978; Curtis and Hastenrath, 1995; Poveda and Mesa, 1997), in a pattern that has been labeled a “zonal seesaw” (Giannini et al., 2001a). This pattern leads to convergence along the margins of the eastern Pacific ITCZ and divergence, hence drier conditions, in the Caribbean basin (Giannini et al., 2001a). The decreased zonal pressure gradient between the tropical Atlantic and eastern Pacific also leads to a weakening of the northeast trade winds.
The weakening of this zonal pressure gradient, and consequently of the northeast trade winds, diminishes upwelling in the Caribbean basin, leading to anomalously high SSTs. It is this delayed increase in Caribbean SSTs (Enfield and Mayer, 1997) that leads to wetter than average conditions during the spring of the year following a warm ENSO event. The mechanism is the resulting increase in atmospheric convection and absolute humidity related to increased SSTs (Giannini et al., 2001a).
It is also worth noting that an increase in vertical shear over the tropical Atlantic during El Niño years has been linked to diminished tropical cyclone activity in the tropical Atlantic (Gray et al., 1994). A decrease in tropical cyclone activity can also lead to a decrease in annual precipitation totals for the Dominican Republic.
The North Atlantic Oscillation (NAO) also significantly affects Caribbean climate (Malmgren et al., 1998). In short, an intensified North Atlantic High (NAH), typical of the positive phase of the NAO (van Loon and Rogers, 1978), leads to the intensification of the northeastern trade winds. This intensification of the trade winds leads to enhanced heat loss from the tropical ocean, hence decreased convection and decreased precipitation (Giannini et al., 2001a). Conversely, a weakened NAH, or more negative phase of the NAO, will lead to weakened northeast trade winds and an increase in SSTs and convective activity, yielding wetter conditions in the tropical Atlantic and Caribbean basin.
Although there is ample evidence to suggest that ENSO and the NAO are not related to each other, and the two oscillations are known to operate on different frequencies (Rogers, 1984; Giannini et al., 2001a), these climate oscillations can amplify or mask each other at different times. For example, when ENSO is in a warm phase and the NAO is in a positive phase, the two climate oscillations can combine to create very dry conditions in the tropical Atlantic and the Caribbean. Conversely, if the NAO is in a more negative phase in the year following a warm ENSO event, conditions can become much wetter than average in the tropical Atlantic and the Caribbean due to the anomalously high SSTs in both locations. Finally, if the two oscillations are having opposite impacts on the tropical Atlantic and Caribbean (i.e. cool phase ENSO and positive NAO, or warm phase ENSO and negative NAO) the climatic impacts of both oscillations can be masked or dampened (Giannini et al., 2001a). With this in mind, Giannini et al. (2001a) pointed out that long term variations in the relationship between ENSO and the NAO through time can lead to significant variations in the precipitation regime of the tropical Atlantic and the Caribbean.
Finally, precipitation totals for the Dominican Republic can also be significantly affected by tropical storms and hurricanes. According to Horst (1992), hurricanes can be expected to make landfall on the Dominican Republic once every 3.6 years. Peak hurricane season occurs between August and mid-October, when Atlantic SSTs reach their maximum. Hurricanes can have significant impacts on rainfall totals for the island. For example, in 1998 Hurricane Georges struck the Dominican Republic and, according to satellite estimates, may have dropped ~1000 mm of rain on the country in less than 24 hours (Guiney, 1999).
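The 3.6-year return interval is easier to interpret when converted to an annual rate. The sketch below does so under a Poisson-process assumption, which is mine and not part of Horst (1992); it is offered only as a rough illustration of what the statistic implies over the time spans covered by sediment records.

```python
# Illustrative sketch only: treats hurricane landfalls on the Dominican
# Republic as a Poisson process with the Horst (1992) mean return interval
# of 3.6 years. The Poisson assumption is made here for illustration.
import math

RETURN_INTERVAL = 3.6            # mean years between landfalls (Horst, 1992)
RATE = 1.0 / RETURN_INTERVAL     # expected landfalls per year (~0.28)

def prob_at_least_one(years):
    """Probability of one or more landfalls within a window of `years`."""
    return 1.0 - math.exp(-RATE * years)

def expected_landfalls(years):
    """Expected number of landfalls over a window of `years`."""
    return RATE * years

print(prob_at_least_one(1))      # ~0.24 chance in any single year
print(expected_landfalls(1000))  # ~278 landfalls over a 1000-year record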
**Vegetation**
The island of Hispaniola is home to a wide variety of plant associations and vegetation types, which include mangroves along protected coastlines, evergreen forests in the humid lowlands of the northeast, deserts or dry forests in the rainshadowed portions of the country, montane and submontane rainforests on windward slopes of the cordillera, and pine forests in the highest elevations (Tolentino and Peña, 1998). Much of the natural vegetation of Hispaniola has been heavily affected by human activity, especially in Haiti, and converted into agricultural land. Agriculture, along with tourism and mining, is a key component of the economies of both the Dominican Republic and Haiti (Bolay, 1997). According to Bolay (1997), only 10% of the island’s pre-Columbian total forest area remains intact.
The vegetation currently surrounding the town of Las Lagunas is classified by Tolentino and Peña (1998) as grassland (pasture) and mixed crops and grasslands. Tolentino and Peña classify intact woody vegetation at the same altitude and slope aspect as lower montane moist forest (i.e. the Holdridge life zone designation; Panamerican Union, 1967). Remnant areas of lower montane moist forest include pines (*Pinus occidentalis* Swartz) mixed with evergreen and deciduous broadleaved trees (Liogier, 1981). Naturally occurring broadleaf assemblages likely included species in the genera *Cecropia, Garrya, Ilex, Juglans, Magnolia, Miconia, Mecranium, Meriania, Myrica, Ocotea, Piper, Trema,* and *Weinmannia,* to name a few, as well as a wide variety of genera and species from the Arecaceae, Poaceae, and Rubiaceae families, along with others in the Urticales order not already mentioned above (Bolay, 1997; Kennedy, 2003; Kennedy et al., 2005).
The vegetation currently surrounding Laguna Castilla and Laguna de Salvador has been heavily modified by crop cultivation and the grazing of livestock. Pastures include remnant stands of *Pinus occidentalis,* and numerous species in the Poaceae (grass) and Cyperaceae (sedge) families. Cultivated fields of corn and beans are also prevalent. The shores of both lakes are currently dominated by arboreal taxa including *Syzygium jambos* (L.) Alst. (Myrtaceae), a few palms, and a limited number of *P. occidentalis* trees. Emergent aquatic plants currently found in both lakes include *Typha domingensis* Pers. (Typhaceae), *Eleocharis interstincta* (Vahl) R&S. (Cyperaceae), and a variety of other species in the Cyperaceae and Poaceae families.
**Natural Disturbance**
The most common natural disturbances that affect ecosystems on the island of Hispaniola are fires, tropical storms, and slope failures. The natural and anthropogenic fire regimes of the Dominican Republic have only recently begun to receive attention. The frequency and impacts of recent fires on the vegetation of the Cordillera Central have been analyzed by several researchers (Horn et al., 2001; Kennedy, 2003; Martin and Fahey, 2006). Martin and Fahey (2006) developed fire records in the high elevations of the Cordillera Central using dendrochronological analysis of the endemic pine *Pinus occidentalis*. This species poses challenges for dendroclimatic research (Speer et al., 2004) and Martin and Fahey (2006) were not able to determine exact fire return intervals. They suggested conservative fire return intervals of around 42 years for the pine forests of the Cordillera Central, and speculated that many of the fires may be linked with droughts associated with warm ENSO events in the tropical Pacific.
The prehistoric fire regimes and impacts of fires initiated by prehistoric humans of the Dominican Republic are still largely unknown. Horn et al. (2000) recovered charcoal from soil profiles from the high elevations of the Cordillera Central, indicating the natural occurrence of fire over the last 42,000 years. Again using fossil charcoal, Kennedy et al. (2006) documented the occurrence of natural, and potentially anthropogenic, fires over the last 4000 years in a bog sediment record from the Valle de Bao of the Cordillera Central. A conclusion that has emerged from all of this work is that fires, both natural and anthropogenic, have influenced the highland ecosystems of the Dominican Republic for tens of thousands of years.
Modern and historic/prehistoric fire regimes of mid- and low-elevation ecosystems remain largely unstudied, despite the importance of this information to land management and conservation. Land managers in the Dominican Republic have only recently begun to consider the possible ecological role and importance of fire on the landscape (Myers et al., 2004). Land managers have primarily focused on the prevention of fires and not the potentially beneficial aspects of natural fire regimes. To maximize ecosystem health and recovery, land managers will need more information regarding natural disturbance regimes in the many ecosystems of the island.
Some of the most spectacular and devastating natural disturbances in the Dominican Republic are from landfalling hurricanes. Hurricanes can be expected to make landfall on the Dominican Republic once every 3.6 years (Horst, 1992). Clark et al. (2002) and Kennedy et al. (2006) mentioned extensive wind damage, slope failure, and flooding from Hurricane Georges, which struck the Dominican Republic in 1998. In their analysis of sediments collected from Valle de Bao, Kennedy et al. (2005) interpreted re-deposited charcoal fragments and peaks in the abundance of spores produced by the tree fern *Cyathea arborea* (L.) Sm. as evidence of ecosystem disturbance associated with prehistoric hurricane landfalls.
In many areas of the Dominican Republic with steep topography and high rainfall, including the area around Las Lagunas, slope failures are common. Slope failure events are especially common following extreme precipitation events associated with hurricanes and tropical storms. Some of these slope failure events can be quite large and have significant impacts on ecosystems. No studies have analyzed in detail the short-term or long-term effects of slope failures on ecological communities on the island of Hispaniola. However, research in Puerto Rico indicates significant changes in vegetation as successional species recolonize landslide scars and debris fans (Myster and Fernandez, 1995; Walker et al., 1996; Myster and Walker, 1997).
**Late Holocene Paleoclimates of the Circum-Caribbean**
In the last few years, the number of paleoclimate studies in the circum-Caribbean region has increased significantly with the realization that the tropics play a fundamental role in climate change and are sensitive to global climate variability (e.g. Bigg, 2003). The climate dynamics invoked by researchers to explain circum-Caribbean climate change can be summarized in three general categories, but it is important to note that these categories may not be mutually exclusive.
The first, and most commonly invoked, explanation for circum-Caribbean climate variability is a shift in the mean boreal summer position of the ITCZ. During the boreal summer, the ITCZ currently migrates as far north as the Yucatan Peninsula in the western Caribbean, and to just off the northern coast of South America in the eastern Caribbean. In the western Caribbean this northern migration of the ITCZ is intimately tied to the Central American Monsoon (CAM). The CAM refers to low pressure fields formed by the heating of the Central American landmass along with southern Mexico during the boreal summer. This regional low pressure can draw the primary thread of convergence (the well defined ITCZ) or a secondary convergent thread northward. In the eastern Caribbean the ITCZ remains farther south because of the thermal low that develops over the northern portions of the South American landmass. In either case, the proximity of the ITCZ brings with it convective activity and proximal doldrum (weakened trade winds) conditions that promote increased precipitation throughout the region (as outlined above). If the ITCZ were to remain farther south, especially during the boreal summer, precipitation throughout the circum-Caribbean would decrease significantly.
Multiple mechanisms could hypothetically change the migratory range of the ITCZ, but the characteristics of these processes over long timescales remain poorly understood. On millennial timescales, shifts in interhemispheric temperature gradients in response to Milankovitch orbital forcings could affect (shrink, expand, or shift north or south) the migratory range of the thermal equator, and hence, the ITCZ. On shorter (centennial to decadal) timescales, variations in solar activity (i.e. sunspot cycles) could also impact thermal equator dynamics and ITCZ migration. Alternate explanations could include extra-regional forcings such as a weakening of the CAM in response to North American atmospheric dynamics, which could lead to higher than normal pressures over the western Caribbean and an inability of the CAM to draw the ITCZ or other convergent thread northward.
A second category of commonly invoked explanations for circum-Caribbean climate change relates to climate dynamics in the eastern Pacific Ocean and is very similar to the “zonal seesaw” described above. In short, when eastern Pacific SSTs are cool, convective activity is suppressed in the eastern Pacific and enhanced in the Caribbean and tropical Atlantic. In addition, under these conditions the ITCZ tends to establish farther northward of the eastern Pacific cold tongue, enhancing the CAM and convective activity in the Caribbean. If eastern Pacific SSTs increase, convective activity is enhanced in the eastern Pacific and suppressed in the Caribbean. In this situation vertical shear also increases in the Caribbean and tropical Atlantic, inhibiting deep convection and the formation of tropical storms. We are currently familiar with these atmospheric dynamics as they are common features of the El Niño Southern Oscillation (ENSO). Similar longer-term variations, or variations in the intensity or frequency of ENSO-type events in the Pacific over time, could significantly impact circum-Caribbean climate dynamics.
The third category of commonly invoked explanations for circum-Caribbean climate change relates to variations in Caribbean SSTs. In general, an increase in Caribbean SSTs will lead to increased atmospheric humidity, latent heat, and convection. Some researchers have postulated that Caribbean, or more specifically Gulf of Mexico, SSTs are intimately related to southeast trade wind intensity, with more powerful trade winds “pushing” more warm tropical Atlantic water across the equator and into the region. Under this assumption, a more northerly position of the ITCZ, and hence of the reach of the southeast trade winds, would enhance advection of warm tropical Atlantic waters into the Caribbean and increase Caribbean SSTs. Other researchers have speculated that the intensity of meridional overturning circulation (MOC) in the north Atlantic is a primary driver of past Caribbean SST variation. The strength of the MOC determines how much water is advected northward from the tropical Atlantic, into the Caribbean, and eventually up through the Gulf Stream into the north Atlantic. A weakening of the MOC would hypothetically decrease warm tropical Atlantic water advection into the Caribbean and decrease SSTs.
In this section, I summarize selected paleoclimate studies from the circum-Caribbean region, focusing primarily on high-resolution records spanning the middle to late Holocene in which climate signals are not overpowered by signals of human disturbance, or in which climate and human signals can be confidently distinguished. In some of the studies reviewed here, the researchers invoke one or more of the explanations outlined above. My intention is to provide an overview of climate variability and potential climate forcing mechanisms affecting the circum-Caribbean region during the Holocene (Figure 1.4) as a theoretical framework for interpreting any evidence of climate variation in the Las Lagunas records. I have grouped the studies according to geographic locality.
**Hispaniola**
Paleoenvironmental research has only just begun in most areas of Hispaniola. Lake sediments recovered from lowland Lake Miragoane, located on the southern coast of Haiti, have provided climate and vegetation records extending back some 10,500 years (Hodell et al., 1991; Curtis and Hodell, 1993; Higuera-Gundy et al., 1999). Hodell et al. (1991) and Curtis and Hodell (1993) presented an oxygen isotope record for Lake Miragoane based on analysis of monospecific ostracod (*Candona* sp.) valves extracted from the sediments. This oxygen isotope record provides an evaporation-precipitation (E/P) ratio record for Lake Miragoane that extends throughout the Holocene.
Figure 1.4. Qualitative summary diagram of centennial-scale climate variability in the circum-Caribbean during the Holocene, plotted against approximate age (cal yr B.P.). Elevations of the study sites and references are in parentheses. Dark gray highlights indicate periods of wet climate and light gray highlights indicate periods of dry climate. The terms “wet” and “dry” refer only to relative shifts in climate for each individual study site. The highlighted (black) sections of the Mayewski et al. (2004) record refer to discrete “rapid climate change” events identified by the authors. Records shown, grouped by region: Hispaniola: Lake Miragoane, Haiti (120 m; Hodell et al., 1991; Curtis and Hodell, 1993; Higuera-Gundy et al., 1999) and Valle de Bao, Dominican Republic (1800 m; Kennedy et al., 2006). Tropical Atlantic and Lesser and Greater Antilles: Wallywash Great Pond, Jamaica (7 m; Street-Perrott et al., 1993; Holmes et al., 1995; Holmes, 1998); Church’s Blue Hole, Bahamas (4–14 m; Kjellmark, 1996); and Étang de Grand-Case, Saint Martin (<5 m; Bertran et al., 2004). Florida: Florida Everglades (<5 m; Winkler et al., 2001); Lake Tulane (37 m; Grimm et al., 1993); and Little Salt Spring (65 m; Zarikian et al., 2005). Northern South America and Central America: Lake Valencia, Venezuela (405 m; Curtis et al., 1999; and many others); Cariaco Basin (0 m; Black et al., 2004; Haug et al., 2001; 2003; Peterson and Haug, 2006; and others); Costa Rica, Panama, and Guatemala (summary by Islebe and Hooghiemstra, 1997; see text for details); Lake La Yeguada, Panama (650 m; Bush et al., 1992); Chilibrillo Cave speleothem, Panama (60 m; Lachniet et al., 2004); Mexico (summary by Metcalfe et al., 2000); and Aguada X’caamal, Mexico (~100 m; Hodell et al., 2005b).
Higuera-Gundy et al. (1999) presented a pollen record that provides a complementary vegetation history for the Miragoane region.
The oxygen isotope record for Lake Miragoane indicates an arid early Holocene (~10,000 to 8,000 cal yr B.P.) climate for southwestern Hispaniola. The pollen record also indicates dry conditions, with xeric palms and shrubs dominating. Gradually decreasing oxygen isotope ratios of ostracod valves deposited during the middle Holocene (~8,000 to 3000 cal yr B.P.) suggest moist conditions around Lake Miragoane. The pollen record also indicates moist conditions during this time with increases in mesic arboreal taxa such as taxa in the Moraceae family and the genera *Cecropia* and *Trema*. Following the more mesic middle Holocene, the Lake Miragoane record indicates a general increase in E/P ratios and an increasing dominance of more dry-adapted plant taxa during the late Holocene (3000 cal yr B.P. to present). The most recent sediments recovered from Lake Miragoane contain pollen and sedimentary evidence of human activity in the watershed, including deforestation and maize agriculture (Brenner and Binford, 1988; Higuera-Gundy et al., 1999).
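As background for this and the other isotope records discussed below, oxygen isotope ratios are reported in standard delta notation relative to a reference standard (e.g., VPDB for carbonates):

$$\delta^{18}\mathrm{O} = \left(\frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1\right) \times 1000\ \text{‰}.$$

In an evaporation-sensitive lake basin, evaporation preferentially removes the lighter isotope from the lake water, so higher δ\(^{18}\)O values in biogenic carbonate such as ostracod valves generally indicate higher E/P (drier conditions), and lower values indicate wetter conditions.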
Hodell et al. (1991) and Curtis and Hodell (1993) attributed the variations in effective precipitation recorded in the Lake Miragoane oxygen isotope and pollen records to variations in the intensity of the annual cycle (ITCZ migration) in response to Milankovitch orbital forcing mechanisms, in this case the precession cycle. During periods of increased solar radiation receipt in the Northern Hemisphere, the mean boreal summer position of the ITCZ migrates further northward, enhancing precipitation in southern Hispaniola. During periods of decreased solar radiation receipt in the Northern Hemisphere, the mean boreal summer position of the ITCZ does not penetrate as far northward and more arid conditions dominate in southern Hispaniola.
Hodell et al. (1991) and Curtis and Hodell (1993) pointed out that variations in orbital geometry cannot explain all of the variations in E/P ratios and vegetation change observed in the Miragoane record. Numerous sub-millennial scale variations in E/P ratios are present in the Lake Miragoane oxygen isotope record. The mechanisms responsible for these rapid, centennial-scale variations in climate are not discussed by Hodell et al. (1991) or Curtis and Hodell (1993) in any detail.
The only other continuous record of middle to late Holocene paleoenvironmental change published for the island of Hispaniola is a ~4000 cal yr B.P. sediment, charcoal, and fossil pollen record from a bog in the Valle de Bao located in the high elevations (~1800 m) of the Cordillera Central in the Dominican Republic (Kennedy et al., 2006). The initial formation of the Valle de Bao bog around 4500 cal yr B.P. seems to coincide with high-elevation lake and bog formation in the Dominican Republic and may be indicative of a larger scale shift in atmospheric dynamics (Orvis et al., 2005). Poor pollen preservation between ~3700 and ~1200 cal yr B.P. suggests variable water table depth during this period, perhaps in response to highly variable precipitation totals for the region. After ~1200 cal yr B.P., pollen preservation improves markedly, suggesting more mesic conditions.
**Other Records from the Tropical North Atlantic and Lesser and Greater Antilles**
Several middle to late Holocene paleoenvironmental records have been reconstructed using sediment records recovered from sites in the tropical north Atlantic and the Lesser and Greater Antilles outside of Hispaniola. Sediments collected from Wallywash Great Pond, Jamaica have been studied extensively to reconstruct the limnological and climate history of the lake (Holmes, 1998; Street-Perrott et al., 1993; Holmes et al., 1995). The Wallywash Great Pond sediment record is old, stretching back some 125,000 years, but is plagued by radiocarbon dating problems and relatively slow sedimentation rates, which hamper high-resolution paleoenvironmental reconstructions.
Based on analyses of geochemistry and ostracod assemblages, Street-Perrott et al. (1993) and Holmes (1998) suggested that Wallywash Great Pond dried out completely during the late Pleistocene. During the early Holocene (~10,000 to 8300 cal yr B.P.) the sediment and ostracod data indicate a significant increase in precipitation and subsequent increase in lake depth. Following this mesic period in the lake’s history, paleolimnological data indicate that water depths began to fall, and remained low, from ~8300 to 3500 cal yr B.P. Between ~3500 and 1000 cal yr B.P. the oxygen isotope values of inorganic marls decrease and shifting ostracod assemblages indicate an increase in lake levels. Paleolimnological evidence in the form of decreased stable carbon isotope ratios in the marl and shifts in fossil ostracod and mollusk assemblages suggests a rather large flood event may have occurred around 1200 cal yr B.P. and drastically affected the geochemistry of the lake water. Over the last ~1000 years, the Wallywash Great Pond sediment record indicates increasing aridity for the region.
The pollen, charcoal, and sediment records of Church’s Blue Hole on Andros Island in the Bahamas extend to ~3000 cal yr B.P. and indicate significant climatic variability over this period. Kjellmark (1996) suggested that water levels in Church’s Blue Hole were much lower between ~3000 and 1500 cal yr B.P., but based much of this interpretation on evidence from other circum-Caribbean records. Following this period of apparent aridity, the Church’s Blue Hole record indicates increasing precipitation and water levels. However, more detailed interpretations of the timing and intensity of this late Holocene climate change are hindered by poor chronological control and anthropogenic impacts in the watershed.
Paleoenvironmental records from sediment profiles are rare for the Lesser Antilles, but Beets et al. (2006) analyzed the oxygen and carbon isotope composition of land snail (*Bulimulus guadaloupensis*) shells recovered from the archaeological site Anse à la Gourde on the island of Guadeloupe to reconstruct precipitation variability and vegetation change over the last ~2000 years. Their oxygen isotope record indicates periods of severe drought centering on ~350, 900, and 1950 cal yr B.P. Thick sand lenses are associated with these periods, indicating an increase in the aeolian transport of sand to the site as a result of increased aridity and increased trade wind strength. One of the most compelling aspects of the Anse à la Gourde record is that the archaeology of the site indicates human abandonment of the area coincident with the drought interval centered around ~900 cal yr B.P. Beets et al. pointed out that this drought is also roughly coincident with the collapse of Mayan society, which has also been attributed to drought conditions on the Yucatan Peninsula. Following this period of drought, the isotopic record indicates an increase in precipitation from ~850 to 650 cal yr B.P. and the archaeological record indicates re-occupation of the site by a new group of inhabitants. The Anse à la Gourde record is valuable not only because it is one of the few existing paleoprecipitation records in the eastern Caribbean, but also because it provides some of the first evidence of the impacts of climate change on prehistoric occupants of Caribbean islands.
The only other detailed paleoenvironmental record in the eastern Caribbean comes from St. Martin. Bertran et al. (2004) recovered a sediment core from Étang de Grand-Case that extends back ca. 4000 cal yr B.P. Bertran et al. reconstructed the climate history of Étang de Grand-Case using a wide variety of sedimentary analyses. Their results suggest relatively low lake levels from ~4200 to 2300 cal yr B.P. and a high occurrence of hurricane landfalls as indicated by numerous sand lenses. An increase in organic mud deposition at the expense of carbonate precipitation between ~2300 and 1150 cal yr B.P. indicates an increase in lake level. The most recent ~1150 years of the Étang de Grand-Case sediment record are confounded by anthropogenic activity in the watershed and by a more complex sediment stratigraphy, preventing high-resolution paleoclimate reconstructions.
**Florida**
The close proximity and similar climate dynamics of the Florida Peninsula and the Greater Antilles make the paleoenvironmental history of Florida pertinent to this study. Several Floridian sediment cores have been recovered and analyzed to reconstruct the paleoenvironmental history of the region (Watts, 1969; 1971; 1975; 1980; Watts and Stuiver, 1980; Watts et al., 1992; Grimm et al., 1993; Watts and Hansen, 1994; Zarikian et al., 2005; Huang et al., 2006). Analyses have primarily focused on the vegetation history of the region during the Pleistocene-Holocene transition. Very few of these records have the temporal resolution necessary to make detailed analyses of late Holocene climatic variability.
Watts (1980) summarized many of the coarse-resolution records of Holocene climatic change for the southeastern U.S. He described a Holocene precipitation record for the southeastern U.S. that is, at a temporally broad scale, very similar to that described by Hodell et al. (1991) for Lake Miragoane, Haiti. Pollen records from Florida suggest that the early Holocene vegetation was predominantly dry oak forest, indicating warmer and more arid conditions than exist there today. This arid period was later identified at Camel Lake located on the Florida panhandle (Watts et al., 1992). Sometime around ~7,000 cal yr B.P., oaks declined in importance and gave way to more mesic tree taxa. These mesic tree taxa dominate Floridian pollen records for the duration of the middle Holocene, suggesting an increase in precipitation for the region as a whole. Unlike the Lake Miragoane record, the sediment records from Florida show no indication of increasing aridity in response to decreasing Northern Hemisphere radiation receipt during the latest portions of the Holocene.
In a higher resolution study, Winkler et al. (2001) presented an in-depth analysis of Holocene sediments collected from locations throughout the Florida Everglades. Winkler et al. presented fossil pollen, charcoal, diatom, sclereid, sponge spicule, sediment chemistry, and stable carbon isotope data from a total of 18 sediment cores that they analyzed to reconstruct the late Holocene paleoenvironmental history of the Everglades. The various proxies suggest a relatively moist middle Holocene, as indicated in a variety of other circum-Caribbean climate reconstructions, followed by an arid period between ~3000 and 2200 cal yr B.P. Mesic conditions returned between ~2200 and 1600 cal yr B.P., but the ~1600–1100 cal yr B.P. period was apparently again arid. This period of aridity coincides with drought intervals inferred from sediments of the Cariaco Basin, Yucatan Peninsula, and other study sites in the circum-Caribbean. Following this arid period, climate conditions in the Everglades appear to become more variable up until the present. Winkler et al. noted that drastic changes in Everglades hydrology due to anthropogenic activities are likely masking many of the climate signals in the most recent portions of the sediment records.
Winkler et al. (2001) suggested that the increased precipitation totals for the Everglades region during the middle Holocene were most likely the result of increased SSTs in the Caribbean as a result of increased solar radiation in the Northern Hemisphere during this time period, as discussed above. Ruddiman and Mix (1993) presented marine oxygen isotope data from sediments collected off the southern coast of Florida that do suggest increased SSTs during the middle Holocene. Winkler et al. attributed the moist middle to late Holocene conditions to increasing El Niño activity and tropical cyclone activity in the area as a result of increased SSTs. Winkler et al. provided no hypotheses regarding the forcing mechanisms responsible for the sub-millennial variations in precipitation apparent in their sediment records.
Zarikian et al. (2005) presented a high-resolution paleoenvironmental record from Little Salt Spring, located in western Florida. The researchers analyzed the sediments, ostracod assemblages, and isotopic composition of monospecific ostracod valves preserved in the sediments of Little Salt Spring. The hydrology of the site is rather complex and intimately tied to sea level variations, making dependable interpretations of climate variability based on variations in water chemistry and isotope composition difficult. Zarikian et al. recognized these complexities and used them to constrain a heuristic model of the site’s hydrology. Zarikian et al. primarily interpreted their oxygen isotope record as a record of water table depth and salt water intrusion.
Using this interpretation, Zarikian et al. suggested that conditions around Little Salt Spring were drier during the early Holocene (~12,000 to ~10,000 cal yr B.P.), which is in agreement with other paleoenvironmental records from Florida and the circum-Caribbean region. Zarikian et al. characterized the subsequent middle Holocene as relatively moist and stable in terms of hydrology (~10,000 to ~5900 cal yr B.P.). From ~5900 cal yr B.P. up until the present, the Little Salt Spring oxygen isotope record indicates considerable hydrologic variability.
Zarikian et al. interpreted the oxygen isotope record to indicate relatively mesic conditions from ~5900 to ~2800, and from ~1200 to ~700 cal yr B.P., and extremely arid conditions from ~2600 to ~1900 cal yr B.P. This is in contrast to many other circum-Caribbean records of late Holocene precipitation, especially those from the Cariaco Basin and the Yucatan Peninsula (see discussion below), which indicate that some of the most arid conditions in those regions were occurring simultaneously around 1200–1100 cal yr B.P. and again well after 700 cal yr B.P.
**Venezuela and the Cariaco Basin**
Very few paleoenvironmental records are available from sites along the northern coast of South America. However, since the early 1980s, several researchers have analyzed sediments recovered from Lake Valencia, which is located near the northern coast of Venezuela. Studies of the Lake Valencia sediments have included pollen (Salgado-Labouriau, 1980; 1987; Leyden, 1985), diatoms (Bradbury et al., 1981), sediment chemistry (Lewis and Weibezahn, 1981; Binford, 1982; Curtis et al., 1999), animal remains (Binford, 1982), and the isotope geochemistry of biogenic carbonates (Curtis et al., 1999).
Curtis et al. (1999) summarized the climate history of Lake Valencia using the multiple proxy records listed above as well as their own sediment and isotope geochemistry records. In short, Lake Valencia was very shallow during the late Pleistocene and early Holocene. From ~10,000 cal yr B.P. to ~8500 cal yr B.P., the climate became more mesic and the lake deepened. Between ~8500 and 3500 cal yr B.P., Lake Valencia became a permanent lake and deepened enough to become an open basin. This shift in lake depth and hydrology indicates a significant increase in precipitation delivery to the region. Pollen deposited during this period also indicates relatively moist conditions with significant increases in tree taxa at the expense of more drought tolerant herbaceous taxa (Leyden, 1985). From ~3500 cal yr B.P. up until the present, the sediment records of Lake Valencia indicate decreasing water levels as a result of decreasing precipitation delivery to the region.
Another site that has been studied extensively in this area is the Cariaco Basin ($10^\circ$N, $65^\circ$W), which is an anoxic ocean basin located off the northern coast of Venezuela (Haug et al., 2001; 2003; Peterson et al., 1991; 2000; Hughen et al., 1996; 2000; 2004a; 2004b; Tedesco and Thunell, 2003a; 2003b). The combination of anoxic conditions and large annual fluctuations in biogenic carbonate sedimentation, due to annual variations in upwelling intensity with the migration of the ITCZ, has produced annual varves that allow highly detailed paleoclimate reconstructions. In addition to sediments of marine origin, the Cariaco Basin also accumulates terrigenous sediments delivered by the Orinoco, Tuy, Unare, Neveri, and Manzanares rivers (Milliman and Syvitski, 1992; Peterson and Haug, 2006). Haug et al. (2001; 2003) conducted high resolution (~4–5 yr) analyses of iron and titanium concentrations in the Cariaco Basin sediments under the assumption that these minerals originate from terrestrial sources within these different watersheds. The concentrations of titanium and iron are thought to indicate the level of sediment transport and erosion in these various watersheds, and therefore should provide a record of precipitation for this portion of northern South America.
Both the Ti and Fe records indicate a wet early to middle Holocene (~10,500 to 5,400 cal yr B.P.) and more variable precipitation during the middle to late Holocene, with a long-term average decline in precipitation totals over the entirety of this time span for northern South America. More specifically, the Cariaco Ti and Fe records indicate very large fluctuations in precipitation delivery to the region between 3,800 and 2,800 cal yr B.P. (Haug et al., 2001). Following this period of seemingly large fluctuations in terrigenous element delivery is a period of relative stability in the record (~2,800 to 600 cal yr B.P.). Despite the apparent relative climate stability around this time, there is evidence for three multi-year droughts in the Ti and Fe record during this period (~A.D. 810, 860, and 910) that have been implicated in the fall of the Mayan civilization on the Yucatan Peninsula (Haug et al., 2003). Finally, the Cariaco Basin record indicates some of the largest decreases in precipitation in the 14,000 cal yr B.P. sediment record during the Little Ice Age (~600 to 100 cal yr B.P.; Haug et al., 2001).
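Because the Cariaco droughts are reported in calendar years A.D. while most records discussed here use calibrated ages, it helps to recall that the B.P. datum is A.D. 1950:

$$\text{age (cal yr B.P.)} = 1950 - \text{year (A.D.)},$$

so the droughts at ~A.D. 810, 860, and 910 correspond to ~1140, 1090, and 1040 cal yr B.P., within the ~1400 to 900 cal yr B.P. drought interval discussed below for the Yucatan Peninsula.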
Using data from the Cariaco Basin sediment record, Peterson and Haug (2006) argued that the variations in precipitation for northern South America have resulted from shifts in the mean boreal summer position of the ITCZ during the Holocene. The wet season in northern South America occurs when the ITCZ is located towards the northern end of its range (boreal summer and fall). During the boreal winter and early spring, when the ITCZ reaches its most southerly position, the northern coast of South America experiences a dry season. Peterson and Haug attributed the long-term precipitation pattern of increasingly wet conditions from the early to middle Holocene, and subsequent increasingly arid conditions from the middle to late Holocene, to long-term variations in the mean latitudinal position of the ITCZ. Peterson and Haug further suggested that these millennial scale variations in the mean position of the ITCZ are linked to the 21,000 year Milankovitch precession cycle and are also possibly due to variable El Niño frequency and strength throughout the Holocene. Clement et al. (2000) suggested that El Niño frequency and strength may have been lower during the early to middle Holocene as compared to the late Holocene and provided evidence that these variations in El Niño activity may also be linked to Milankovitch forcing mechanisms.
In short, a shift in the 21,000 year Milankovitch precession cycle leads to decreased seasonality in the Northern Hemisphere as compared to the Southern Hemisphere. This decrease in warm season solar energy receipt leads to an inability of the Northern Hemisphere to “pull” the ITCZ into the Northern Hemisphere and a more southerly mean boreal summer position for the ITCZ (Berger and Loutre, 1991). Increased El Niño frequency and strength could also result in a more southerly mean annual position of the ITCZ because El Niño events lead to an increase in southern Pacific SSTs, which can then lead to decreased sea level pressures. Conversely, decreased El Niño frequency and strength could lead to a more northerly mean annual position of the ITCZ (Fedorov and Philander, 2000).
Black et al. (2004) analyzed the oxygen isotope composition of foraminiferal tests from Cariaco Basin sediments to reconstruct variations in the hydrographic conditions of the basin. They proposed that the oxygen isotope compositions of the foraminiferal tests are sensitive to variations in sea surface temperature and/or salinities in the Cariaco Basin. Their 2,000 cal yr B.P. record indicates a consistent long-term increase in oxygen isotope values over the entirety of the late Holocene. Black et al. primarily interpreted this increase in oxygen isotope values as an indication of decreasing SSTs as a result of increased upwelling or an increase in salinity.
Both a decrease in Cariaco Basin sea surface temperatures and an increase in salinity would be expected with a more southerly mean boreal summer position of the ITCZ. When the ITCZ is located in a more southerly position, the increased trade winds over the Cariaco Basin (Black et al., 1999) promote more upwelling of cold subsurface waters and thereby decrease SSTs (Haug et al., 2001). In addition, a more southerly mean boreal summer position of the ITCZ would result in increased evaporation and decreased freshwater delivery to the Cariaco Basin, thereby increasing regional ocean salinities.
While the millennial scale migration of the ITCZ to a more southerly mean boreal summer position appears to primarily be the result of earth-sun geometry and possible variations in the El Niño Southern Oscillation, some of the shorter-scale variability in the Cariaco Basin sediment records has been attributed to sunspot cycles (Black et al., 2004). Using cross-spectral analyses of the oxygen isotope data from the Cariaco sediment record, Black et al. (2004) reported cyclicities in the record of 158, 24, 10.9, and 8.2 years, which are within the bandwidth estimates of the 121, 22, and 11 year sunspot cycles. Although the exact mechanism that could be responsible for such a relationship between sunspots and tropical climate remains elusive, several other researchers working in tropical locales have reported similar cyclicities in their datasets (Peterson et al., 1991; Linsley et al., 1994; Black et al., 1999; deMenocal et al., 2000; Hodell et al., 2001).
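For readers unfamiliar with how such periodicities are extracted, the sketch below shows a generic single-record screening approach using a Lomb-Scargle periodogram, which handles the uneven time spacing typical of sediment series. It illustrates the general technique only and is not a reproduction of the cross-spectral methods of Black et al. (2004); the function and parameter names are placeholders.

```python
# Illustrative sketch only: screens an unevenly spaced proxy time series for
# dominant periodicities with a Lomb-Scargle periodogram. This is a generic
# approach, not the cross-spectral analysis used by Black et al. (2004).
import numpy as np
from scipy.signal import lombscargle

def candidate_periods(ages, values, min_period=5.0, max_period=300.0, n=1000):
    """Return trial periods (years) and Lomb-Scargle power for a record with
    sample ages in years; the series is mean-centered before analysis."""
    centered = values - values.mean()
    periods = np.linspace(min_period, max_period, n)
    angular_freqs = 2.0 * np.pi / periods   # lombscargle expects rad per unit time
    power = lombscargle(ages, centered, angular_freqs)
    return periods, power

# Peaks near ~11 or ~22 years in the returned power spectrum would be
# consistent with sunspot-band cyclicity; significance testing (e.g., against
# red-noise surrogates) would still be required before interpretation.
```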
**The Yucatan Peninsula**
One of the most intensively studied areas in the circum-Caribbean region in terms of Holocene climate change is the Yucatan Peninsula (Leyden et al., 1994; 1998; Hodell et al., 1995; 2001; 2005a; 2005b; Curtis et al., 1996; 1998; Islebe et al., 1996a; Whitmore et al., 1996; Rosenmeier et al., 2002; Hillesheim et al., 2005; Wahl et al., 2006). Much of the interest in past climate derives from the proposed connection between rapid late Holocene climate change in the region and the collapse of the once dominant Mayan civilization (Hodell et al., 1995; Gill, 2000; Haug et al., 2003).
One well-studied site on the Yucatan Peninsula is Lake Chichancanab, Mexico. The Lake Chichancanab sediment record extends back some ~9500 cal yr B.P. (Hodell et al., 1995), but the majority of analyses of Chichancanab sediment profiles have focused on the last ~3000 cal yr B.P. (Hodell et al., 2001; 2005a). The absence of lacustrine microfossils (ostracods and gastropods), along with other sedimentary and isotopic data from Lake Chichancanab, indicates dry conditions in the Yucatan region during the early Holocene (~9500 to 7500 cal yr B.P.).
Like most other climate records around the circum-Caribbean region, the Lake Chichancanab record indicates, on average, a peak in effective precipitation from ~7500 to 3500 cal yr B.P. Following this moist period in the Lake Chichancanab sediment record, isotopic and sediment geochemistry data suggest greater precipitation variability over the last ~3500 cal yr B.P. with relatively arid periods between ~3200 to 3000, ~2200 to 2000, and ~1400 to 900 cal yr B.P. The most recent drought interval (~1400 to 900 cal yr B.P.) has been associated with the collapse of the Mayan civilization.
The drought interval associated with the collapse of the Mayan civilization has also been detected in lake sediments recovered from Lake Punta Laguna, Mexico (Curtis et al., 1996). Sedimentary and isotopic data from Lake Punta Laguna generally indicate moist conditions from ~3500 to 2000 cal yr B.P., followed by relatively arid conditions between ~2000 and 950 cal yr B.P. The oxygen isotope composition of ostracods preserved in the Lake Punta Laguna sediments indicates a rapid shift from relatively arid to moist conditions around the lake at ~950 cal yr B.P. that then persisted for ~200 years. Between ~750 and 450 cal yr B.P. the oxygen isotope record of Lake Punta Laguna indicates a return to relatively arid conditions with significant precipitation variability. From 450 cal yr B.P. to the present, the Lake Punta Laguna sediment record indicates a relative increase in effective moisture with the exception of two short-lived arid periods around 350 and 100 cal yr B.P.
The exact mechanisms responsible for these sub-millennial variations in precipitation delivery to the Yucatan Peninsula are still poorly understood, but
Hodell et al. (2001) proposed that much of this variability may be the result of variations in solar activity with a periodicity of ~208 years. This sub-millennial scale variation is superimposed upon the millennial scale patterns of decreasing precipitation totals in the northern tropics hypothetically linked to Milankovitch forcings (Hodell et al., 1991; Haug et al., 2001).
To further understanding of climate change on the Yucatan Peninsula and the interconnections with global climate events, Hodell et al. (2005b) analyzed a very high-resolution sediment record from Aguada X’caamal, Mexico, using a variety of techniques. Hodell et al. focused on a short period of time (~A.D. 1400 to 1500) encompassed by the Little Ice Age (LIA) and the associated climate changes that occurred in the region during this period. They reported significant decreases in precipitation and a subsequent decrease in Aguada X’caamal lake levels during the LIA. Hodell et al. attributed this decrease in effective precipitation for the region to decreased Caribbean SSTs that have also been reported at this time in several other studies (Winter et al., 2000; Nyberg et al., 2001; 2002; Watanabe et al., 2001). Hodell et al. proposed that decreased SSTs in the Caribbean may have decreased evaporation, and hence precipitation over the Yucatan Peninsula. In addition, Hodell et al. proposed the possibility that decreased precipitation totals over the Yucatan Peninsula were the result of large-scale change in oceanic and atmospheric fields during the LIA that may have led to a more southerly mean boreal summer position of the ITCZ. This shift in the mean latitudinal position of the ITCZ would decrease precipitation totals for northern tropical locales, such as the Yucatan Peninsula.
While a large number of Holocene paleoenvironmental records for Central America and Mexico exist, most of these records are strongly dominated by signals of prehistoric human activity and lack the proxies necessary to isolate climate change from those human impacts (Horn, in press). Islebe and Hooghiemstra (1997) combined pollen data from a bog and soil profile in Costa Rica with other regional records of climate change to produce a synoptic picture of Holocene climate change for Central America. In their review, Islebe and Hooghiemstra suggested a moist early Holocene based on pollen assemblages preserved in lake and bog sediments in Costa Rica (Horn, 1993; Islebe et al., 1996b). Paleolimnological evidence from Panama indicated decreased precipitation totals from ~7000 to 5000 cal yr B.P. (Piperno et al., 1991).
However, decreased macroscopic charcoal influx into Lago de las Morrenas 1, located in the highlands of Costa Rica, at this time may indicate wetter conditions (League and Horn, 2000). Central American paleoclimate records since ~5000 cal yr B.P. become increasingly difficult to interpret due to prehistoric human impacts, but several researchers have suggested drought intervals that possibly occurred throughout Central America around 2500 cal yr B.P. and between 1300 and 1100 cal yr B.P. (e.g. Horn and Sanford, 1992; Horn, 1993; Hodell et al., 1995; Islebe and Hooghiemstra, 1997; Haberyan and Horn, 1999; League and Horn, 2000; Anchukaitis and Horn, 2005).
One of the few continuous records of climate variability in Central America was assembled by Bush et al. (1992), who conducted multi-proxy
analyses of a ~14,000 cal yr B.P. sediment record from Lake La Yeguada, Panama. Paleoshorelines and exposed lake deposits suggest that Lake La Yeguada was much deeper during the early Holocene than it is today. Conditions around La Yeguada remained moist until ~8000 cal yr B.P., after which time phytolith assemblages and geomorphic evidence suggest significantly decreased precipitation until ~6000 cal yr B.P., when diatom assemblages and geomorphic evidence indicate an increased lake level. The increased lake level persisted until ~2000 cal yr B.P., when paleolimnological evidence indicates decreasing lake level. However, the most recent 4000 cal yr of the sediment record indicates prehistoric human impacts in the Lake La Yeguada watershed that may be masking signals of climate change for the region.
Lachniet et al. (2004) developed an oxygen isotope record from a speleothem collected from Chilibrillo Cave, Panama, that spans much of the late Holocene. Results indicate relatively moist conditions from ~2150 to 1400 cal yr B.P., followed by a severe drop in precipitation around 1300 cal yr B.P. that Lachniet et al. associated with the Maya Hiatus, a period of decreased monument construction, abandonment in some areas, and social unrest in the Mayan lowlands. From this point on, the Chilibrillo Cave speleothem record indicates a general decrease in precipitation with discrete, severe periods of drought around 1150, 950, 800, 700, and 600 cal yr B.P. Lachniet et al. attributed this precipitation variability to variations in the strength of the “Central American Monsoon” (CAM), which is sensitive to ENSO oscillations. In short, warm ENSO events are associated with a decrease in the intensity of the CAM and a
decrease in precipitation for the region. Based on spectral analysis of the oxygen isotope data, Lachniet et al. concluded that ENSO variability, and not solar output variability, is the primary driver of precipitation variability for southern Central America.
The paleoenvironmental history of Mexico has received considerable attention from researchers. In an in-depth review, Metcalfe et al. (2000) summarized over 30 Mexican paleoclimate records. These records were developed using a variety of materials (lake sediments and packrat middens) and proxies (pollen, diatoms, other microfossils, sediment chemistry, isotopic signatures) from locations throughout the country including the Mexican portion of the Yucatan Peninsula (discussed above). These records indicate that Mexico was much more arid than today during the late Pleistocene and early Holocene (until ~9000 cal yr B.P.). Most records suggest an increase in effective precipitation between ~9000 and 6000 cal yr B.P., followed by a period of variable precipitation from ~6000–5000 cal yr B.P. On average, conditions appear to have become more arid throughout Mexico between ~5000 and 3500 cal yr B.P. Over the last ~3000 cal yr B.P., climate conditions become increasingly variable with several notable periods of drought, including one of the most arid periods in the Holocene around 1000 cal yr B.P. Metcalfe et al. focused their discussion of climate forcing mechanisms on the Pleistocene to Holocene transition and did not engage in any in-depth discussion of climate forcing mechanisms that might be responsible for the variations in Holocene precipitation patterns for Mexico. They did suggest that the position and intensity of the
Bermuda High, which is intimately related to ITCZ position, may play a significant role in Holocene climate variability for Mexico.
**Gulf of Mexico and Caribbean Marine Sediment Records**
Several sediment records have been collected from the Gulf of Mexico and the Caribbean Sea over the last few decades, but many of these studies have focused on the Pleistocene-Holocene transition, with very little attention devoted to Holocene climate variability (e.g. Ericson and Wollin, 1968; Malmgren and Kennett, 1976; Schmidt et al., 2004). Poore et al. (2003) reanalyzed sediments collected from the Gulf of Mexico in 1968 in an effort to reconstruct Holocene variations in Gulf of Mexico ocean currents. They interpreted foraminiferal assemblages and the oxygen isotope composition of foraminiferal tests as indicators of Loop Current intensity during the Holocene. When the mean latitudinal position of the ITCZ is displaced northward, southeasterly surface winds across the Caribbean Sea and Gulf of Mexico intensify. This intensification of southeasterly winds can increase ocean current strength through the Yucatan Strait, causing the Loop Current to penetrate further northward into the Gulf of Mexico. Hence, more intense incursions of the Loop Current into the Gulf of Mexico, as indicated by fossil foraminifera assemblages and the isotopic compositions of foraminifera tests, are interpreted by Poore et al. as indications of a more northerly mean boreal summer position of the ITCZ. Poore et al. suggested that a more northerly mean boreal summer position of the ITCZ and related intensification of the Loop Current would lead to increased precipitation totals for much of the Caribbean region. An increase in Loop Current intensity
has also been associated with increased SSTs in the Gulf of Mexico and enhanced evaporation, which might also lead to increased precipitation totals for North America and the Caribbean (Brown et al., 1999; Poore et al., 2003).
Based on the foraminiferal data from the Gulf of Mexico, Poore et al. (2003) suggested a more intense Loop Current (and an inferred increase in regional precipitation) during the middle Holocene as compared to today. Analyzing the foraminiferal abundance data at shorter time intervals, Poore et al. (2003) reported cycles in the data with periods of ~500, 300, and 200 years. Poore et al. noted the similarity of these cycle periods to those of solar output, as reconstructed using records of $^{14}$C (Stuiver and Braziunas, 1989; Stuiver et al., 1991; Stuiver et al., 1998) and $^{10}$Be production (Finkel and Nishiizumi, 1997), and cycles reported by Hodell et al. (2001) for oxygen isotope records from the Yucatan Peninsula.
In a more detailed analysis, Poore et al. (2004) conducted a similar study using sediments from the Pigmy Basin in the Gulf of Mexico. Using the abundance of the foraminifer *Globigerinoides sacculifer* as a proxy of Loop Current intrusion into the gulf, Poore et al. attempted to reconstruct the position of the ITCZ over the last 5000 cal yr B.P. According to Poore et al., the mean boreal summer position of the ITCZ was located further northward between ~5000 and ~2900 cal yr B.P. than at any other time in the last 5000 cal yr B.P. Over the last 2900 cal yr B.P., incursions of the Loop Current into the Gulf of Mexico have, on average, decreased, except for brief increases in Loop Current incursions around ~1900, 1400, and 1200 cal yr B.P. Concentrations of *G. sacculifer* reach
minimum values around ~2800, 2600, 2300, 2100, 1000, 900, 600, and 400 cal yr B.P., a pattern interpreted to indicate a more southerly mean summer position of the ITCZ during these times. Again, Poore et al. associated this variability in the position of the ITCZ with variations in solar output as outlined above.
High-resolution, late Holocene analyses of Caribbean Sea surface temperature variability are few in number, but Nyberg et al. (2001; 2002) analyzed sediments collected from the Caribbean Sea, just south of Puerto Rico, and presented sedimentary evidence for variations in Caribbean sea surface temperatures and sea surface salinities (SSSs), as well as upwelling intensity, spanning the last 2000 cal yr B.P. Nyberg et al. invoked numerous feedback mechanisms and teleconnections, including ENSO variability and thermohaline circulation intensity, to explain their Caribbean SST and SSS records. In general, Nyberg et al. associated increased SSSs and decreased SSTs with decreases in precipitation for the Caribbean region, brought about by a more southerly mean boreal summer position of the ITCZ.
Nyberg et al. (2001; 2002) focused the majority of their discussion on two climate events during the late Holocene. The first event, which occurred between ~1250 and 1000 cal yr B.P., is typified by increased SSTs and decreased SSSs, which is the opposite of the expected pattern. Nyberg et al. attributed this unexpected relationship to very strong ENSO warm events, which have been detected at this time in a variety of other climate records (Quinn, 1992; Ely et al., 1993). The second climate event emphasized by Nyberg et al. (2001; 2002) occurred between ~400 and 550 cal yr B.P., concurrent with the LIA. Nyberg et al. suggested a
cooling of Caribbean SSTs of ~2 °C and increased SSSs, which they associated with increased penetration of troughs of cold air from the north, or possibly increased upwelling associated with stronger trade winds. In either case, it seems that the ITCZ was displaced southward during this time, which would decrease precipitation for the region. In a spectral analysis of their entire dataset, Nyberg et al. (2001) suggested a close link between variations in Caribbean climate and solar output variability. However, they emphasized the importance of internal mechanisms in amplifying the impacts of variations in solar output, which were relatively weak and could not be the sole mechanism responsible for global climate variability.
**Correlations with Late Holocene Global Climate Changes**
To fully understand the origin of climatic change in the circum-Caribbean region, records must be analyzed in a broader context. In their comprehensive review of Holocene climate change, Mayewski et al. (2004) identified what they call periods of rapid climatic change (RCC) globally. Using more than 50 high-resolution records of climate change from locations around the world, Mayewski et al. identified 9000–8000, 6000–5000, 4200–3800, 3500–2500, 1200–1000, and 600–150 cal yr B.P. as periods of RCC.
In general, Mayewski et al. characterized these periods of RCC as intervals of decreased temperatures in polar regions and increased aridity in tropical regions as a result of reorganizations in atmospheric circulation. However, the RCC period between ~600–150 cal yr B.P. may have been an instance of decreased polar temperatures coinciding with increases in tropical
moisture availability. The periods of RCC identified by Mayewski et al. seem to generally correlate with periods of climatic change in the circum-Caribbean region. This is especially true for the periods between 1200–1000 and 600–150 cal yr B.P. However, unlike many of the tropical records summarized by Mayewski et al., the circum-Caribbean records summarized here seem to indicate increased aridity during the 600–150 cal yr B.P. RCC event. The RCC period proposed by Mayewski et al. between ~3500 and 2500 cal yr B.P. does seem to express itself in paleoclimate records from the Yucatan Peninsula, Lake Miragoane, the Cariaco Basin, Lake Valencia, the Gulf of Mexico, and select study sites in Central America, but is not apparent in most other records from the circum-Caribbean. Mayewski et al. attributed most of these Holocene RCCs to variations in solar output, but also noted the importance of other climatic forcing mechanisms such as shifting orbital geometries, volcanic activity, changes in ocean circulation dynamics, and greenhouse gas concentrations.
**Summary of Holocene Circum-Caribbean Paleoclimate**
Developing a synoptic picture of Holocene climate change for the circum-Caribbean is not straightforward, but the recent increases in study site density throughout the region make possible the delineation of general qualitative patterns (Figure 1.4). Generally speaking, it appears as though much of the circum-Caribbean region was arid during the late Pleistocene and very early Holocene. Following this period, long-term increases in Northern Hemisphere solar insolation from the early to middle Holocene may have increased precipitation totals for most of the circum-Caribbean. This was likely the result of a more
northerly mean boreal summer position of the ITCZ and increased SSTs for the region. Despite the general increase in precipitation delivery to the region throughout the middle Holocene, some rather drastic climate variability is apparent in some of the high-resolution paleoclimate records.
Since ~3000 cal yr B.P., circum-Caribbean climate conditions seem to have become less coherent with increased inter- and intra-site variability, perhaps as a result of the greater numbers of high-resolution paleoclimate studies spanning this time. There does appear to be a general consensus that precipitation decreased for most areas of the Caribbean between ~3000 and 2000 cal yr B.P. and that a major drought event affected much of the region between ~1200 and 900 cal yr B.P. The drought between ~1200 and 900 cal yr B.P. is the same drought, or series of droughts, that has been associated with the collapse of the Mayan civilization.
Interpreting climate variability in the circum-Caribbean over the last millennium becomes increasingly difficult. For example, inferred precipitation records from the Cariaco Basin suggest that the most arid conditions in the entirety of the Holocene occurred between ~500 cal yr B.P. and 100 cal yr B.P., but this apparent extreme decrease in precipitation is not seen in most other circum-Caribbean records (Haug et al., 2003; Peterson and Haug, 2006). In addition, the majority of circum-Caribbean records summarized here seem to suggest a gradual decrease in precipitation spanning the last ~200 cal yr B.P., but records from Central America and Mexico seem to indicate relative increases in precipitation over this period. Developing any sort of coherent picture of climate
change over this time period will require more high-resolution records from a variety of locales throughout the circum-Caribbean.
Most researchers who analyze Holocene climate change in the circum-Caribbean have attributed the observed climate variations to variations in solar insolation and solar intensity. One widely accepted interpretation of long-term climate change in the Caribbean is the hypothesis that variations in earth-sun geometry, primarily due to the precession cycle, have been driving the general pattern of precipitation in the tropics. The general idea is that the gradual increase in solar energy receipt in the Northern Hemisphere during the early to middle Holocene, as a result of the precession cycle, gradually led to a more northerly mean boreal summer position of the ITCZ and increased SSTs in the Caribbean Sea, which increased regional precipitation. Gradually decreasing solar energy receipt for the Northern Hemisphere during the middle to late Holocene led to a more southerly mean boreal summer position of the ITCZ, decreased SSTs in the Caribbean Sea, and decreased regional precipitation over this time. This interpretation has been supported by Holocene climate records from Africa, where ITCZ dynamics operate in much the same manner (deMenocal et al., 2000), and Holocene climate records from South America, where, as expected, an inverse pattern of precipitation delivery has been noted (Abbott et al., 2000; Cross et al., 2000; Mayle et al., 2000).
On sub-millennial time scales, it appears that variations in solar output may be the driving force behind circum-Caribbean climate variability. Several researchers have reported periodicities in proxy datasets for precipitation and
temperature that correlate well with established periodicities of solar intensity (e.g. deMenocal, 2001; Hodell et al., 2001; Nyberg et al., 2001; Poore et al., 2003; Black et al., 2004), particularly at cyclicities of ~200 years. Yet, the link between solar output variability and sub-millennial Caribbean climate variability remains somewhat of a mystery. Several researchers have suggested that the relationship cannot be a direct one, but must also involve some other internal forcing mechanism or mechanisms (e.g. Turney et al., 2005). Resolving the link between solar output variability and Caribbean climate change will require more high-resolution records of Caribbean climate variability and a more in-depth understanding of the complex internal feedback mechanisms and global teleconnections affecting Caribbean climate during the Holocene.
Nyberg et al. (2001; 2002) invoked rather complex feedback mechanisms and teleconnections as potential drivers of Holocene climate variability in the Caribbean, including variations in ENSO cyclicity and intensity and the strength of deep water formation in the North Atlantic. They proposed that during extended periods of cold ENSO phases, consequent increases in atmospheric water transport into the tropical Atlantic could lead to a decrease in SSSs, which would then lead to a decrease in deep water formation in the North Atlantic as the fresh water influx is propagated northward by the North Atlantic Current system. A decrease in deep water formation in the North Atlantic would hypothetically result in a decrease in SSTs in the northeastern Caribbean as the northward flow of warm tropical Atlantic waters into the region would slow considerably. This decrease in SSTs could then lead to increased surface pressure over the
Caribbean, and in turn, a more southerly mean boreal summer position of the ITCZ and more arid conditions for the northern tropics.
Conversely, Nyberg et al. suggested that enhanced export of freshwater out of the North Atlantic region during periods of exceptionally intense or more frequent warm ENSO phases may act to increase Caribbean SSSs. As these salty Caribbean waters are advected northward by the North Atlantic Current system, they might then stabilize or intensify deep water formation in the North Atlantic and thereby maintain, or intensify, the northward advection of warm waters from the tropical Atlantic into the Caribbean Sea. This stabilization could explain the reconstructed increase in SSTs south of Puerto Rico between ~1200 and 1000 cal yr B.P. presented by Nyberg et al. (2002). In other words, Nyberg et al. hypothesized that, through a series of teleconnections, variations in the Pacific climate system could be the source of much of the climate variation observed around the Caribbean region throughout the Holocene.
The development of more high-resolution paleoclimate records and a more in-depth understanding of oceanic and atmospheric dynamics and inter-relationships will help to elucidate these interconnections and the resulting impacts on regional climate regimes. The development of more high-resolution paleoclimate records from around the world will also help us to develop a better understanding of alternate forcing mechanisms possibly responsible for sub-millennial scale variations in Caribbean climate such as volcanic activity, atmospheric aerosol concentrations, or possibly even anthropogenic activities.
No archaeological data are available for the Las Lagunas area, but the general human history of the island of Hispaniola is fairly well understood. The first humans on the island of Hispaniola are thought to have migrated from the Yucatán region of Mexico and to have arrived on the island during the Lithic Age, roughly 7,000 yr BP as dated by the Casimiroid complexes on Hispaniola (Rouse, 1992). This initial settlement was apparently followed by several migrations from the mainland areas of both Central and South America and migrations between the individual islands of the Antilles.
The Casimiroid peoples of the Dominican Republic have been further distinguished as the Barrera-Mordán people. It is thought that the Barrera-Mordán were primarily hunter-gatherers with a heavy reliance on marine resources such as shellfish. Archaeological evidence suggests that the Barrera-Mordán never settled the interior portions of the island of Hispaniola (Rouse, 1992; Petersen, 1997). Most archaeological sites on the island of Hispaniola are located along the coast. Far less archaeological data are available from the interior of the island and temporal and spatial patterns of settlement are poorly known.
Newsom and collaborators (Newsom and Wing, 2004; Newsom, 2006) have suggested that some of the early inhabitants of the island of Hispaniola may have been low-level plant cultivators, who may have managed native vegetation in home gardens or other settings. However, it was not until much later (~2000 yr BP) that people began to move into the lowland interior of the island and rely
more on agricultural subsistence systems (Rouse, 1992). This change in subsistence patterns seems to be coincident with the arrival of the Saladoid peoples from the northern coast of South America. Interactions between the Saladoid peoples and the native populations already inhabiting the Antilles led to the development of the Taino culture.
Archaeological data suggest that the Saladoid people had developed an agricultural system based on a mixture of slash and burn agriculture and the cultivation of crops in floodplains during the dry season in South America (Rouse, 1992). It seems reasonable to assume that the Saladoid peoples implemented these same agricultural techniques in the Antilles, where possible. Early European settlers of the Antilles reported the use of slash and burn techniques by the native inhabitants (Newsom, 2006).
Unlike many ancient horticultural populations in the mainland neotropics, the prehistoric occupants of the Antilles apparently did not rely heavily upon maize agriculture, but were more dependent upon root crops such as cassava and sweet potatoes (Petersen, 1997; Newsom, 2006). Archaeological evidence suggests that maize was used more as a vegetable in the diet of Saladoid people than as a staple crop (Rouse, 1992; Newsom and Deagan, 1994; Petersen, 1997; Newsom, 2006). Only scant botanical evidence presently exists for the timing of arrival and spread of maize agriculture in Hispaniola.
Some of the best evidence of maize agriculture on Hispaniola comes from the En Bas Saline archaeological site, on the northeastern coast of Haiti. Maize macroremains, including a cob fragment, cupules, and kernel fragments,
recovered from En Bas Saline have been dated to A.D. 1250. In addition to these macroremains, maize pollen dating to between A.D. 1000 and A.D. 1500 has been recovered from the sediments of Lake Miragoane (Higuera-Gundy et al., 1999) and maize pollen possibly dating back to A.D. 1020 has been reported from soil profiles near El Jobito in the Dominican Republic (Newsom, 2006). Finally, Ortega and Guerrero (1981) have speculated that maize pollen from the El Curro and Puerto Alejandro archaeological sites may have been deposited as early as 1450 B.C. In any case, it appears that maize agriculture arrived somewhat late to Hispaniola compared to many other Mesoamerican and circum-Caribbean sites.
The scant evidence of maize agriculture and consumption has led some researchers to believe that maize played a secondary role in the diet of prehistoric horticulturists on the island of Hispaniola and may suggest a relatively restricted usage pattern on the island. The exact reason, or reasons, why this protein-rich grain may have played only a secondary role in the diet of prehistoric human populations of Hispaniola remains a mystery, especially considering the few native terrestrial mammals available to prehistoric hunters. It has been proposed that maize may have been primarily consumed by the upper class and during religious ceremonies. It has also been proposed that the predominantly root crop and marine-based diet of native inhabitants provided a stable source of protein that did not require maize agriculture as a supplement (Newsom and Deagan, 1994; Newsom, 2006). This would be especially true along the coastal margins of the island.
CHAPTER 2
The Earliest Evidence of Maize Agriculture from the Interior of Hispaniola
This chapter is a slightly modified version of a manuscript that has been submitted for publication in the *Journal of Caribbean Science* by me, Sally P. Horn, Kenneth H. Orvis, and Claudia I. Mora. The manuscript, which is currently under review, includes additional information on the study area that is presented in Chapter 1 of this dissertation. My use of “we” in this chapter refers to my co-authors and myself.
**Introduction**
The prehistoric domestication of maize (*Zea mays* subsp. *mays*) and subsequent spread of maize agriculture throughout the Central, South, and North American mainlands have been topics of considerable research in recent years (e.g. Johannessen and Hastorf, 1994; Staller et al., 2006). In contrast, the introduction and subsequent spread of maize agriculture throughout the Caribbean region has received much less attention until quite recently (Newsom, 2006). This lack of attention has not been due to a lack of interest, but rather to a lack of evidence.
Despite the relative abundance of excavated archaeological sites distributed throughout the Caribbean (Newsom and Pearsall, 2003; Newsom and Wing, 2004), evidence of prehistoric maize agriculture has proven to be a rare find. In fact, only two Caribbean archaeological sites have produced prehistoric macroremains of maize. The first, the Tutu site on St. Thomas, yielded maize kernels that were dated to around A.D. 1140 (Pearsall, 2002). At the second site, En Bas Saline in Haiti, maize cobs, cupules, and kernels were recovered and dated
to around A.D. 1250 (Newsom and Deagan, 1994). Not only are Caribbean sites that contain macrobotanical evidence of prehistoric maize agriculture rare, but even when this evidence is present there is very little of it. For example, at En Bas Saline, only 34% of the plant macroremains recovered from the site were maize macroremains, with half of those remains coming from a single prehistoric pit (Newsom and Deagan, 1994; Newsom, 2006).
Microremains of maize, typically pollen grains, are more commonly found in Caribbean archaeological sites than are macroremains, but these finds are still very limited in geographic extent. Maize pollen has been reported from three coastal or near-coastal sites on Hispaniola (Figure 2.1). García Arévalo and Tavares (1978) found maize pollen in a soil pit at the El Jobito archaeological site in the southeastern Dominican Republic. Based on the presence of Ostionoid artifacts at the site, the pollen grains in the excavation were assumed to have been deposited sometime around A.D. 1020. Higuera-Gundy (1991) reported maize pollen possibly dating back to around A.D. 850 from a sediment core from Lake Miragoane, Haiti. This age was assigned using down-core extrapolation of Pb-210 dates acquired some 40 cm above the stratigraphic position of the maize pollen in the sediment core (Brenner and Binford, 1988). Later radiocarbon analyses indicated that the maize pollen was probably deposited closer to A.D. 1500 (Higuera-Gundy et al., 1999). Finally, Ortega and Guerrero (1981) reported fossil maize pollen at the El Curro archaeological site in Puerto Alejandro, Dominican Republic. Dated to 1450 B.C., the El Curro site is a preceramic,
Figure 2.1. The locations of Hispaniolan study sites containing macrofossil or microfossil evidence of maize agriculture prior to A.D. 1500. Laguna Castilla and Laguna de Salvador (italics) are the sites addressed in this study.
preagricultural site developed on a former mangrove swamp. Sediment samples from three shallow excavations were analyzed for pollen by Luis Fortuna, whose results are reported as an appendix to Ortega and Guerrero’s monograph. Fortuna found maize pollen grains at 10–20 cm depth that he interpreted as evidence of maize consumption, though not necessarily farming, at the site as early as 1450 B.C. Ortega and Guerrero, however, regarded the maize pollen (as well as some surficial ceramics and a shell amulet found at 0–7 cm) as intrusive elements introduced during later occupation of the area by agricultural peoples.
Phytolith and starch residue analyses provide additional evidence of maize agriculture in the Caribbean, but this evidence is also limited geographically. D. M. Pearsall (Newsom and Pearsall, 2003) reported maize phytoliths in the sediments of a pond near the Maisabel archaeological site in northern Puerto Rico dating back to ca. 2000 B.C. J.R. Pagán-Jiménez et al. (2005) reported starch residues indicative of maize processing from two Archaic Age sites in southern Puerto Rico (Maruca and Puerto Ferro) and a late Ceramic Age site (UTU-27) in the central mountains of Puerto Rico (Newsom, 2006).
Several hypotheses have been put forth to explain the low signals of prehistoric maize agriculture found in the Caribbean. Lee Newsom and her collaborators have been at the forefront of this issue (Newsom and Deagan, 1994; Newsom and Pearsall, 2003; Newsom and Wing, 2004; Newsom, 2006). They have suggested that one possible explanation for the low signals of prehistoric maize agriculture in the Caribbean was heavy reliance by prehistoric inhabitants of the region on root crops (Petersen, 1997), marine resources (Stokes, 1998), and
possibly small home gardens, primarily comprising species other than maize (Newsom and Wing, 2004). Excavated artifacts, animal and plant remains, and isotopic analyses of human remains support this hypothesis (e.g. Keegan, 1985; Wilson, 1990; 1997; van Klinken, 1991; Rouse, 1992; Keegan and Deniro, 1998; Stokes, 1998; Wing, 2001). Based on the spatial context of maize macroremains found around the En Bas Saline archaeological site in Haiti, Newsom and her colleagues (Newsom and Deagan, 1994; Newsom and Wing, 2004; Newsom, 2006) also suggested that maize may have been reserved for high status individuals or communal feasts, with limited daily maize consumption by the vast majority of the population. Furthermore, early accounts by Spanish colonists describe the consumption of maize as a “vegetable” by prehistoric inhabitants of the Caribbean (Newsom, 2006, page 333), suggesting that it was a dietary supplement, but never a staple in the prehistoric diet.
With the exception of the UTU-27 site in Puerto Rico, all of the aforementioned archaeological sites are located along the coastal margins of their respective islands. Much of the microfossil evidence has come from excavated soil horizons in which vertical mixing and rapid downwash can complicate pollen stratigraphies (Horn et al., 1998) and possibly also phytolith results. The small amounts of material available as either macrofossils or microfossils have made direct dating impossible; most dating has been based on archaeological context.
To further refine understanding of the introduction and spread of maize agriculture in the Caribbean, we present evidence of prehistoric maize agriculture preserved in the sediment records of two mid-elevation lakes on the Caribbean
slope of the Cordillera Central in the Dominican Republic. The sediment records from Laguna Castilla and Laguna de Salvador (Figure 2.1) contain abundant maize pollen dating back to as early as ~A.D. 1060. This find represents the earliest evidence of maize agriculture from the interior of Hispaniola, and some of the oldest and most securely dated evidence of maize agriculture from the island of Hispaniola and the Caribbean as a whole.
**Methods**
**Study Area**
Laguna Castilla (18°47'51" N, 70°52'33" W, 976 m) and Laguna de Salvador (18°47'45" N, 70°53'13" W, 990 m) are mid-elevation lakes located on the southern slope of the Cordillera Central in the Dominican Republic (Figure 2.1), about 45 km inland from the Caribbean coast, near the small community of Las Lagunas in the province of Azua. The lakes are located in an area of large hills composed of ancient marine sediments deeply incised by streams. To our knowledge, no archaeological surveys have been undertaken in the area.
**Sediment Core Retrieval and Analysis**
We collected sediment cores from near the centers of Laguna Castilla and Laguna de Salvador during field expeditions in 2002 and 2004, respectively. Sediments were retrieved in aluminum core tubes in 1 m sections using a Colinvaux-Vohnaut (C-V) locking piston corer (Colinvaux et al., 1999). The uppermost sediments from both lakes were collected with a PVC tube fitted with a rubber piston, and then extruded and sliced in 2-cm increments and the intervals
bagged individually in the field. After opening the C-V core sections in our lab, we described color (Munsell) and textural changes.
We constructed chronologies for both sediment cores by obtaining accelerator mass spectrometry (AMS) radiocarbon dates on charcoal, other organic macrofossils, and bulk sediment. We calibrated the AMS radiocarbon dates using the CALIB 5.0 computer program (Stuiver and Reimer, 1993) and the dataset of Reimer et al. (2004). We determined the weighted mean of the probability distribution of the calibrated age (Telford et al., 2004a; 2004b) for each AMS date and used this single calendar age to calculate sedimentation rates. Calendar ages for lake sediment horizons with maize pollen were calculated based on linear interpolation between dated intervals.
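To make the interpolation step concrete, the short Python sketch below (an illustration of the arithmetic only, not our actual analysis scripts; the function name is ours) reproduces the Laguna de Salvador ages reported in Table 2.3 from the bracketing weighted-mean calibrated ages in Table 2.4:

```python
# Minimal sketch of the age-model interpolation described above. The two
# anchors are the weighted-mean calibrated ages from Table 2.4:
# 204 cm = cal A.D. 1504 and 359 cm = cal A.D. 736.

def interpolate_age(depth_cm, upper, lower):
    """Linearly interpolate a calendar age (cal yr A.D.) for a sediment
    horizon from the two dated horizons that bracket it; upper and lower
    are (depth_cm, cal_yr_AD) tuples, with upper the shallower date."""
    d0, a0 = upper
    d1, a1 = lower
    frac = (depth_cm - d0) / (d1 - d0)  # fractional position between dates
    return a0 + frac * (a1 - a0)

upper, lower = (204, 1504), (359, 736)
for depth in (268, 276, 284):  # maize pollen horizons from Table 2.3
    print(depth, round(interpolate_age(depth, upper, lower)))
# Output matches Table 2.3: 268 cm -> A.D. 1187, 276 cm -> A.D. 1147,
# 284 cm -> A.D. 1108.
```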
**Pollen Analysis**
We sub-sampled the sediment cores from Laguna Castilla and Laguna de Salvador for pollen analysis at varying depth intervals (4 to 16 cm), chemically processed the samples using standard techniques (Berglund, 1986; Faegri and Iverson, 1989), added *Lycopodium* tablets as controls (Stockmarr, 1971), and mounted the pollen residues on microscope slides in silicone oil (Appendix A). We scanned at least two slides from each sample level in their entirety at low (100x) magnification (Horn, 2006) searching for maize pollen.
We identified as maize pollen all Poaceae pollen grains with a diameter greater than 62 µm. This identification criterion is based on the work of Whitehead and Langham (1965), who measured and compared the grain and pore diameters of pollen from 12 races of cultivated maize, 10 races of teosinte, and two races of grass from the genus *Tripsacum*, all mounted in silicone oil. Several researchers have documented the potential influence of mounting media, especially glycerine jelly, on the sizes of maize pollen grains (Ludlow-Wiechers et al., 1983; Sluyter, 1997), but our use of silicone oil for the Castilla and Salvador samples makes possible direct comparison with the work of Whitehead and Langham (1965). Their measurements indicated that pollen grains produced by modern cultivars of *Zea mays* subsp. *mays* and mounted in silicone oil ranged in diameter from 58 µm to 98.6 µm (Whitehead and Langham, 1965); Ludlow-Wiechers et al. (1983) later reported some Mexican races of maize to have pollen grains as large as 120 µm in diameter. It is important to acknowledge that there is overlap in the sizes of pollen grains produced by cultivated maize and those produced by wild maize or teosinte (*Zea mays* subsp. *parviglumis* H. H. Iltis & Doebley, *Zea perennis* (Hitchc.) Reeves & Mangelsd., and other *Zea* L. spp.; taxonomy follows Sluyter, 1997). Measurements of teosinte pollen grains mounted in silicone oil range in diameter from 46.4 to 87 µm (Whitehead and Langham, 1965). However, the islands of the Caribbean are outside the natural range of *Zea* and there is no evidence that teosinte was present in prehistoric times on Hispaniola.
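As a toy illustration of this size criterion (the function and constant names are ours, for exposition only; actual identifications were made at the microscope), a grain measured in silicone oil would be scored as follows:

```python
# Illustrative only: score a Poaceae grain by the >62 um diameter criterion
# of Whitehead and Langham (1965) for grains mounted in silicone oil.

MAIZE_DIAMETER_THRESHOLD_UM = 62.0

def score_poaceae_grain(diameter_um: float) -> str:
    """Classify a Poaceae pollen grain by its measured diameter."""
    if diameter_um > MAIZE_DIAMETER_THRESHOLD_UM:
        return "Zea mays subsp. mays (cultivated maize)"
    return "other Poaceae"

print(score_poaceae_grain(74.4))  # a grain size from Table 2.1 -> maize
```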
**Results**
A total of 20 down-core pollen samples from the Laguna Castilla sediment core contained prehistoric maize pollen grains (Table 2.1). Most samples from this pre-modern maize interval were extracted from clay-rich sediments with fine laminations suggesting minimal vertical mixing of sediments and associated
Table 2.1. Stratigraphic position, abundance, and dimensions of maize pollen grains from the Laguna Castilla pre-modern maize interval.
| Depth<sup>a</sup> (cm) | Approximate Age<sup>b</sup> (cal yr A.D.) | Maize Pollen Grains (n) | Grain Size Range (μm) | Annulus Size Range (μm) | Average Grain Size (μm) | Average Annulus Size (μm) |
|------------|-------------------------------|-------------------------|-----------------------|------------------------|------------------------|--------------------------|
| 350 | 1271 | 4 | 74.4–79.4 | 13.6–16.1 | 75.0 | 14.6 |
| 366 | 1253 | 9 | 69.4–79.4 | 12.4–14.9 | 74.4 | 13.8 |
| 382 | 1236 | 12 | 62.0–86.8 | 12.4–16.1 | 72.3 | 13.8 |
| 398 | 1219 | 17 | 66.9–81.8 | 12.4–14.9 | 73.1 | 13.6 |
| 414 | 1201 | 8 | 64.5–74.0 | 12.4–17.4 | 69.7 | 14.3 |
| 430 | 1184 | 2 | 71.9 (2) | 13.6–14.9 | 71.9 | 14.3 |
| 446 | 1166 | 3 | 66.9–74.4 | 13.6–16.1 | 69.4 | 14.9 |
| 462 | 1149 | 3 | 69.4–76.9 | 13.6–14.9 | 71.9 | 14.5 |
| 470 | 1132 | 5 | 74.4–76.9 | 13.6–14.9 | 75.9 | 14.6 |
| 474 | 1123 | 2 | 76.9–84.3 | 13.6–14.9 | 80.6 | 14.2 |
| 478 | 1119 | 5 | 66.9–74.4 | 12.4–16.1 | 71.4 | 14.1 |
| 482 | 1114 | 4 | 71.9–76.9 | 12.4–14.9 | 74.1 | 14.3 |
| 486 | 1110 | 6 | 66.9–76.9 | 13.6–14.9 | 70.3 | 14.3 |
| 490 | 1105 | 1 | 70.7 | 13.6 | 70.7 | 13.6 |
| 502 | 1092 | 1 | 70.7 | 13.6 | 70.7 | 13.6 |
| 506 | 1088 | 2 | 74.4–76.9 | 13.6–14.9 | 75.6 | 14.3 |
| 510 | 1084 | 2 | 69.4 (2) | 13.6–14.9 | 69.4 | 14.3 |
| 514 | 1079 | 2 | 70.7–71.9 | 13.6 (2) | 71.3 | 13.6 |
| 522 | 1071 | 3 | 74.4–76.9 | 13.6–14.9 | 76.1 | 14.1 |
| 530 | 1062 | 1 | 71.9 | 13.6 | 71.9 | 13.6 |
<sup>a</sup>Depth refers to the depth below the sediment-water interface.
<sup>b</sup>Ages were estimated using linear interpolation between the calibrated radiocarbon dates bracketing this stratigraphic section.
pollen grains (Figure 2.2). Linear interpolation between the calibrated ages bracketing this interval of maize pollen deposition indicates that the grains range in age from cal yr A.D. 1062 to cal yr A.D. 1271 (Tables 2.1 and 2.2). Our relatively coarse pollen sampling of the Laguna de Salvador sediment core has resulted in the discovery of fewer maize pollen grains at this site; however, the timing of maize pollen deposition was similar. Three pollen samples from Laguna de Salvador contained prehistoric maize pollen with interpolated ages ranging from cal yr A.D. 1108 to cal yr A.D. 1187 (Tables 2.3 and 2.4). The grain and annulus diameters of the maize grains (Tables 2.1 and 2.3) from both sediment profiles are well within the expected size ranges of pollen grains of modern cultivars of *Zea mays* subsp. *mays* (Whitehead and Langham, 1965) and of prehistoric maize pollen from the Central American mainland (Horn, 2006).
**Discussion and Conclusions**
The palynological evidence of prehistoric maize agriculture presented here represents some of the earliest documented evidence of maize agriculture from the island of Hispaniola (Figure 2.1), and the only evidence from the interior of the island. The maize pollen grains preserved in Laguna Castilla also represent some of the most securely dated evidence of early maize agriculture in Hispaniola and the entire Caribbean region. Three aspects of our findings give us great confidence in our dating. First, we have obtained AMS radiocarbon dates on organic sediments positioned only 6 cm deeper than the lowest stratigraphic position of prehistoric maize pollen, and only 20 cm higher than the upper stratigraphic boundary of the pre-modern maize pollen interval in the Laguna
Figure 2.2. Stratigraphy of the Laguna Castilla sediment core and the stratigraphic position of pollen samples within the pre-modern maize interval. Filled circles represent pollen samples containing cultivated maize pollen.
Table 2.2. Radiocarbon determinations and calibrations for Laguna Castilla.
| Lab Number<sup>a</sup> | Depth (cm) | δ<sup>13</sup>C (‰) | Uncalibrated <sup>14</sup>C Age (<sup>14</sup>C yr BP) | Calibrated Age Range<sup>b</sup> ± 2 σ | Area Under Probability Curve | Weighted Mean<sup>c</sup> |
|------------------------|------------|---------------------|-------------------------------------------------|--------------------------------------|-------------------------------|--------------------------|
| β-196817 | 66–68 | −25.6 | 103.9% of Modern | cal A.D. 1951.5 – 1954.5* | 1.000* | cal A.D. 1953* |
| β-204702 | 204–207 | −24.5 | 110 ± 40 | cal A.D. 1951–1954 | 0.008 | cal A.D. 1817 |
| | | | | cal A.D. 1800–1940 | 0.651 | |
| | | | | cal A.D. 1772–1776 | 0.007 | |
| | | | | cal A.D. 1677–1765 | 0.333 | |
| β-196818 | 329–331 | −25.9 | 730 ± 40 | cal A.D. 1365–1383 | 0.063 | cal A.D. 1276 |
| | | | | cal A.D. 1218–1303 | 0.937 | |
| β-171499 | 536–537 | −24.2 | 1000 ± 40 | | | |
| β-192641 | 651–653 | −23.8 | 2190 ± 40 | 127–120 cal B.C. | 0.009 | 267 cal B.C. |
| | | | | 382–163 cal B.C. | 0.991 | |
| β-171500 | 724–725 | −23.2 | 2860 ± 40 | 1130–912 cal B.C. | 0.970 | 1033 cal B.C. |
| | | | | 1159–1143 cal B.C. | 0.016 | |
| | | | | 1190–1177 cal B.C. | 0.014 | |
| β-171501 | 758–761 | −25.3 | 2470 ± 40 | 469–413 cal B.C. | 0.118 | 602 cal B.C. |
| | | | | 673–478 cal B.C. | 0.600 | |
| | | | | 763–678 cal B.C. | 0.282 | |
<sup>a</sup>Analyses were performed by Beta Analytic Laboratory. Samples β-196817, β-196818, and β-171499 consisted of bulk sediment; samples β-192641 and β-204702 consisted of a mixture of plant macroremains, insect parts, and charcoal; sample β-171501 consisted of plant macroremains; and sample β-171500 consisted of charcoal.
<sup>b</sup>Calibrations were calculated using Calib 5.0 (Stuiver and Reimer, 1993) and the dataset of Reimer et al. (2004).
<sup>c</sup>Weighted mean of the calibrated age probability distribution curve.
*Dates were calibrated using the CALIBomb program (Reimer et al., 2004).
Table 2.3. Stratigraphic position, abundance, and dimensions of maize pollen grains from Laguna de Salvador.
| Depth\(^a\) (cm) | Approximate Age\(^b\) (cal yr A.D.) | Maize Pollen Grains (n) | Grain Size Range (\(\mu m\)) | Annulus Size Range (\(\mu m\)) | Average Grain Size (\(\mu m\)) | Average Annulus Size (\(\mu m\)) |
|------------------|-------------------------------------|-------------------------|-------------------------------|-------------------------------|-------------------------------|-------------------------------|
| 268 | 1187 | 1 | 69.4 | 14.9 | 69.4 | 14.9 |
| 276 | 1147 | 1 | 76.9 | 14.9 | 76.9 | 14.9 |
| 284 | 1108 | 5 | 71.9–79.4 | 13.6–14.9 | 75.9 | 14.6 |
\(^a\)Depth refers to the depth below the sediment-water interface.
\(^b\)Ages were estimated using linear interpolation between the calibrated radiocarbon dates bracketing this stratigraphic section.
Table 2.4. Radiocarbon determinations and calibrations for Laguna de Salvador.
| Lab Number\(^a\) | Depth (cm) | \(\delta^{13}C\) (‰) | Uncalibrated \(^{14}C\) Age (\(^{14}C\) yr BP) | Calibrated Age Range\(^b\) ± 2 σ | Area Under Curve | Weighted Mean\(^c\) |
|------------------|------------|------------------------|-----------------------------------------------|---------------------------------|-----------------|-------------------|
| β-219035 | 76.5 | −25.7 | 100 ± 40 | cal A.D. 1951–1954 | 0.013 | cal A.D. 1825 |
| | | | | cal A.D. 1801–1939 | 0.673 | |
| | | | | cal A.D. 1680–1763 | 0.315 | |
| β-204696 | 204 | −27.5 | 410 ± 40 | cal A.D. 1558–1631 | 0.243 | cal A.D. 1504 |
| | | | | cal A.D. 1427–1524 | 0.757 | |
| β-196821 | 359 | −29.8 | 1280 ± 40 | cal A.D. 841–861 | 0.028 | cal A.D. 736 |
| | | | | cal A.D. 787–824 | 0.065 | |
| | | | | cal A.D. 658–783 | 0.907 | |
| β-192645 | 504 | −25.1 | 2060 ± 40 | 183 cal B.C.–cal A.D. 24 | 1.000 | 79 cal B.C. |
\(^a\)Analyses were performed by Beta Analytic Laboratory. Samples β-219035, β-204696, and β-196821 consisted of wood fragments; sample β-192645 consisted of charcoal.
\(^b\)Calibrations were calculated using Calib 5.0 (Stuiver and Reimer, 1993) and the dataset of Reimer et al. (2004).
\(^c\)Weighted mean of the probability distribution of the calibrated age (Telford et al., 2004b).
Castilla sediment record (Figure 2.2), such that our interpolated ages are very close to directly dated horizons. Second, the sedimentation rate in Laguna Castilla during this period was quite high (0.92 cm/yr), which allows for relatively precise interpolation of dates. Third, most of the maize pollen grains are preserved within finely laminated sediments, and the oldest maize pollen grains are preserved in organic silts a few cm below the laminated sediments (Figure 2.2). This stratigraphic context makes it highly unlikely that any vertical mixing of the sediments and their associated microfossils took place. With the exception of Lake Miragoane, the secure stratigraphy of the Castilla pollen grains contrasts with that of all other prehistoric maize sites on Hispaniola. That evidence has come from excavations in soil that may have been prone to vertical mixing or downwashing of younger microfossils and for which dating has primarily relied on ceramic styles and on limited radiocarbon analyses not closely tied to the pollen spectra.
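For scale, the second of these points can be quantified directly: at a deposition rate of 0.92 cm/yr, the 6 cm separating the deepest maize pollen from the nearest underlying date corresponds to only about

$$\frac{6\ \mathrm{cm}}{0.92\ \mathrm{cm/yr}} \approx 6.5\ \mathrm{yr},$$

and even the 20 cm above the maize interval corresponds to only ~22 years of interpolated time, far narrower than the 2σ ranges of the calibrated radiocarbon ages themselves (Table 2.2).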
The lack of archaeological research around Laguna Castilla and Laguna de Salvador limits our ability to interpret the archaeological context of our pollen results. However, the radiocarbon dates place the interval of pre-modern maize pollen deposition in both lakes within the Ostionoid archaeological period (~A.D. 500 to A.D. 1500; Wilson, 1997). This is a period that has been associated with an intensification of horticultural production throughout Hispaniola as indicated by increased use of agricultural terraces (Ortiz Aguilu et al., 1991) and by the construction of small earthen mounds (conucos) associated with more intensive agricultural production (Rouse, 1992).
Although the inland location of Laguna Castilla and Laguna de Salvador does not preclude the possibility that aquatic and marine resources were an important part of the prehistoric diet in this area (Wilson, 1993), it is conceivable that the approximately 45-km distance from the Caribbean coast may have led to a greater local dependence on terrestrial food sources, including cultivated maize, than is apparent at contemporaneous coastal sites on Hispaniola (Newsom 2006). The hypothesis that interior populations in the Caribbean were more dependent on terrestrial food sources, including maize, than coastal populations was advanced by Stokes (1998) and is supported by her isotopic analyses of human remains collected from the Paso del Indio site located in the interior of Puerto Rico.
Our discovery of prehistoric maize pollen grains in the sediments of Laguna Castilla and Laguna de Salvador, together with starch residue and phytolith evidence of prehistoric maize cultivation (Newsom, 2006) and isotopic evidence of maize consumption from the interior of Puerto Rico (Stokes, 1998), emphasizes the need for further archaeological research into the importance of maize agriculture in the interior of Hispaniola and other Caribbean islands. More archaeological investigations of inland sites on the Greater Antilles would improve our understanding of the geography and history of maize cultivation in the prehistoric Caribbean and its role in the evolving ethnobotany of the region.
CHAPTER 3
Sensitivity of Sedimentary Stable Carbon Isotopes in a Small Neotropical Lake to Prehistoric Forest Clearance and Maize Agriculture
This chapter is in preparation for submission to the *Journal of Paleolimnology* by me, Claudia I. Mora, Sally P. Horn, and Kenneth H. Orvis. The submitted manuscript will include additional information on the study area that is presented in Chapter 1 of this dissertation. My use of “we” in this chapter refers to my co-authors and myself.
**Introduction**
Much of what we currently know about the environmental impacts of prehistoric human populations has come from lake sediment records of paleoenvironmental change. Lake sediment records from around the world have been used to document a variety of prehistoric human activities including deforestation (Burney et al., 1994; Islebe et al., 1996; Northrop and Horn, 1996; Goman and Byrne, 1998; Clement and Horn, 2001; Rosenmeier et al., 2002a; Rosenmeier et al., 2002b; Fisher et al., 2003; Wahl et al., 2006), soil degradation (Ohara et al., 1994; Jacob and Hallmark, 1996; Beach, 1998; Conserva and Byrne, 2002; Lucke et al., 2003), water pollution (Oldfield et al., 2003; Davies et al., 2004; Ekdahl et al., 2004), and agriculture (Sluyter, 1997b; Leyden et al., 1998; Dull, 2006; Horn, 2006). The majority of these studies have taken a qualitative approach, documenting the occurrence and timing, but not the spatial scale, of these activities.
In a recent study, Lane et al. (2004) documented prehistoric forest clearance and crop cultivation in the neotropics using the stable carbon isotope
composition of total organic carbon ($\delta^{13}$C$_{\text{TOC}}$) in lake sediments. Subsequently, Lane et al. (in press) proposed that relative shifts in the $\delta^{13}$C$_{\text{TOC}}$ values of lake sediments could be used to compare the relative spatial scale of prehistoric forest clearance and agriculture at a particular site through time. These studies raised the possibility of quantitatively reconstructing the spatial scale of these activities at high temporal resolutions using the stable carbon isotope proxy.
Lane et al. (2004) and Lane et al. (in press) provided a full overview of the theoretical basis behind the $\delta^{13}$C$_{\text{TOC}}$ proxy record of prehistoric forest clearance and agriculture. This proxy is effective because maize (*Zea mays* subsp. *mays*) and a few other tropical cultigens, as well as many associated agricultural weeds, use the C$_4$ photosynthetic pathway, whereas mesic neotropical forest ecosystems are dominated by trees and shrubs that use the C$_3$ photosynthetic pathway. Plants that use the C$_3$ photosynthetic pathway produce tissues with $\delta^{13}$C values ranging between −35‰ and −20‰ V-PDB, but plants that use the C$_4$ photosynthetic pathway produce tissues with $\delta^{13}$C values ranging between −14‰ and −10‰ V-PDB (Bender, 1971; O’Leary, 1981). After the deforestation of a C$_3$-dominated ecosystem, such as a neotropical forest, and its replacement by C$_4$ cultigens and weeds, a shift occurs in the isotopic composition of the organic carbon produced by the ecosystem as a whole. This shift in carbon isotope composition can be recorded in lake sediments as long as carbon from the ecosystem is input to those lake sediments (Aucour et al., 1999; Huang et al., 2001; Street-Perrott et al., 1997; 2004).
The detection of prehistoric forest clearance and agriculture using stable carbon isotopes only allows assessment of the relative importance of these activities through time. To develop a more quantitative assessment of the impacts of prehistoric human populations on the environment, based on the isotope proxy, it is necessary to develop a more in-depth understanding of how the sedimentary $\delta^{13}C_{TOC}$ record responds to numerous and complex watershed variables. Two critical variables are variations in the abundance of C$_4$ plants, most notably maize, in the watershed and variations in the contribution of allochthonous carbon to the lake sediments. In this study, we attempt to assess the influence of these variables on $\delta^{13}C$ values of lake sediments from Laguna Castilla, a small lake in the Dominican Republic, over a period of ~300 years using a multi-proxy approach at high temporal resolution.
The most well-established technique for reconstructing the abundance of C$_4$ plants within a watershed is a mass balance approach in which the relative contributions of C$_3$ and C$_4$ plants to the bulk carbon isotope compositions of lake sediments and soils are estimated based on their end-member isotopic compositions. However, this is the very proxy we are seeking to study. An alternative approach to establishing the relative C$_4$ plant abundance through time in the mesic neotropics is to use the maize pollen concentration of sediments. Because these forest ecosystems are dominated by C$_3$ plants, any increase in C$_4$ plants within the ecosystem is most likely linked to agricultural activities and should be proportional in scale to agriculture within the watershed. The exclusive use of maize pollen may underestimate the total abundance of C$_4$ plants in the watershed, but it is not possible to distinguish the pollen of other C$_4$ species from that of C$_3$ species in the same families.
Maize pollen grains preserved in lake sediments have been previously used as an indicator of prehistoric agriculture (cf. Staller et al., 2006). Pollen produced by \textit{Zea mays} subsp. \textit{mays}, as well as several other species in the genus \textit{Zea}, is relatively large, and has a very high settling velocity and short dispersal distance (Raynor et al., 1972; Luna et al., 2001; Aylor et al., 2005). Based on this short dispersal distance, some researchers have conjectured that the presence of maize pollen in lake sediments may require that the plants be grown on the very shore of the lake (Islebe et al., 1996). The short dispersal distance is somewhat problematic in the context of reconstructing the abundance of maize at the landscape scale because the cultigen is typically poorly represented in pollen assemblages. However, the small size of the Laguna Castilla watershed (see Study Site description below) suggests that any maize cultivation in the watershed occurred fairly close to the lake itself. In addition, the relatively high abundance of maize grains in the Laguna Castilla sediment record (Chapter 2) should make it possible to reliably estimate maize pollen concentrations. Variations in the abundance of maize pollen are thus hypothesized to track, at least semi-quantitatively, changes in the relative abundance of maize and closely associated agricultural weeds in the watershed through time.
The contribution of sediments that originate from allochthonous sources can be assessed using a variety of techniques. In this study, we use mineral influx as a proxy of allochthonous sediment delivery. While some of the mineral
components of lake sediments can originate from autochthonous sources (e.g., diatoms, ostracods, gastropods, charophytes, marl, sponge spicules), the mineral component of sediments with low calcite or aragonite concentrations, such as those analyzed here, primarily originates from the physical and chemical breakdown of surrounding rocks and soils and subsequent delivery of that material to the lake through erosion and sediment transport. Therefore, we hypothesize that the mineral influx into Laguna Castilla can be used as a proxy of the relative importance of allochthonous sediment delivery through time.
By comparing variations in sedimentary $\delta^{13}$C$_{\text{TOC}}$ values, maize pollen concentrations, and mineral influx in the Laguna Castilla sediment record, it should be possible to assess the sensitivity of lake sediment $\delta^{13}$C$_{\text{TOC}}$ values to variations in the abundance of C$_4$ cultigens and associated weeds on the surrounding landscape, as well as variations in allochthonous sediment delivery. In addition, by conducting these analyses at a high resolution (approximately 5–20 years) it should also be possible to assess the temporal sensitivity of sedimentary $\delta^{13}$C$_{\text{TOC}}$ values to variations in these variables. Because agricultural activities are typically based on an annual cycle of field clearance and crop cultivation, it is essential that the $\delta^{13}$C$_{\text{TOC}}$ record be responsive at a high temporal resolution if we hope to use this proxy to quantitatively reconstruct these past activities.
**Study Site**
Laguna Castilla (18°47'51" N, 70°52'33" W, 976 m) is located on the Caribbean slope of the Cordillera Central in the Dominican Republic (Figure 3.1),
Figure 3.1. Location of the Dominican Republic and Laguna Castilla.
near the small community of Las Lagunas in the province of Azua. Based on aerial photographs and topographic maps of the area, the Laguna Castilla watershed appears to be less than 25 ha in total area (Figure 3.2). Laguna Castilla itself is a fairly small lake with a surface area of approximately 1.5 ha.
The landscape around Laguna Castilla is currently being used for a wide range of activities including cattle and goat ranching and agriculture (Figure 3.2). Humans living in the area today cultivate a variety of crops including beans, corn, and coffee. Vegetation of nearby areas with similar climate conditions, but with less human impact, has been classified as lower montane moist forest (i.e. the Holdridge life zone designation; Tolentino and Peña, 1998). Lower montane moist forest in the Dominican Republic is a C₃-dominated ecosystem consisting of pines (*Pinus occidentalis* Schwartz) mixed with a wide variety of evergreen and deciduous broadleaved trees and shrubs (Liogier, 1981).
**Methods**
**Sediment Core Recovery and Chronology**
We collected a 7.8 m sediment core from near the center of Laguna Castilla in 2002. Sediments from 40 cm below the sediment/water interface downward were retrieved in 1 m sections in aluminum core tubes using a Colinvaux-Vohnaut (C-V) locking piston corer (Colinvaux et al., 1999). After opening the C-V core sections in our lab, we photographed and described the stratigraphy of the core. In this study, we focus on sediments from 3 m to 6 m below the sediment-water interface, which span the period of prehistoric human occupation of the watershed (Chapter 3).
Figure 3.2. Photograph of Laguna Castilla and the surrounding landscape. Note the small size of the Laguna Castilla watershed (highlighted in white). The shore of Laguna Castilla has been highlighted in black. For scale, the width of Laguna Castilla is approximately 100 m.
We constructed a chronology for the Laguna Castilla sediment core by obtaining accelerator mass spectrometry (AMS) radiocarbon dates from Beta Analytic Laboratory, Inc., in Miami, Florida. Radiocarbon determinations were made on a variety of organic materials including charcoal, non-carbonized organic macrofossils, and bulk sediment. We calibrated the AMS radiocarbon dates using the CALIB 5.0 computer program (Stuiver and Reimer, 1993) and the dataset of Reimer et al. (2004). To calculate sedimentation rates, we assigned each date a single calibrated age, taken as the weighted mean of the calibrated age probability distribution (Telford et al., 2004a; b). We calculated calendar ages for lake sediment horizons located between the positions of radiocarbon-dated materials using linear interpolation.
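As a rough illustration of this age-model construction, the following Python sketch computes a weighted-mean age from a calibrated probability distribution and interpolates ages linearly between dated horizons. The dated depths and weighted-mean ages are taken from Table 3.1, but the toy probability distribution (calibrated ranges collapsed to their midpoints) and the code itself are simplified stand-ins for the actual CALIB workflow.

```python
import numpy as np

def weighted_mean_age(cal_ages, probabilities):
    """Weighted mean of a calibrated age probability distribution."""
    cal_ages = np.asarray(cal_ages, dtype=float)
    p = np.asarray(probabilities, dtype=float)
    return float(np.sum(cal_ages * p) / np.sum(p))

# Toy distribution: two calibrated ranges with areas ~0.94 / ~0.06,
# collapsed to their midpoints purely for illustration.
print(round(weighted_mean_age([690.0, 576.0], [0.937, 0.063])))  # ~683

# Point ages for dated horizons (depth in cm, weighted mean in cal yr B.P.),
# from Table 3.1; undated horizons are interpolated linearly between them.
dated_depths = np.array([330.0, 536.5, 652.0])
dated_ages = np.array([674.0, 899.0, 2217.0])

def age_at_depth(depth_cm: float) -> float:
    """Linear interpolation between dated horizons."""
    return float(np.interp(depth_cm, dated_depths, dated_ages))

print(round(age_at_depth(430.0)))  # ~783 cal yr B.P.
```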
**Laboratory Analyses**
**Stable carbon isotope analysis**
We measured the stable carbon isotope ratios of bulk sedimentary organic carbon ($\delta^{13}$C$_{\text{TOC}}$) from Laguna Castilla at intervals of 4 to 16 cm. We removed carbonates from the sediment samples by reacting the sediment with 10% HCl. Following neutralization with distilled water, we dried the sediment overnight at 50 °C, removed any large organic macrofossils, and ground the dried samples to a fine powder with a mortar and pestle to ensure the samples were homogenized and representative of the organic carbon fraction of the bulk sediment. We then combusted the sediment samples at 800 °C under vacuum in quartz tubes in the presence of 500 mg of copper, 500 mg of copper oxide, and a small platinum wire. Next, we cryogenically purified the rendered CO$_2$ and analyzed its carbon
isotope composition using a dual-inlet Finnigan MAT Delta-plus mass spectrometer at the University of Tennessee. We report all carbon isotopic compositions in standard $\delta$-per mil notation relative to the Vienna-Pee Dee belemnite (V-PDB) marine-carbonate standard, where:
$$\delta^{13}C \text{ (per mil)} = 1000 \left[ \frac{R_{\text{sample}}}{R_{\text{standard}}} - 1 \right],$$
where $R = ^{13}C/^{12}C$.
Repeated analyses of the USGS 24 graphite standard indicate that the precision of these offline carbon isotopic determinations is better than ±0.05‰ V-PDB.
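For readers less familiar with δ-notation, the following minimal sketch simply evaluates the equation above. The value used for the V-PDB $^{13}$C/$^{12}$C ratio is a commonly cited literature value included only for illustration; it plays no role in our measurements, which are calibrated against standards on the instrument.

```python
# One commonly cited value for the 13C/12C ratio of the V-PDB standard;
# included purely for illustration.
R_VPDB = 0.011180

def delta13c(r_sample: float, r_standard: float = R_VPDB) -> float:
    """delta-13C in per mil, following the definition above."""
    return 1000.0 * (r_sample / r_standard - 1.0)

# A sample ratio 2.5% lower than the standard gives -25 per mil.
print(round(delta13c(R_VPDB * 0.975), 1))  # -> -25.0
```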
**Maize pollen concentration**
A detailed explanation of our pollen sampling and processing procedures, along with the criteria used to identify maize (*Zea mays* subsp. *mays*) pollen, was presented earlier (Chapter 2). In short, we sub-sampled 0.5 cc of sediment from the Laguna Castilla core for pollen analysis at the same depth intervals subsampled for isotope analysis. We prepared and scanned at least two slides from each sample level for maize pollen.
We calculated the concentration of maize pollen grains (grains/cm$^3$) in each 0.5 cc sample using the following equation:
$$\text{Maize grain concentration (grains/cm}^3\text{)} = \frac{\text{Controls}_{\text{sample}} \times \text{Maize}_{\text{slides}}}{\text{Controls}_{\text{slides}}} \times 2$$
where Controls$_{\text{sample}}$ represents the total number of controls (*Lycopodium* spores) added to the 0.5 cc sample (approximately 13,911 *Lycopodium* spores), Maize$_{\text{slides}}$ represents the total number of *Zea mays* subsp. *mays* pollen grains counted on two slides, and Controls$_{\text{slides}}$ represents the number of controls on two slides. The
number of controls on two slides was estimated based on the extrapolation of the number of controls counted during full pollen counts that covered a known area of the slides.
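A minimal sketch of this calculation, with hypothetical slide counts, is given below. The constant 13,911 is the approximate number of *Lycopodium* control spores per sample stated above, and the factor of 2 converts a 0.5 cc sample to a per-cm$^3$ concentration; the example counts are invented for illustration.

```python
# Lycopodium control spores added to each 0.5 cc sub-sample (stated above).
CONTROLS_PER_SAMPLE = 13911

def maize_concentration(maize_on_slides: int, controls_on_slides: float) -> float:
    """Maize pollen grains per cm^3 of sediment, per the equation above."""
    grains_per_sample = CONTROLS_PER_SAMPLE * maize_on_slides / controls_on_slides
    return grains_per_sample * 2.0  # 0.5 cc sample -> grains per cm^3

# Hypothetical counts: 3 maize grains against ~1400 controls on two slides.
print(round(maize_concentration(3, 1400.0)))  # ~60 grains/cm^3
```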
**Mineral influx analysis**
We took duplicate 0.5 cc sediment sub-samples from the Laguna Castilla sediment core at the same intervals as those taken for isotope and pollen analysis. We combusted the pre-weighed sub-samples at 550 °C for one hour to estimate the organic carbon content of the sediment and 1000 °C for one hour to estimate the carbonate content of the sediment (Dean, 1974). We assumed that any material remaining after the 550 °C burn was mineral. We then calculated the mineral influx for each sample using the following equation:
\[
\text{Mineral Influx (mg/cm}^2\text{/yr)} = \text{Mineral Bulk Density (mg/cm}^3\text{)} \times \text{Sedimentation Rate (cm/yr)}
\]
We calculated the sedimentation rate using linear interpolation of the weighted means of the probability distribution of the calibrated radiocarbon ages bracketing the positions of the two adjacent sub-samples.
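The sketch below illustrates the calculation with hypothetical sample masses; multiplying bulk density by sedimentation rate gives the stated units (mg/cm$^3$ × cm/yr = mg/cm$^2$/yr). The sedimentation rate here is interpolated between two dated horizons from Table 3.1, mirroring the approach just described.

```python
def mineral_influx(mineral_mass_mg: float, sample_volume_cm3: float,
                   sed_rate_cm_per_yr: float) -> float:
    """Mineral influx in mg/cm^2/yr = bulk density (mg/cm^3) x rate (cm/yr)."""
    bulk_density = mineral_mass_mg / sample_volume_cm3
    return bulk_density * sed_rate_cm_per_yr

def sedimentation_rate(depth_upper_cm: float, depth_lower_cm: float,
                       age_upper: float, age_lower: float) -> float:
    """Sedimentation rate (cm/yr) between two dated horizons (cal yr B.P.)."""
    return (depth_lower_cm - depth_upper_cm) / (age_lower - age_upper)

# Rate between the 330 cm (674 cal yr B.P.) and 536.5 cm (899 cal yr B.P.)
# horizons of Table 3.1: ~0.92 cm/yr.
rate = sedimentation_rate(330.0, 536.5, 674.0, 899.0)
# 150 mg of post-ignition mineral residue in a 0.5 cc sample -> 300 mg/cm^3.
print(round(mineral_influx(150.0, 0.5, rate), 1))  # ~275.3 mg/cm^2/yr
```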
**Results**
**Sediment Stratigraphy and Chronology**
Between 5.2 and 6.0 m, the Laguna Castilla sediments consist of organic silts and clays with fine fibrous organics (Figure 3.3). Above this, around 5.2 m, a rather abrupt transition to faintly banded organic and mineral clays occurs. The lamination of these sediments indicates minimal vertical mixing of
Figure 3.3. Stratigraphy and radiocarbon chronology of the entire Laguna Castilla sediment core. This study focuses on the sediments located between 300 and 600 cm (dashed line). The asterisks designate the section of the sediment record that contains pollen grains of prehistoric maize (*Zea mays* subsp. *mays*).
[Figure 3.3 depicts the stratigraphic column of the Laguna Castilla core from 0 to 7.8 m. Units grade downward from organic gyttja, through mineral clay, laminated organic clays and mineral silts, peat, and organic silts with coarse organics and sparse sands, to basal gravel and sand; uncalibrated radiocarbon dates of 730 ± 40, 1000 ± 40, 2190 ± 40, and 2860 ± 40 $^{14}$C yr B.P. are marked at their sampled depths.]
the sediments and associated fossils. Based on the appearance of maize pollen in the sediment record around this time, we hypothesize that this transition represents the initial occupation of the Laguna Castilla watershed by prehistoric humans and a resulting increase in erosion and sediment delivery to the lake. These sediments have very low carbonate contents, averaging around 2% by mass. At approximately 4.1 m depth, the sediments become mineral rich. Within the overlying 50 cm, the proportion of mineral content to organic gyttja gradually decreases. At 325 cm depth, an abrupt transition occurs from gyttja to a relatively small lens (5.5 cm) of mineral clay. Based on the disappearance of maize pollen from the sediment record at this time, we hypothesize that these sediments coincide with the period of prehistoric human abandonment of the watershed. Following deposition of this clay lens, total organic content and the abundance of fine fibrous organics increase.
The radiocarbon chronology for Laguna Castilla includes one date reversal near the bottom of the core (Table 3.1). We chose to reject this date because it appears that the organic material dated may have been root material that grew down through the Castilla sediments and is anomalously young compared to the surrounding sediment. Radiocarbon sample β-171500 consisted of charcoal and is likely to be a more reliable date for estimating the timing of the formation of Laguna Castilla (Table 3.1). Based on this date, it appears that Laguna Castilla formed around 2980 cal yr B.P. Sedimentation rates in Laguna Castilla vary between 0.09 cm/yr and 1.32 cm/yr, with the highest sedimentation rates
Table 3.1. Radiocarbon determinations and calibrations for Laguna Castilla.
| Lab Number$^a$ | Depth (cm) | $\delta^{13}$C (‰) | Uncalibrated $^{14}$C Age ($^{14}$C yr BP) | Calibrated Age Range$^b$ ± 2 σ (cal yr B.P.) | Area Under Probability Curve | Weighted Mean$^c$ (cal yr B.P.) |
|-------------|------------|-------------------|------------------------------------------|---------------------------------------------|----------------------------|-----------------------------|
| β-196817 | 66–68 | −25.6 | 103.9% of Modern | −1.5 – −4.5* | 1.000* | −3* |
| β-204702 | 204–207 | −24.5 | 110 ± 40 | −1 – −4 | 0.008 | 133 |
| | | | | 150–10 | 0.651 | |
| | | | | 178–174 | 0.007 | |
| | | | | 273–185 | 0.333 | |
| β-196818 | 329–331 | −25.9 | 730 ± 40 | 585–567 | 0.063 | 674 |
| | | | | 732–647 | 0.937 | |
| β-171499 | 536–537 | −24.2 | 1000 ± 40 | 975–795 | 1.000 | 899 |
| β-192641 | 651–653 | −23.8 | 2190 ± 40 | 2077–2070 | 0.009 | 2217 |
| | | | | 2332–2113 | 0.991 | |
| β-171500 | 724–725 | −23.2 | 2860 ± 40 | 3080–2862 | 0.970 | 2983 |
| | | | | 3109–3093 | 0.016 | |
| | | | | 3140–3127 | 0.014 | |
| β-171501 | 758–761 | −25.3 | 2470 ± 40 | 2419–2363 | 0.118 | 2552 |
| | | | | 2623–2428 | 0.600 | |
| | | | | 2713–2628 | 0.282 | |
$^a$Analyses were performed by Beta Analytic Laboratory. Samples β-196817, β-196818, and β-171499 consisted of bulk sediment; samples β-192641 and β-204702 consisted of a mixture of plant macroremains, insect parts, and charcoal; sample β-171501 consisted of plant macroremains; and sample β-171500 consisted of charcoal.
$^b$Calibrations were calculated using Calib 5.0 (Stuiver and Reimer, 1993) and the dataset of Reimer et al. (2004).
$^c$Weighted mean of the calibrated age probability distribution curve.
*Dates were calibrated using the CALIBomb program (Reimer et al., 2004).
occurring during periods of prehistoric and modern human occupation (Chapter 4; Figure 3.4).
**Stable Carbon Isotopes, Maize Pollen Concentrations, and Mineral Influx**
We have delineated six zones (A–F) in the Castilla sediment section based on the interrelationships of $\delta^{13}$C$_{\text{TOC}}$, maize pollen concentrations, and mineral influx (Figure 3.5).
**Zone F (600–535 cm)**
Zone F represents a period when conditions in and around Laguna Castilla favored low mineral influx (2–8 mg/cm$^2$/yr). No maize pollen is present, and stable carbon isotope values increase gradually from $-27$‰ to $-24$‰, with the exception of a large negative $\delta^{13}$C$_{\text{TOC}}$ excursion around 570 cm.
**Zone E (535–460 cm)**
Zone E contains the first appearance of maize in the Laguna Castilla watershed. Concentrations of maize pollen range from 0 to 59 grains per cm$^3$. Maximum $\delta^{13}$C$_{\text{TOC}}$ values ($-21$‰) occur early in Zone E, decrease around 500 cm, and then increase again around 480 cm. There appears to be a good correspondence between $\delta^{13}$C$_{\text{TOC}}$ values and maize pollen concentrations in Zone E, but with a slight lag in the response of the $\delta^{13}$C$_{\text{TOC}}$ values to changes in maize pollen concentrations. The $\delta^{13}$C$_{\text{TOC}}$ and mineral influx records display similar patterns through Zone E, with mineral influx values slightly leading shifts in the $\delta^{13}$C$_{\text{TOC}}$ record. Mineral influx values reach some of the highest values in the entire sediment record in Zone E, ranging from a minimum of 32 mg/cm$^2$/yr to a maximum of 356 mg/cm$^2$/yr.
Figure 3.4. Age-depth graph for the Laguna Castilla sediment core based on weighted means of the probability distributions for radiocarbon dates (Table 3.1). Sediment accumulation rates (italics) are reported in cm/calendar year.
Figure 3.5. Summary diagram of Laguna Castilla sedimentary $\delta^{13}$C$_{TOC}$ values, maize pollen concentrations, and mineral influx variation. Radiocarbon dates ($^{14}$C yr B.P.) at left are uncalibrated.
Zone D (460–414 cm)
Average mineral influx values and maize pollen concentrations decrease significantly in Zone D, but still remain high compared to Zone F. The $\delta^{13}$C$_{\text{TOC}}$ values remain relatively low and stable, but there is a slight increase in the $\delta^{13}$C$_{\text{TOC}}$ values coincident with a sharp increase in mineral influx from 446 to 430 cm.
Zone C (414–382 cm)
Zone C contains the highest maize pollen concentrations in the entire sediment record (103 grains/cm$^3$). Along with this increase in maize pollen concentration is an increase in $\delta^{13}$C$_{\text{TOC}}$ values to around $-21$‰, and the highest mineral influx in the entire sediment record (370 mg/cm$^2$/yr). Following these increases, all three proxy indicators decline toward the top of Zone C.
Zone B (382–335 cm)
Zone B is characterized by a steady decline in maize pollen and its eventual disappearance from the sediment record. Mineral influx and $\delta^{13}$C$_{\text{TOC}}$ values remain steady. The mineral influx values average around 200 mg/cm$^2$/yr and the $\delta^{13}$C$_{\text{TOC}}$ values average around $-24$‰.
Zone A (335–300 cm)
Mineral influx values in Zone A approach pre-occupational levels (20 mg/cm$^2$/yr). Stable carbon isotope ratios progressively decrease from approximately $-24$‰ to around $-27$‰. There is no maize pollen present.
Discussion
Zone F (600–535 cm): Pre-Settlement Conditions
Prior to the settlement of the Laguna Castilla watershed by prehistoric humans, mineral influx was low, indicating a small contribution of allochthonous materials to the sediments, and $\delta^{13}$C$_{\text{TOC}}$ values were low, indicating that organic carbon originating from terrestrial vegetation in the watershed was most likely being produced by C$_3$ plants (average $\delta^{13}$C$_{\text{TOC}}$ value = $-25$‰). Modest increases in $\delta^{13}$C$_{\text{TOC}}$ values through Zone F may indicate increasing regional aridity, with a resulting slight increase in the local dominance of C$_4$ plants or drought stress in C$_3$ plants (e.g. Stewart et al., 1995). It seems unlikely that the increase in $\delta^{13}$C$_{\text{TOC}}$ values was the result of prehistoric deforestation because we observed no concurrent increase in mineral influx, as would be expected with deforestation and increased soil erosion.
Zone E (535–460 cm): Initial Settlement
The most striking aspects of Zone E are the sudden appearance of maize pollen and the steep increases in mineral influx and carbon isotope ratios. Mineral influx increases by two orders of magnitude compared to pre-settlement conditions and is most likely associated with significant forest clearance during initial human settlement of the watershed. The $\delta^{13}$C$_{\text{TOC}}$ data in Zone E correspond well with both the maize concentrations and mineral influx data and indicate that the bulk organic carbon in the watershed includes a significant component of cultivated maize or C$_4$ agricultural weeds.
A slight lag in the response of the $\delta^{13}$C$_{\text{TOC}}$ record to maize abundance is evident when the record is compared with the pollen concentrations. For example, peaks in maize concentration around 520, 485, and 470 cm match well with peaks in $\delta^{13}$C$_{\text{TOC}}$ values around 510, 480, and 462 cm, respectively. In addition, a conspicuous drop in maize pollen concentration around 495 cm is followed by a decrease in $\delta^{13}$C$_{\text{TOC}}$ values around 490 cm. This temporal relationship between the $\delta^{13}$C$_{\text{TOC}}$ record and the maize pollen concentration record appears to persist throughout the 300–600 cm subsection of the Laguna Castilla sediment record.
The close relationship between the $\delta^{13}$C$_{\text{TOC}}$ and maize pollen concentration curves is clearly evident when the depths of the carbon isotope data are shifted downward by 4 cm (Figure 3.6). This shift is arbitrary and merely intended to clarify the relationship between these two datasets. Realistically, the temporal response of $\delta^{13}$C$_{\text{TOC}}$ values is unlikely to be linear through time, as it will depend upon numerous, and quite complex, environmental variables. Despite the simplistic nature of this linear correction, the close correspondence between $\delta^{13}$C$_{\text{TOC}}$ values and maize pollen concentrations is quite clear.
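Though not part of our analysis, one way to choose such a depth offset objectively, rather than by eye, would be a lagged-correlation search of the kind sketched below. The series here are synthetic Gaussian stand-ins, not the actual Laguna Castilla data, so the recovered shift (10 cm for these synthetic peaks) differs from the 4 cm offset applied in Figure 3.6.

```python
import numpy as np

# Synthetic stand-ins for the two proxy records, sampled every 1 cm.
depths = np.arange(460.0, 535.0, 1.0)
maize = np.exp(-((depths - 520.0) / 8.0) ** 2)  # pollen peak at 520 cm
d13c = np.exp(-((depths - 510.0) / 8.0) ** 2)   # isotope peak at 510 cm

# Shifting the isotope series down by s cm pairs maize[i] with d13c[i - s];
# keep the shift that maximizes the correlation between the paired series.
best_shift, best_r = 0, -1.0
for shift in range(0, 15):
    a = maize[shift:]
    b = d13c[: len(d13c) - shift]
    r = np.corrcoef(a, b)[0, 1]
    if r > best_r:
        best_shift, best_r = shift, r

print(best_shift)  # -> 10 for these synthetic peaks
```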
If we assume that the maize pollen concentrations in the Laguna Castilla sediment record are representative of the abundance of maize on the landscape, then a slight lag in the response of the $\delta^{13}$C$_{\text{TOC}}$ record should be expected. This section of the Castilla sediment record has very high sedimentation rates (Figure 3.4; 1 cm/yr) and we have analyzed proxies at high temporal resolution (approximately 5–15 years between samples). We may actually be seeing in these datasets the time lag between pollen production by living maize plants and the
Figure 3.6. Comparison of Laguna Castilla sedimentary $\delta^{13}$C$_{\text{TOC}}$ values and maize pollen concentrations, with $\delta^{13}$C$_{\text{TOC}}$ data graphed 4 cm higher in the profile than actual depths to capture the inherent time lag between the two proxies.
decomposition and delivery of maize tissues to Laguna Castilla and the incorporation of that carbon into the sedimentary carbon pool.
Taking into account the slight lag in the response of $\delta^{13}$C$_{\text{TOC}}$ values to shifting maize pollen concentrations, the close correspondence between $\delta^{13}$C$_{\text{TOC}}$ values and the maize pollen concentrations in Zone E indicates that the sedimentary $\delta^{13}$C$_{\text{TOC}}$ values may be quite sensitive to the abundance of maize in the watershed (Figure 3.6). Given that maize pollen is very poorly dispersed and typically underrepresented in most pollen assemblages, the correspondence between the two proxies is surprisingly strong. Based on this strong correspondence, we suggest that the majority of sedimentary carbon produced by C$_4$ plants and entering Laguna Castilla originated either from maize itself or from C$_4$ weeds closely associated with maize agriculture.
The strong correspondence between the $\delta^{13}$C$_{\text{TOC}}$ record and mineral influx data indicates that the $\delta^{13}$C$_{\text{TOC}}$ record is also sensitive to variations in allochthonous sediment delivery. Unlike the relationship between the $\delta^{13}$C$_{\text{TOC}}$ record and maize pollen concentrations, there is virtually no lag in the relationship between the $\delta^{13}$C$_{\text{TOC}}$ and mineral influx records. Conceptually, this relationship makes sense because it is ultimately the delivery of allochthonous C$_4$ carbon that drives the $\delta^{13}$C$_{\text{TOC}}$ record. In other words, the co-variation of the $\delta^{13}$C$_{\text{TOC}}$ and maize pollen concentration records indicates that the $\delta^{13}$C$_{\text{TOC}}$ record is sensitive to variations in the abundance of maize being cultivated within the watershed, but it appears that the efficiency of transport of the organic carbon produced by these terrestrial sources ultimately controls the response of the $\delta^{13}$C$_{\text{TOC}}$ record.
Zone D (460–414 cm): Decreased Prehistoric Human Impact
We hypothesize that Zone D represents a period of decreased human impact in the Laguna Castilla watershed because $\delta^{13}$C$_{\text{TOC}}$ values, maize pollen concentrations, and mineral influx values all decrease relative to Zone E. The $\delta^{13}$C$_{\text{TOC}}$ values and maize pollen concentrations are relatively low throughout Zone D, but mineral influx varies significantly, spiking from a minimum of 156 mg/cm$^2$/yr to a maximum of 270 mg/cm$^2$/yr around a depth of 430 cm. The correspondence between the $\delta^{13}$C$_{\text{TOC}}$ and maize pollen concentration data, and the lack of a response in the $\delta^{13}$C$_{\text{TOC}}$ data to the spike in mineral influx around 430 cm, seem to indicate that the $\delta^{13}$C$_{\text{TOC}}$ record is more responsive to variations in the abundance of maize on the landscape than to variations in the delivery of allochthonous sedimentary material throughout Zone D.
The exact mechanisms responsible for this departure between the $\delta^{13}$C$_{\text{TOC}}$ record and mineral influx data in Zone D cannot be resolved with the limited analyses conducted here. It is hypothetically possible that the increase in allochthonous sediment delivery around 430 cm was accompanied by a slight increase in the dominance of C$_3$ plants in the watershed due to the apparent decrease in cultivation during this period. An increased contribution of C$_3$ organic matter could explain the lack of a response in the $\delta^{13}$C$_{\text{TOC}}$ record.
Zone C (414–382 cm): Maximum Human Impact
Zone C includes very high concentrations of maize pollen and some of the highest $\delta^{13}$C$_{\text{TOC}}$ and mineral influx values in the entire sediment record. All three
proxies indicate that the period encompassed in Zone C may have been the period of most severe prehistoric human impacts in the Laguna Castilla watershed.
Much like Zone E, we found a close correspondence between the $\delta^{13}$C$_{\text{TOC}}$ record and the mineral influx data. Perhaps more importantly, a comparison of the isotopic shift ($\Delta^{13}$C$_{\text{TOC}}$) in Zone E to that of Zone C reveals the impact of allochthonous sediment delivery on the $\delta^{13}$C$_{\text{TOC}}$ record. In Zone E, $\delta^{13}$C$_{\text{TOC}}$ values shifted from $-24$‰ to $-21$‰ ($\Delta^{13}$C$_{\text{TOC}} = 3$‰) between 515 and 535 cm. Taking into account the slight lag in the response of the $\delta^{13}$C$_{\text{TOC}}$ record (Figure 3.6), this shift is associated with a peak in maize pollen concentrations of approximately 60 grains/cm$^3$. In Zone C, $\delta^{13}$C$_{\text{TOC}}$ values shifted from $-25.5$‰ to $-22.5$‰ ($\Delta^{13}$C$_{\text{TOC}} = 3$‰) between 390 and 420 cm. Again taking into account the slight lag in the response of the $\delta^{13}$C$_{\text{TOC}}$ record (Figure 3.6), this shift is associated with a peak in maize pollen concentrations of approximately 100 grains/cm$^3$, corresponding to a three-fold increase in the raw number of maize grains observed on two pollen slides. If the concentration of maize pollen in the sediments is a good proxy for maize abundance in the watershed, and if the $\delta^{13}$C$_{\text{TOC}}$ record were responding primarily to the abundance of maize being cultivated within the watershed, then the isotopic shift in Zone C should hypothetically be larger than that observed in Zone E; instead, the isotopic shifts are quite similar. However, the peak mineral influx values for Zone E and Zone C are also quite similar. The similar responses of the $\delta^{13}$C$_{\text{TOC}}$ record in Zones E and C to the mineral influxes during those periods indicate that allochthonous sediment delivery is potentially the
primary control on the amplitude of change observed in the $\delta^{13}$C$_{\text{TOC}}$ record.
Again, this is not surprising considering the fact that the amount of C$_4$ organic matter that enters the lake is ultimately controlled by the size of the carbon source area and efficiency of allochthonous organic matter delivery.
This finding is important because it indicates that the $\delta^{13}$C$_{\text{TOC}}$ value of the sediment alone cannot be used as an accurate representation of the exact amount of maize being cultivated within the watershed without taking into account variations in allochthonous sediment delivery. This does not mean that the $\delta^{13}$C$_{\text{TOC}}$ record fails to provide a reliable estimate of the *relative* extent of maize cultivation in the watershed through time (Lane et al., in press), only that developing an accurate estimate of the extent of these activities requires more than analyzing variations in the $\delta^{13}$C$_{\text{TOC}}$ record alone.
**Zone B (382–335 cm): Decreased Human Impact**
Compared to Zone C, Zone B marks the beginning of a different relationship between $\delta^{13}$C$_{\text{TOC}}$ values, maize pollen concentrations, and mineral influx in the Laguna Castilla sediment record. Maize pollen concentrations decrease steadily throughout Zone B, but the $\delta^{13}$C$_{\text{TOC}}$ and mineral influx data display little variation. The similarity in the $\delta^{13}$C$_{\text{TOC}}$ record and the mineral influx data seems to indicate that the $\delta^{13}$C$_{\text{TOC}}$ record in Zone B is more sensitive to variations in allochthonous sediment delivery than it is to variations in the abundance of maize on the landscape.
Based on our limited analyses, it is difficult to explain why the $\delta^{13}$C$_{\text{TOC}}$ record appears to be more sensitive to variations in allochthonous sediment
delivery than to maize abundance at this time. It is possible that prehistoric human impacts in the Laguna Castilla watershed were so severe through the period encompassed by Zone C that they had an effect on the available terrestrial carbon pool that lasted through the period encompassed by Zone B. If the majority of the Laguna Castilla watershed was deforested and under cultivation during the period encompassed by Zone C, an abundance of $C_4$ organic matter would have been available for transport into the lake. Thus, even with a decrease in Zone B in the abundance of maize being cultivated, there may still have been a significant component of $C_4$ organic material in the terrestrial carbon pool available for transport to the lake.
**Zone A (335–300 cm): Land Abandonment**
Maize pollen deposition in Laguna Castilla terminates at the Zone B/Zone A boundary, indicating the cessation of maize agriculture around the lake and apparent abandonment of the watershed around 674 cal yr B.P. (Table 3.1). Mineral influx and $\delta^{13}C_{TOC}$ values drop nearly to pre-settlement levels, indicating decreased watershed erosion and the recovery of C$_3$-dominated lower montane moist forest. Based on the evidence currently available, it is unclear why the watershed was abandoned at this time.
**Conclusions**
The stable carbon isotope composition of lake sediments is an effective proxy of prehistoric forest clearance and agriculture in the neotropics, but the development of quantitatively robust reconstructions of these activities will require a more in-depth understanding of the sensitivity of sedimentary $\delta^{13}C_{TOC}$
values to factors such as shifts in the relative dominance of $C_3$ and $C_4$ plants and variations in allochthonous carbon delivery. The Laguna Castilla data we present here indicate that sedimentary $\delta^{13}C_{TOC}$ values are temporally sensitive to rapid variations in $C_3$ and $C_4$ plant dominance, but may lag the vegetation shifts by a few years. In addition, the close correspondence between sedimentary $\delta^{13}C_{TOC}$ and mineral influx values in Zones E, C, and B of the Laguna Castilla record highlights the sensitivity of sedimentary $\delta^{13}C_{TOC}$ values to variations in allochthonous carbon delivery. More importantly, comparisons between the $\delta^{13}C_{TOC}$ record and the mineral influx data indicate that the amplitudes of shifts in the $\delta^{13}C_{TOC}$ record are intimately linked with variations in allochthonous sediment delivery.
The sensitivity of the sedimentary $\delta^{13}C_{TOC}$ record to the limited number of watershed variables analyzed here further reinforces the need for an increased understanding of carbon dynamics and cycling in lake watersheds. Despite the complexity of the exact response of the sedimentary $\delta^{13}C_{TOC}$ record to numerous watershed variables, the close correspondence between the $\delta^{13}C_{TOC}$ record and maize pollen concentrations indicates that the $\delta^{13}C_{TOC}$ record can be used to reliably assess the *relative* extent of these activities through time. We also believe that this proxy still has enormous potential as a technique that could eventually be used to quantitatively reconstruct the areal extent of anthropogenic forest clearance and crop cultivation in tropical watersheds.
Future analyses that utilize compound-specific isotopic analyses could further refine this technique by providing a purely allochthonous stable carbon
isotope record, thereby eliminating any complications brought on by autochthonous carbon isotope variability. In addition, the development of modern analogs, where the areal extent of maize cultivation, erosion rates, sedimentation patterns, and sedimentation rates can all be monitored precisely over relatively short time intervals, could further our understanding of how to best apply this proxy to prehistoric settings.
CHAPTER 4
Multi-Proxy Analysis of Late Holocene Paleoenvironmental Change in the Mid-Elevations of the Cordillera Central, Dominican Republic
This chapter is in preparation for submission to the journal *Quaternary Science Reviews* by me, Sally P. Horn, Claudia I. Mora, and Kenneth H. Orvis. The submitted manuscript will include additional information on the study area that is presented in Chapter 1 of this dissertation. My use of “we” in this chapter refers to my co-authors and myself.
**Introduction**
Several high-resolution paleoclimate records from sites in the circum-Caribbean region indicate significant climate variation during the middle to late Holocene (e.g. Hodell et al., 1991; 2005a; 2005b; Curtis et al., 1996; 1998; Black et al., 1999; 2004; Haug et al., 2001; Rosenmeier et al., 2002a; Tedesco and Thunell, 2003; Peterson and Haug, 2006). These climate variations have received considerable attention because of the importance of tropical climate dynamics in the global climate system (e.g. Diaz and Markgraf, 2000; Rittenour et al., 2000; Schmidt et al., 2004; Ivanochko et al., 2005) and their potential impact on prehistoric human populations including, most famously, the Mayan civilization (Hodell et al., 1995; 2005a; Gill, 2000; deMenocal, 2001; Haug et al., 2003).
Despite this burgeoning interest and our rapidly expanding knowledge of circum-Caribbean climate change, little is known about the paleoenvironmental and societal impacts of climate variability on the many islands of the Caribbean region. To date, published records of late Holocene paleoenvironmental change are available for just nine island study sites in the eastern Caribbean and tropical
north Atlantic: Anse à la Gourde, Guadeloupe (Beets et al., 2006); Church’s Blue Hole, Bahamas (Kjellmark, 1996); Grande-Case Lake, St. Martin (Bertran et al., 2004); Laguna de la Leche, Cuba (Peros et al., 2007); Laguna Tortuguero, Puerto Rico (Burney et al., 1994); Lake Antoine, Grenada (McAndrews and Ramcharan, 2003); Lake Miragoane, Haiti (Brenner and Binford, 1988; Hodell et al., 1991; Curtis and Hodell, 1993; Higuera-Gundy et al., 1999); Valle de Bao, Dominican Republic (Kennedy et al., 2006); and Wallywash Great Pond, Jamaica (Street-Perrott et al., 1993; Holmes et al., 1995; Holmes, 1998). With the exception of Valle de Bao, these are all low-elevation, coastal sites, and their distribution leaves a void in our knowledge of the paleoenvironmental history of Caribbean island interiors. Apart from Anse à la Gourde and Lake Miragoane, the majority of these records are also fairly low-resolution records with little or no evidence of prehistoric human activity.
In this study, we present a ~3000 cal yr B.P. record of paleoenvironmental change from a mid-elevation site in the Dominican Republic. We conducted high-resolution analyses of pollen, charcoal, biogenic carbonate macrofossil assemblages and stable isotope geochemistry, and bulk sedimentary stable carbon isotope ratios from sediment cores recovered from two small lakes, Laguna Castilla and Laguna de Salvador, to better understand the climate, vegetation, and human history of the area.
**Study Area**
Laguna Castilla (18°47'51" N, 70°52'33" W, 976 m) and Laguna de Salvador (18°47'45" N, 70°53'13" W, 990 m) are located on the Caribbean slope
of the Cordillera Central in the Dominican Republic (Figure 4.1). Laguna Castilla and Laguna de Salvador are located near the small community of Las Lagunas in the province of Azua. Four lakes exist in the Las Lagunas area, all of which occupy small basins created by slope failures (Figure 4.1). Laguna Castilla (Castilla) and Laguna de Salvador (Salvador) are relatively small lakes with surface areas of approximately 1.2 and 0.5 ha, respectively, but both have open water. Laguna de Felipe (Felipe; ~0.8 ha) and Laguna Clara (Clara; ~0.4 ha) are similar in size, but are choked with aquatic macrophytes and have no open water. Paleoshorelines around Castilla and Salvador evident in aerial photographs indicate that lake levels in the past were at times perhaps 1–2 m above current levels.
**Climate**
The precipitation regime of the Caribbean slope of the Cordillera Central, including the Las Lagunas area, is primarily controlled by the seasonal proximity of the Intertropical Convergence Zone (ITCZ). During the boreal summer, when the ITCZ reaches its northernmost position, convection fed by sea breezes on the southern slope of the Cordillera Central increases as a result of the dominant ITCZ-proximal doldrum conditions. As the ITCZ migrates southward during the boreal winter, the descending arm of the Hadley cell moves over the region, limiting convective activity and decreasing precipitation.
No site-specific meteorological data are available for Las Lagunas. Based on environmental lapse rates calculated for the Cordillera Central by Orvis et al. (1997) and limited meteorological data from the nearby town of Padre Las Casas
Figure 4.1. The location of the island of Hispaniola (A); Las Lagunas study site within the Dominican Republic, nearby city of Azua, and capital city of Santo Domingo (B); and topographic map of the Las Lagunas area (C). Laguna Castilla and Laguna de Salvador are the focus of this study. The “X” marks the town center of Las Lagunas. Map C is based on the 1:50000 topographic sheet published by the National Geospatial-Intelligence Agency. Lake positions were determined from GPS measurements by K. Orvis.
(~520 m; MAT = 24 °C), the mean annual temperature of the Las Lagunas area is likely to be around 20 °C. The nearest available precipitation data are from the city of Azua 40 km to the south (Figure 4.1), which is more arid because it is lower in elevation (100 m) and subject to a greater rainshadow effect. Based on the mean annual precipitation value for Azua of ~700 mm, we assume that mean annual precipitation values for the Las Lagunas area are somewhere around 900–1000 mm.
**Vegetation**
The vegetation now surrounding Castilla and Salvador has been heavily modified by modern human activity. People living in the area today cultivate a variety of crops including beans, corn, and coffee, and raise cattle, goats, horses, and chickens. The vegetation currently surrounding the town of Las Lagunas is classified by Tolentino and Peña (1998) as grassland (pasture) and mixed crops and grasslands. Tolentino and Peña classify intact woody vegetation at the same altitude and slope aspect as lower montane moist forest (i.e. the Holdridge life zone designation; Panamerican Union, 1967). Remnant areas of lower montane moist forest include pines (*Pinus occidentalis* Schwartz) mixed with evergreen and deciduous broadleaved trees (Liogier 1981). Naturally occurring broadleaf assemblages likely included species in the genera *Cecropia*, *Garrya*, *Ilex*, *Juglans*, *Magnolia*, *Miconia*, *Mecranium*, *Meriania*, *Myrica*, *Ocotea*, *Piper*, *Trema*, and *Weinmannia*, among others, as well as a wide variety of species from the Arecaceae, Poaceae, and Rubiaceae families, and the order Urticales (Bolay, 1997; Kennedy, 2003; Kennedy et al., 2005). Associated herbaceous
plants include species in the Amaranthaceae, Asteraceae, Cyperaceae, and Poaceae families (Liogier, 1981; Bolay, 1997; Horn et al., 2001; Kennedy et al., 2005). Emergent aquatic plants currently found in both lakes include *Typha domingensis* Pers. and a variety of species in the Cyperaceae and Poaceae families.
**Methods**
**Sediment Core Retrieval, Sediment Stratigraphy, and Radiocarbon Dating**
We recovered a 7.8 m sediment core near the center of Castilla and a 5.2 m sediment core near the center of Salvador during field expeditions in 2002 and 2004. We collected the watery, uppermost sediments at both sites with a PVC tube fitted with a rubber piston, and then extruded, sliced, and bagged this uppermost core section in 2 cm intervals in the field. We recovered deeper sediments in ~1 m sections using a Colinvaux-Vohnaut locking piston corer (Colinvaux et al., 1999). We returned core sections to the University of Tennessee in their original aluminum coring tubes and stored them at 6 °C. We cut the aluminum core tubes lengthwise using a specialized router and sliced the sediments using a thin wire.
We photographed core sections upon opening and described color (Munsell) and textural changes. We determined water content by drying subsamples overnight at 100 °C, and estimated organic and carbonate content using loss-on-ignition at 550 °C and 1000 °C, respectively (Dean, 1974). Chronologies are based on AMS radiocarbon dates on charcoal, other organic macrofossils, and bulk sediment. Dates were calibrated using the CALIB 5.0 computer program.
(Stuiver and Reimer, 1993) and the dataset of Reimer et al. (2004).
Sedimentation rates were calculated using the weighted means of the calibrated age probability distributions (Telford et al., 2004a; 2004b), and ages for lake sediment horizons located between the positions of radiocarbon-dated materials were calculated using linear interpolation.
**Pollen and Microscopic Charcoal Analyses**
Sediment cores from Castilla and Salvador were sub-sampled for pollen analysis at regular intervals of approximately 16 cm (some sections were also sampled at finer intervals) and chemically processed using standard techniques (Appendix A; Berglund, 1986; Faegri and Iversen, 1989). Tablets containing *Lycopodium* spores were added as controls (Stockmarr, 1971) and the pollen residues were mounted on microscope slides in silicone oil. Pollen and spores were identified and counted to a minimum of 300 pollen grains, excluding *Typha domingensis* pollen, indeterminate pollen grains, and all spores.
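A minimal sketch of this pollen-sum convention, using hypothetical counts, is given below; the taxon names and numbers are illustrative only and do not reproduce any level in our records.

```python
# Hypothetical raw counts for one sample level.
counts = {"Pinus": 180, "Poaceae": 75, "Asteraceae": 25, "Urticales": 20,
          "Typha domingensis": 40, "Indeterminate": 12, "Spores": 30}

# Taxa excluded from the pollen sum, per the convention described above.
EXCLUDED = {"Typha domingensis", "Indeterminate", "Spores"}

pollen_sum = sum(n for taxon, n in counts.items() if taxon not in EXCLUDED)
assert pollen_sum >= 300  # counting continued to a minimum of 300 grains

percentages = {taxon: 100.0 * n / pollen_sum
               for taxon, n in counts.items() if taxon not in EXCLUDED}
print(percentages["Pinus"])  # -> 60.0
```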
Pollen was identified at 400x magnification based on comparison with pollen reference slides prepared from vouchered plant specimens, and with published pollen descriptions, illustrations, photographs, and keys (Heusser, 1971; Bartlett and Barghoorn, 1973; McAndrews et al., 1973; Markgraf and D'Antoni, 1978; Moore and Webb, 1978; Hooghiemstra, 1984; Horn, 1986; Moore et al., 1991; Roubik and Moreno, 1991). Pollen grains of the order Urticales were classified by pore number, except for *Cecropia* and *Trema*, which were identified to genus. Unknown pollen and spore types were sketched and recorded as morphological types. Algal remains and other microfossils that may indicate
paleoenvironmental conditions were also identified. In addition to the full pollen counts, two slides from each level were scanned completely at low-power (100x) magnification for the presence of maize pollen (*Zea mays* subsp. *mays*; Chapter 2).
Microscopic charcoal was tallied during the regular pollen counts. Charcoal was identified as dark (black), opaque, angular fragments. All fragments over 50 µm in length were tallied in one of two size classes (50–125 µm and >125 µm).
**Bulk Sedimentary Carbon Isotope Analysis**
The Castilla and Salvador sediment cores were sub-sampled for bulk sedimentary stable carbon isotope analysis at the same intervals sampled for pollen. A detailed explanation of our methods can be found in Chapter 3. In short, dried and decalcified sediment samples were combusted under vacuum in quartz tubes in the presence of copper, copper oxide, and a small platinum wire at 800 °C. The rendered CO$_2$ was purified cryogenically offline and analyzed using a dual-inlet Finnigan MAT Delta-plus mass spectrometer at the University of Tennessee. All carbon isotopic compositions are reported in standard δ-per mil notation relative to the Vienna-Pee Dee belemnite (VPDB) marine-carbonate standard, where:
$$\delta^{13}C \text{ (per mil)} = 1000 \left[ \frac{R_{\text{sample}}}{R_{\text{standard}}} - 1 \right],$$
where $R = ^{13}C/^{12}C$.
Repeated analyses of the USGS 24 graphite standard indicate that the precision of these analyses is better than ±0.05‰ V-PDB.
Aquatic Macrofossil Extraction
Ostracod valves, charophyte oospores, and gastropod shells were present in some sections of each core. Core sections rich in aquatic macrofossils were identified using a binocular scope, and macrofossils were isolated using nested 500, 250, and 125 µm sieves at 1 cm sampling intervals. Fossil ostracod valves were identified with the assistance of Dr. Jonathan Holmes (University College London). Charophyte oospores were identified based on the descriptions of Wood and Imahori (1964) and Wood (1967). Rare gastropod shells were not identified.
Carbon and Oxygen Isotope Analysis of Biogenic Carbonates
Adult monospecific ostracod valves and calcified charophyte oospores were isolated for carbon and oxygen isotope analysis and cleaned using a soft brush and distilled water. Due to the fragility of these biogenic carbonates, especially the ostracod valves, we avoided ultrasonic cleaning and instead removed any remaining organic matter using a modified version of the methods of Lister (1988) and Diefendorf et al. (2006), which involved roasting the carbonate fossils under vacuum at 375 °C for 3 hours.
The oxygen and carbon isotope compositions of the biogenic carbonates were determined using an automated Finnigan CarboFlo system interfaced with a Finnigan MAT Delta-plus mass spectrometer at the University of Tennessee. Biogenic carbonates were reacted with orthophosphoric acid at 120 °C and the evolved CO₂ was cryogenically purified on-line. Sample masses analyzed on the CarboFlo system were typically about 0.3 mg (approximately 15 *Cythridella*
boldii* ostracod valves, 5 *Candona* sp. ostracod valves, or 20 *Chara haitensis* oospores). All carbon and oxygen isotopic compositions have been temperature corrected to 25 °C and are reported in standard δ-per mil notation relative to the Vienna-Pee Dee belemnite (VPDB) marine-carbonate standard. Precision of the CarboFlo system was determined to be ±0.05‰ for δ¹³C V-PDB and ±0.10‰ for δ¹⁸O V-PDB using several internal laboratory standards.
Results
Sediment Recovery, Stratigraphy, and Chronology
Coring operations at Castilla and Salvador penetrated a complex sequence of sediments of varying texture and organic content (Figures 4.2 and 4.3). The basal sediments of Castilla (781–730 cm) consist of a mixture of coarse gravels, sands, and gleyed silts and clays (5G 4/2 to 10GY 5/1). From 730 to 670 cm, the Castilla sediments consist of organic silts (10YR 2/1) with abundant fibrous organics and sparse sands. A relatively thin layer of coarse fibrous organics and peat (2.5Y 4/2 to 10YR 3/1) extends from 670 to 650 cm. The thin peat layer is overlain by finely laminated organic clays and mineral silts (2.5Y 7/1 to 2.5Y 2/1) from 650 to 610 cm. From 610 cm to 520 cm, the sediments consist of very fine organic clays (10YR 2/1) with abundant fibrous organics. From 520 to 339 cm, the Castilla sediments consist of finely laminated mineral silts and clays (2.5Y 3/1 to 5Y 3/1) capped by a section of mineral clay that slowly grades into organic gyttja. A thin layer of mineral rich silts and clays (10Y 5/2) extends from 339 to 334 cm. The uppermost sediments of the Castilla core (334 to 0 cm sub-bottom)
Figure 4.2. Sediment stratigraphy and chronology of the Laguna Castilla and Laguna de Salvador sediment cores. Radiocarbon dates ($^{14}$C yr B.P.) are italicized and the weighted means of the probability distributions for radiocarbon dates (cal yr B.P.) are in parentheses.
Figure 4.3. Diagram showing sediment bulk density (g/cm$^3$), organic content (% dry mass), carbonate content (% dry mass), water content (% wet mass), mineral influx (mg/cm$^2$/yr), and organic carbon influx (mg/cm$^2$/yr) for the Laguna Castilla and Laguna de Salvador sediment cores. Radiocarbon ages are uncalibrated.
Figure 4.3. Continued.
consist of organic rich gyttja (2.5Y 3/1), with abundant fibrous organics between 334 and 155 cm.
As in the Castilla sediment core, the basal Salvador sediments (522–510 cm) consist of very coarse gravels, sands, and gleyed silts and clays (10Y 5/1). From 510 to 460 cm, the Salvador sediments consist of a mixture of organic and mineral clays and silts (2.5Y 4/1 to 10YR 2/1). Organic rich clay sediments (10YR 2/1) from 460 to 365 cm contain abundant fibrous organics and are capped by a thin layer of coagulated, ped-like clays (10Y 2/1; 365 to 359 cm). From 359 to 310 cm the Salvador sediments consist of organic rich clays and silts (5Y 2.5/1). At approximately 310 cm, we observed an abrupt transition from organic rich sediments to organic rich (2.5Y 2.5/1), mineral rich (5Y 6/2), and gleyed (10Y 2.5/1) clay laminae that extend to 265 cm. From 265 to 150 cm, the Salvador sediments consist of fine mineral silts and clays intermixed with organic clays (2.5Y 3/1 to 5Y 3/1). The uppermost sediments (150 to 0 cm) are organic rich gyttja (5Y 3/2), with abundant zooplankton fecal pellets from 150 to 50 cm.
The radiocarbon dates from Castilla and Salvador are in stratigraphic order except for the lowermost date (β-171501) in the Castilla core (Tables 4.1 and 4.2). The macrofossil dated may have been a root that penetrated older sediments; we have discounted it in our age model. According to the basal date, Castilla formed ~2983 cal yr B.P. Linear interpolation of the radiocarbon data indicates Salvador formed ~1870 cal yr B.P. Sedimentation rates (Figure 4.4) varied through time at both lakes, with higher and more variable sedimentation rates at Castilla.
Table 4.1. Radiocarbon determinations and calibrations for Laguna Castilla.
| Lab Number\(^a\) | Depth (cm) | \(\delta^{13}C\) (‰) | Uncalibrated \(^{14}C\) Age (\(^{14}C\) yr BP) | Calibrated Age Range\(^b\) ± 2 \(\sigma\) (cal yr B.P.) | Area Under Probability Curve | Weighted Mean\(^c\) (cal yr B.P.) |
|------------------|------------|----------------------|-----------------------------------------------|-------------------------------------------------|-----------------------------|-------------------------------|
| \(\beta\)-196817 | 66–68 | −25.6 | 103.9% of Modern | −1.5 – −4.5* | 1.000* | −3* |
| \(\beta\)-204702 | 204–207 | −24.5 | 110 ± 40 | −1 – −4 | 0.008 | 133 |
| | | | | 150–10 | 0.651 | |
| | | | | 178–174 | 0.007 | |
| | | | | 273–185 | 0.333 | |
| \(\beta\)-196818 | 329–331 | −25.9 | 730 ± 40 | 585–567 | 0.063 | 674 |
| | | | | 732–647 | 0.937 | |
| \(\beta\)-171499 | 536–537 | −24.2 | 1000 ± 40 | 975–795 | 1.000 | 899 |
| \(\beta\)-192641 | 651–653 | −23.8 | 2190 ± 40 | 2077–2070 | 0.009 | 2217 |
| | | | | 2332–2113 | 0.991 | |
| \(\beta\)-171500 | 724–725 | −23.2 | 2860 ± 40 | 3080–2862 | 0.970 | 2983 |
| | | | | 3109–3093 | 0.016 | |
| | | | | 3140–3127 | 0.014 | |
| \(\beta\)-171501 | 758–761 | −25.3 | 2470 ± 40 | 2419–2363 | 0.118 | 2552 |
| | | | | 2623–2428 | 0.600 | |
| | | | | 2713–2628 | 0.282 | |
\(^a\)Analyses were performed by Beta Analytic Laboratory. Samples \(\beta\)-196817, \(\beta\)-196818, and \(\beta\)-171499 consisted of bulk sediment; samples \(\beta\)-192641 and \(\beta\)-204702 consisted of a mixture of plant macroremains, insect parts, and charcoal; sample \(\beta\)-171501 consisted of plant macroremains; and sample \(\beta\)-171500 consisted of charcoal.
\(^b\)Calibrations were calculated using Calib 5.0 (Stuiver and Reimer, 1993) and the dataset of Reimer et al. (2004).
\(^c\)Weighted mean of the calibrated age probability distribution curve.
*Dates were calibrated using the CALIBomb program (Reimer et al., 2004).
Table 4.2. Radiocarbon determinations and calibrations for Laguna de Salvador.
| Lab Number\(^a\) | Depth (cm) | \(\delta^{13}C\) (‰) | Uncalibrated \(^{14}C\) Age (\(^{14}C\) yr BP) | Calibrated Age Range\(^b\) ± 2 \(\sigma\) (cal yr B.P.) | Area Under Probability Curve | Weighted Mean\(^c\) (cal yr B.P.) |
|------------------|------------|------------------------|-----------------------------------------------|-------------------------------------------------|-----------------------------|-------------------------------|
| β-219035 | 76.5 | −25.7 | 100 ± 40 | −1 – 4 | 0.013 | 130 |
| | | | | 149–11 | | |
| | | | | 270–187 | | |
| β-204696 | 204 | −27.5 | 410 ± 40 | 392–319 | 0.243 | 446 |
| | | | | 523–426 | 0.757 | |
| β-196821 | 359 | −29.8 | 1280 ± 40 | 1109–1089 | 0.028 | 1214 |
| | | | | 1163–1126 | 0.065 | |
| | | | | 1292–1167 | 0.907 | |
| β-192645 | 504 | −25.1 | 2060 ± 40 | 2133–1926 | 1.000 | 2029 |
\(^a\) Analyses were performed by Beta Analytic Laboratory. Samples β-219035, β-204696, and β-196821 consisted of wood fragments and sample β-192645 consisted of charcoal.
\(^b\) Calibrations were calculated using Calib 5.0 (Stuiver and Reimer, 1993) and the dataset of Reimer et al. (2004).
\(^c\) Weighted mean of the calibrated age probability distribution curve.
Figure 4.4. The weighted mean of the calibrated radiocarbon ages (cal yr B.P.) plotted against depth for the Laguna Castilla and Laguna de Salvador sediment cores. Approximate sedimentation rates, labeled in italics and represented by the lines between dates, are estimated by linear interpolation between radiocarbon dates.
Figure 4.4. Continued.
Zonation
We delineated seven chronological zones across the two sediment records. These zones were based on the interrelationships of proxy data in the records, but zone boundaries were positioned based on estimated ages rather than on correlation of proxy data. Our presentation of sediment stratigraphy did not make use of the zones because of the complexity of the stratigraphy, but all other proxy data are presented by chronological zone. Zone 7 predates the formation of Salvador.
Pollen and Charcoal
Pollen is poorly preserved in the basal sediments of both cores, but overlying sediments contain abundant and well preserved pollen. Pollen spectra in both cores are generally dominated by *Pinus* and Poaceae, but there is considerable variability in pollen assemblages through time (Figures 4.5–4.7). Zone 6 (~2250–1520 cal yr B.P.) in both records is dominated by arboreal taxa, especially *Pinus*, Urticales, and other broadleaved trees and shrubs. On average, arboreal taxa decrease gradually through Zone 5 (~1520–890 cal yr B.P.), and herbaceous pollen, such as Poaceae and Asteraceae, and charcoal concentrations, influx, and charcoal:pollen ratios increase. Zone 4 (~890–700 cal yr B.P.) marks the first appearance of maize pollen in both sediment records. The appearance of maize pollen in Zone 4 is accompanied by decreases in pollen percentages of arboreal taxa, especially *Pinus*, sharp increases in the percentages of Poaceae and Asteraceae pollen, and increases in charcoal concentrations and influx, particularly in the Castilla record.
Figure 4.5. Diagram showing pollen and spore concentrations, influx, and indeterminate pollen percentages for the Laguna Castilla and Laguna de Salvador sediment records. This diagram also includes charcoal fragments expressed as charcoal:pollen ratios for the $>50\ \mu m$, $50–125\ \mu m$, and $>125\ \mu m$ size categories. Total charcoal concentrations are also expressed as fragments per g dry sediment and fragments per g wet sediment. Total charcoal influx is expressed as fragments per cm$^2$ per year. Radiocarbon ages are uncalibrated.
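The different units in this figure are related by the standard concentration-to-influx conversion, restated here for clarity (our restatement of the usual convention, not a formula given in the text): influx is concentration multiplied by the sediment accumulation rate, with per-gram concentrations first converted to per-volume values using the measured bulk density:

$$\text{influx}\ (\text{fragments cm}^{-2}\,\text{yr}^{-1}) = \text{concentration}\ (\text{fragments cm}^{-3}) \times \text{sedimentation rate}\ (\text{cm yr}^{-1}).$$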
Figure 4.6. Pollen percentage diagram for arborescent, herbaceous, and aquatic taxa in the Laguna Castilla sediment core. Fern spores are classified by morphology. The “Other Humid Montane Taxa” group includes *Alchornea*, *Bocconia*, *Ilex*, *Juglans*, Melastomataceae, *Piper*, and *Zanthoxylum*. The “Selected Broadleaf Trees and Shrubs” group includes *Cecropia*, *Ficus*, *Garrya*, *Myrsine*, Rubiaceae, *Trema*, and *Weinmannia*. The *Zea mays* subsp. *mays* data show the presence or absence of maize pollen on two slides scanned in their entirety. Radiocarbon ages are uncalibrated.
Figure 4.7. Pollen percentage diagram for arborescent, herbaceous, and aquatic taxa of the Laguna de Salvador sediment core. Fern spores are classified by morphology. The “Other Humid Montane Taxa” group includes *Alchornea*, *Bocconia*, *Ilex*, *Juglans*, Melastomataceae, *Piper*, and *Zanthoxylum*. The “Selected Broadleaf Trees and Shrubs” group includes *Cecropia*, *Ficus*, *Garrya*, *Myrsine*, Rubiaceae, *Trema*, and *Weinmannia*. The *Zea mays* subsp. *mays* data show the presence or absence of maize pollen on two slides scanned in their entirety. Radiocarbon ages are uncalibrated.
Zone 3 (700–350 cal yr B.P.) marks the disappearance of maize pollen, a sharp decrease in the abundance of herbaceous pollen, the highest percentages of *Pinus* pollen in the entirety of the sediment records (~70%), and a sharp decrease in the amount of charcoal entering the lakes (Figure 4.5). Zone 2 (~350–95 cal yr B.P.) encompasses a period of decreasing *Pinus* percentages and increasing percentages of pollen of broadleaf trees and shrubs (e.g. Urticales, *Cecropia*, *Trema*, Rubiaceae, Arecaceae). Zone 1 (~95 to –54 cal yr B.P.) marks the reappearance of maize and a subsequent decrease in pollen of arboreal taxa. Zone 1 also includes a conspicuous peak in the abundance of Myrtaceae and *Typha* pollen in both sediment records.
**Bulk Sedimentary Stable Carbon Isotopes**
The $\delta^{13}$C$_{\text{TOC}}$ values in both cores vary markedly with depth (Figure 4.8). On average, the Salvador sediments have more negative $\delta^{13}$C$_{\text{TOC}}$ values (avg = –26.0‰) than do the Castilla sediments (avg = –24.6‰). Zone 7 (~2980–2250 cal yr B.P.) in Castilla is typified by relatively high $\delta^{13}$C$_{\text{TOC}}$ values (approximately –19‰) followed by a gradual decrease through Zone 6 (~2250–1520 cal yr B.P.) to around –27.5‰. Salvador $\delta^{13}$C$_{\text{TOC}}$ values are also relatively high in the lowermost sediments of Zone 6 (approximately –18‰) and decrease steadily upcore to around –25.5‰ at the Zone 6/Zone 5 boundary. The $\delta^{13}$C$_{\text{TOC}}$ values then increase steadily in both Castilla and Salvador through Zone 5 (~1520–890 cal yr B.P.), with the exception of a large negative excursion in the Salvador $\delta^{13}$C$_{\text{TOC}}$ record around 350 cm depth. The $\delta^{13}$C$_{\text{TOC}}$ record becomes increasingly complex in Zone 4 (~890–700 cal yr B.P.), especially in the Castilla profile where
Figure 4.8. Stable carbon isotope composition of bulk sediments from Laguna Castilla (A) and Laguna de Salvador (B) plotted against depth and plotted against calibrated age (C). Radiocarbon ages in A and B are uncalibrated.
$\delta^{13}C_{TOC}$ values range from a minimum of $-26.5$‰ to a maximum of $-20.9$‰ and are highly variable. The $\delta^{13}C_{TOC}$ values decrease through Zone 3 (700–350 cal yr B.P.) in both profiles, reaching a minimum of $-31.9$‰ in the Salvador record, but then increase in Zone 2 (350–95 cal yr B.P.), reaching a maximum of $-21.8$‰ in the Castilla record. Finally, in Zone 1 (95 to $-54$ cal yr B.P.) there is a decrease in the $\delta^{13}C_{TOC}$ signatures of Castilla from $-21.8$ to $-27.3$‰ and an increase in the $\delta^{13}C_{TOC}$ signatures of the Salvador sediments from $-27.1$ to $-26.2$‰.
**Aquatic Macrofossils**
Four different types of aquatic macrofossils were isolated from the Castilla and Salvador sediments, each occurring in only limited portions of the cores. The Castilla sediments contain only fossil valves and carapaces of the benthic ostracod *Cythridella boldii* Purper. The Salvador sediment record contains a greater variety of aquatic macrofossils, including *C. boldii* and *Candona* sp. ostracod valves and carapaces, calcified and non-calcified oospores from the charophyte *Chara haitensis* Turpin, and a very limited number of unidentified gastropods.
In the Castilla sediment core, *Cythridella boldii* valves are found exclusively in Zones 2–5, clustered in three distinct depth intervals (Figure 4.9). Valve concentrations of *C. boldii* reach their maximum values in the Castilla sediment record (~2.6 valves/cm$^3$ wet sediment) between 600 and 515 cm depth. Between 300 and 265 cm, *C. boldii* valve concentrations range from 0 to 0.3 valves/cm$^3$ wet sediment. From 240 to 190 cm, *C. boldii* valve concentrations range from 0 to 0.7 valves/cm$^3$ wet sediment.
Figure 4.9. (A) Concentration (valves per cm$^3$ wet sediment) of *Cythridella boldii* ostracod valves and the carbon and oxygen isotope composition of *C. boldii* valves in the Laguna Castilla sediment core. Dashed lines indicate sections of discontinuous fossil occurrence where *C. boldii* valves were too sparse for isotopic analysis. Radiocarbon ages are uncalibrated. (B) Carbon and oxygen isotope composition of *C. boldii* valves in Zones 2 and 3 of the Castilla sediment record. (C) Carbon and oxygen isotope composition of *C. boldii* valves in Zones 4 and 5 of the Castilla sediment record. Error bar symbols in graphs B and C indicate the interval sampled to obtain enough material for isotopic analysis.
Biogenic carbonates are found in Zones 1–5 in the Salvador sediment core. The most abundant aquatic macrofossils in the Salvador sediment core are from the charophyte *Chara haitensis* (Figure 4.10), some of which are encrusted in calcium carbonate. Oospore concentrations reach their maximum (~19 oospores/cm$^3$ wet sediment) between 50 and 110 cm depth. *Chara* oospores are also present in the 185–130 and 375–320 cm depth intervals. *Candona* sp. ostracod valves also occur sporadically throughout much of the Salvador sediment core in relatively low concentrations (Figure 4.10). *Cythridella boldii* ostracod valves are only present in the Salvador sediment core between 150 and 45 cm depth.
**Isotopic Analyses of Biogenic Carbonates**
The low concentrations of biogenic carbonates in the Castilla and Salvador sediment cores made it necessary to combine monospecific biogenic carbonates from adjacent sub-samples to obtain adequate masses for isotopic analysis (Table 4.3). The oxygen ($\delta^{18}\text{O}_{\text{cyth}}$) and carbon ($\delta^{13}\text{C}_{\text{cyth}}$) isotopic composition of *Cythridella boldii* ostracod valves varies markedly throughout the Castilla sediment record (Figure 4.9), with $\delta^{18}\text{O}_{\text{cyth}}$ values from 0.0 to 4.3‰ and $\delta^{13}\text{C}_{\text{cyth}}$ values from –0.9 to 4.2‰. The $\delta^{18}\text{O}_{\text{cyth}}$ and $\delta^{13}\text{C}_{\text{cyth}}$ values of valves isolated from the Salvador sediment core (Figure 4.10) tend to be more negative, ranging from –2.2 to 4.1‰ and from –6.8 to –2.8‰, respectively. For the most part, the $\delta^{18}\text{O}$ and $\delta^{13}\text{C}$ trends covary in each of the sediment cores (Figures 4.9 and 4.10). This covariation is typical of carbonates forming in closed basin lakes (Talbot, 1990).
Figure 4.10. (A) Concentration (valves per cm$^3$ wet sediment) of *Cythridella boldii* and *Candona* sp. ostracod valves and the carbon and oxygen isotope composition of *C. boldii* and *Candona* sp. valves in the Laguna de Salvador sediment core. Also included are the concentrations of *Chara haitensis* oospores (oospores per cm$^3$ wet sediment), calcified oospores, and non-calcified oospores, and the carbon and oxygen isotope composition of calcified oospores. Dashed lines indicate sections of discontinuous fossil occurrence where biogenic carbonates were too sparse for isotopic analysis. Radiocarbon ages are uncalibrated. (B) Stable carbon and oxygen composition of *C. boldii* valves in the Laguna de Salvador sediment record. (C) Stable carbon and oxygen isotope composition of *Candona* sp. valves in the Laguna de Salvador sediment record. (D) Stable carbon and oxygen isotope composition of calcified *C. haitensis* oospores in the Laguna de Salvador sediment record. Error bar symbols in graphs B, C, and D indicate the interval sampled to obtain enough material for isotopic analysis.
Table 4.3. Biogenic carbonate isotope sampling information.
| Lake | Average Sample Depth Interval (cm) | Average Sample Age Interval (cal yrs) |
|-----------------------|------------------------------------|--------------------------------------|
| Laguna Castilla | | |
| *Cythridella boldii* | 4.1 | 23.5 |
| Laguna de Salvador | | |
| *Cythridella boldii* | 3.3 | 8.1 |
| *Chara haitensis* | 3.0 | 8.6 |
| *Candona* sp. | 3.1 | 9.1 |
The oxygen ($\delta^{18}\text{O}_{\text{chara}}$) and carbon ($\delta^{13}\text{C}_{\text{chara}}$) isotopic compositions of calcified *Chara haitensis* oospores vary markedly throughout the Salvador record. The $\delta^{18}\text{O}_{\text{chara}}$ values range from a minimum of $-3.6$ to a maximum of $3.5$‰ and the $\delta^{13}\text{C}_{\text{chara}}$ values range from $-7.4$ to $-3.1$‰. The oxygen ($\delta^{18}\text{O}_{\text{cand}}$) and carbon ($\delta^{13}\text{C}_{\text{cand}}$) isotopic compositions of *Candona* sp. ostracod valves in the Salvador record are also highly variable, with $\delta^{18}\text{O}_{\text{cand}}$ values ranging from $-1.7$ to $3.9$‰.
**Discussion**
**Proxy Interpretation**
**Pollen**
Modern pollen studies are rare in the Caribbean and only one modern pollen study has been undertaken on the island of Hispaniola. Kennedy et al. (2005) investigated modern pollen rain as revealed by surface samples collected in the high elevations of the Cordillera Central in the Dominican Republic. Two sample sites were in humid montane broadleaf forest located $\sim 30$ km north of Las Lagunas. Due to the similar elevations and climates of these humid montane broadleaf forest modern pollen sample sites and the Las Lagunas lakes, we expect similar vegetation assemblages and pollen rain at both locations. Arboreal pollen rain at the humid montane broadleaf forest sites was dominated by *Pinus*, *Myrsine*, *Brunellia/Weinmannia*, *Ilex*, and Urticales pollen, and herbaceous pollen rain was dominated by Poaceae, Amaranthaceae, and *Begonia* pollen. The samples also contained relatively high fern spore concentrations compared to surface pollen samples from other high-elevation sites analyzed by Kennedy et al. (2005). Kennedy et al. also reported high percentages of *Pinus* pollen from all of
their surface pollen sampling sites regardless of the local importance of *Pinus occidentalis*, and attributed this to long-distance transport of pine pollen.
**Stable Carbon Isotopes**
Analyses of the carbon isotopic compositions of lacustrine sediments require an understanding of the possible sources of carbon entering a lake or bog. Stuiver (1975) suggested three sources of organic carbon that can be incorporated into lake sediments: terrestrial plants, submerged aquatic organisms, and pondweeds or other emergent plants. The isotopic composition of these organic carbon sources typically determines the $\delta^{13}$C$_{\text{TOC}}$ value of lake sediments, and the type of photosynthetic pathway used by these various carbon sources is of particular importance.
The Poaceae family, and a few other plant families such as the Cyperaceae (Deines, 1980; Boom et al., 2001), contain genera that utilize the C$_4$ photosynthetic pathway. This mode of photosynthesis seems to be most advantageous under warm and dry conditions, in periods of decreased partial pressures of atmospheric CO$_2$ (Ehleringer et al., 1997; Collatz et al., 1998), and also possibly in locations where warm season precipitation dominates (Huang et al., 2001). The C$_4$ photosynthetic pathway may also be favored in tropical localities following land clearance and crop cultivation by humans (Lane et al., 2004; in press). Plants using the C$_4$ photosynthetic pathway produce carbon isotopic compositions distinct from those of plants using the more common C$_3$ pathway or the somewhat rare CAM photosynthetic pathway. The distinct carbon isotope ratios of these plants are then incorporated in the terrestrial component of
the sedimentary organic carbon pool. Although some plants using the CAM photosynthetic pathway can produce organic matter with $\delta^{13}C$ values that overlap with that of organic matter produced by $C_4$ plants, CAM plants are typically small contributors to the terrestrial organic carbon pool in mesic tropical forests.
Plants using the C$_4$ pathway produce organic tissues with $\delta^{13}$C values ranging from $-17$ to $-9$‰, averaging $-12$‰, while C$_3$ species produce $\delta^{13}$C values ranging from $-32$ to $-20$‰, averaging $-27$‰ (Bender, 1971; O'Leary, 1981). Plants using CAM photosynthesis typically produce values intermediate between these two ranges. The distinct ranges of these carbon isotope ratios allow for the evaluation of the past abundances of C$_3$ vs. C$_4$ plants in the watershed of a particular lake or bog from sediment isotope profiles, as long as those plants contribute to the organic matter contained in the lake or bog sediments.
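These end-member values permit a simple evaluation of C$_4$ abundance. As an illustrative sketch (a standard two-end-member mixing calculation using the average values above, not an analysis performed in this study), the fraction of C$_4$-derived carbon in a sediment sample can be approximated as

$$f_{C_4} \approx \frac{\delta^{13}\text{C}_{\text{TOC}} - \delta^{13}\text{C}_{C_3}}{\delta^{13}\text{C}_{C_4} - \delta^{13}\text{C}_{C_3}}.$$

For example, a $\delta^{13}$C$_{\text{TOC}}$ value of $-21$‰ with end members of $-27$‰ (C$_3$) and $-12$‰ (C$_4$) implies $f_{C_4} \approx (-21 + 27)/(-12 + 27) = 0.4$, or roughly 40% C$_4$-derived carbon, subject to the CAM and aquatic-source caveats discussed in the surrounding text.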
Terrestrial vegetation is not the only organic carbon source contributing to lake sediments. Autochthonous organic matter from aquatic organisms is also present and can confound $\delta^{13}C_{TOC}$ records. Talbot and Johannessen (1992) pointed out one such complication produced by aquatic organisms capable of using a HCO$_3^-$-based metabolism. Under highly alkaline or saline conditions, some aquatic plants and algae will begin to utilize a HCO$_3^-$-based metabolism (Smith and Walker, 1980; Lucas, 1983). Photosynthesis utilizing this metabolic pathway can produce organic matter that is enriched in $^{13}C$ (Smith and Walker, 1980), mimicking the composition of $C_4$ terrestrial material. Under arid
conditions, when lake levels drop, increased alkalinity and salinity may promote HCO$_3^-$-based photosynthesis. The resulting isotopic enrichment in $\delta^{13}$C$_{\text{TOC}}$ may result in an overestimation of C$_4$ plant dominance (Brincat et al., 2000).
**Aquatic Macrofossils**
The modern distribution and habitat preference of the ostracod *Cythridella boldii* has not been well defined. Specimens of *C. boldii* have been collected from Lake Valencia, Venezuela (Curtis et al., 1999) and from the Enriquillo Valley, Dominican Republic (Purper, 1974). *Cythridella boldii* is known to be a non-swimming species of ostracod and a profundal burrower (Curtis et al., 1999). Curtis et al. (1999) documented the presence of *C. boldii* valves at a water depth of 9.4 m in Lake Valencia.
The congener *Cythridella illosvayi* has been studied in more detail and used in several paleolimnological studies. Holmes (1997) documented the presence of *C. illosvayi* along the coastal margin of Wallywash Great Pond, Jamaica, where emergent macrophytes dominate the shallow waters. Based on this modern distribution, Holmes (1998) used the presence of *C. illosvayi* in the Wallywash Great Pond sediment record as an indicator of decreased water levels. However, *C. illosvayi* ostracod valves have also been recovered from greater depths in Lake Punta Laguna, Mexico (6.3 m; Curtis et al., 1996), Lake Peten-Itza, Guatemala (7.6 m; Curtis et al., 1998), and Little Salt Spring, Florida (70 m; Zarikian et al., 2005). Thus, it seems that water depth alone does not control the distribution of *C. illosvayi*.
The identification of ostracod species in the genus *Candona* from shell morphology alone is difficult, and we lack collections of living specimens from the Las Lagunas lakes that would make species identification possible. Species of *Candona* are found throughout the world in a wide variety of habitats, thus their occurrence does not constrain paleolimnological conditions in Castilla or Salvador. However, variable preservation and changes in the isotopic composition of these valves may indicate shifting hydrological conditions in the lakes (see discussion below).
The habitat preference and geographic distribution of the charophyte *Chara haitensis* are also poorly understood and we have not made any paleolimnological interpretations based on its occurrence. The type specimen of *C. haitensis* was collected in Haiti. Proctor et al. (1971) suggested that the geographic range of *C. haitensis* is centered in the neotropics and that the species is restricted to the western hemisphere.
Preliminary analyses of aquatic macrofossils in near-surface sediments collected from all four lakes in the Las Lagunas area (Laguna Castilla, Laguna de Salvador, Laguna de Felipe, and Laguna Clara) indicate that the presence of ostracods and charophytes is probably more dependent upon water chemistry than habitat availability (Thomason et al., 2007). The presence of calcified oospores and calcite-encrusted charophytes is likely to be indicative of Ca$^{2+}$ ion saturation in the water column (Dean, 1981; Delorme, 1991). Currently, ostracod and charophyte macrofossils are only found in significant numbers in the near-surface sediments of Laguna de Felipe, which has the highest Ca$^{2+}$ ion concentrations of the four Las Lagunas lakes (Table 4.4).
Table 4.4. Selected limnological data for the lakes of Las Lagunas. Water samples were collected in January 2004.
| | Laguna Castilla | Laguna de Salvador | Laguna de Felipe | Laguna Clara |
|------------------------|-----------------|--------------------|------------------|--------------|
| Surface Area$^a$ | 1.2 ha | 0.5 ha | 0.8 ha | 0.4 ha |
| Water Depth | 4.5 m | 2.8 m | 1.8 m | 1.1 m |
| Water Temperature$^b$ | 21.7 °C | 20.2 °C | 20.3 °C | 20.0 °C |
| pH$^c$ | 7.9 | 8.1 | 7.6 | 6.8 |
| Ca$^{2+}$ Ion Concentration$^d$ | 32.9 ppm | 52.4 ppm | 88.1 ppm | 15.2 ppm |
| $\delta^{18}$O (V-SMOW)$^e$ | $-27.6$‰ | $-28.1$‰ | $-32.7$‰ | $-32.5$‰ |
| $\delta^{18}$O (V-PDB)$^e$ | $2.4$‰ | $1.9$‰ | $-2.8$‰ | $-2.5$‰ |
| Biogenic Carbonates$^f$ | No | No | Yes | No |
$^a$Surface areas were estimated based on GPS measurements by K. Orvis.
$^b$Water temperature was measured using a YSI model 55 meter.
$^c$pH was measured with an Oakton pH meter.
$^d$Chemical analyses were conducted by the University of Wisconsin Soil and Plant Analysis Lab.
$^e$Isotope measurements were conducted by the Department of Earth and Planetary Sciences Stable Isotope Lab at the University of Tennessee.
$^f$Presence or absence in the uppermost surface sediments.
Thus, we interpret the presence of ostracod and charophyte macrofossils in the Castilla and Salvador sediment records as an indication of increased Ca$^{2+}$ ion concentrations, variations that were most likely controlled by changing lake levels, with decreased lake levels increasing the concentration of Ca$^{2+}$ ions in the water column and vice versa.
**Oxygen and Carbon Isotope Composition of Biogenic Carbonates**
The oxygen isotope composition of an ostracod carapace is primarily dependent on the $\delta^{18}$O composition of the water and the temperature at which carbonate precipitation occurs (Craig, 1965; Stuiver, 1970). In tropical, closed-basin lakes with a seasonally dry climate, the $\delta^{18}$O value of the lake water is controlled primarily by the evaporation to precipitation ratio (E/P) of the lake (Fontes and Gonfiantini, 1967; Gasse et al., 1990). In some cases, landscape changes, such as widespread watershed deforestation, can also affect the $\delta^{18}$O value of lake water (Rosenmeier et al., 2002b). During periods of increased (decreased) E/P ratios, the $\delta^{18}$O value of lake water will increase (decrease) as kinetic fractionation processes raise (lower) the relative concentration of $^{18}$O compared to $^{16}$O. Assuming that long-term temperature changes in the tropics are less likely to affect $\delta^{18}$O values in the lake than are changes in the E/P ratio (Covich and Stuiver, 1974; Curtis and Hodell, 1993), the $\delta^{18}$O value of ostracod carapaces should be most indicative of variations in the E/P ratio of the lake.
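The sign of this E/P response can be illustrated with an idealized Rayleigh distillation sketch (a simplification that ignores inflow, atmospheric exchange, and humidity effects, included here only to show the direction of the effect): if evaporation leaves a fraction $f$ of the original lake volume, the residual water is enriched to approximately

$$\delta^{18}\text{O} \approx \delta^{18}\text{O}_0 - \varepsilon \ln f,$$

where $\delta^{18}\text{O}_0$ is the initial composition and $\varepsilon > 0$ is the effective evaporative enrichment factor. As $f$ decreases (higher E/P), lake-water $\delta^{18}$O, and hence ostracod $\delta^{18}$O, increases.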
Young ostracod instars have been shown to assimilate carapaces with trace element chemistries and isotopic compositions that differ from those of adults of the same species under the same conditions (Chivas et al., 1986; Engstrom and Nelson, 1991; Keatings et al., 2002). The $\delta^{18}$O composition of ostracod valves is also affected by species-specific “vital effects” (von Grafenstein et al., 1999; Keatings et al., 2002) and microhabitats (Heaton et al., 1995; Ito et al., 2003), but these problems can be minimized by analyzing numerous monospecific adult specimens.
The $\delta^{13}$C composition of lacustrine biogenic carbonates depends mainly upon the $\delta^{13}$C value of dissolved inorganic carbon (DIC) within the lake. This value, in turn, is controlled by a variety of factors including atmospheric CO$_2$ concentration, dissolution of carbonate rocks in the watershed, root respiration, watershed vegetation, and the bacterial decay of humus (Lister, 1988). In productive freshwater lakes, the most important factor in determining the $\delta^{13}$C$_{DIC}$ is the photosynthetic activity of aquatic organisms (Oana and Deevey, 1960). During photosynthesis, most aquatic organisms preferentially take up $^{12}$C from the DIC pool, thereby increasing the $\delta^{13}$C value of the remaining DIC. In light of the small water volumes and high biologic productivity of Castilla and Salvador, we interpret the $\delta^{13}$C value of biogenic carbonates produced in these lakes as a proxy of paleoproductivity.
In a recent study, Pentecost et al. (2006) documented extreme isotopic disequilibrium between the carbon and oxygen isotope compositions of the calcite encrusting specimens of *Chara hispida* and the isotopic composition of lake water in shallow, highly productive lakes. Pentecost et al. attribute this disequilibrium to the direct combination of atmospheric CO$_2$ with hydroxide ions under high pH conditions. With this in mind, we interpret the *Chara* isotopic data from Salvador only in the context of other proxy indicators of paleolimnological and paleoclimatological variability.
**Paleoenvironmental Reconstruction and Regional Context**
**Zone 7 (~2980–2250 cal yr B.P.)**
The apparent formation of Laguna Castilla is marked by deposits of organic-rich sediments dating back to ~2980 cal yr B.P. The material underlying these sediments has low pollen concentrations, high bulk densities, low organic carbon content, and large grain sizes, suggesting that the lake developed on landslide material (Figures 4.2, 4.3, and 4.6).
Some of the most positive $\delta^{13}$C$_{TOC}$ values in the Castilla sediment record are within Zone 7. There are two possible explanations for this. First, it is possible that much of the organic carbon carried within the landslide material was produced by C$_4$ plants, or that C$_4$ plants initially colonized the basin prior to lake formation. An alternative explanation is that the basin was a shallow water environment early in its history that may have stagnated seasonally, leading to methane production and outgassing. Degassing of $^{12}$C-enriched methane leaves the residual sediments enriched in $^{13}$C, producing anomalously positive $\delta^{13}$C$_{TOC}$ values (Ogrinc et al., 2002). This interpretation is supported by the similar $\delta^{13}$C$_{TOC}$ pattern (Figure 4.8) and poor pollen preservation in the basal Salvador
sediments (Figure 4.7), which may have undergone a similar genesis at a later date.
The soils in the Las Lagunas area are porous and well-drained (pers. observation). It is possible that water in the Castilla basin had a very short residence time until a clay seal formed and cut off subsurface drainage from the basin. These conditions could have led to the development of a highly productive, methanogenic, shallow water or seasonally inundated ecosystem. In either case, the sediments would have been prone to drying, which could have caused the fossil pollen degradation observed in Zone 7.
**Zone 6 (~2250–1520 cal yr B.P.)**
Pollen preservation improves markedly in the Castilla sediments around 2250 cal yr B.P. (Figure 4.6) and in the Salvador sediments around 1870 cal yr B.P. (Figure 4.7). Improvement in pollen preservation suggests more mesic conditions. The dominance of *Pinus*, Urticales, and a variety of other broadleaf pollen through Zone 6 indicates the presence of humid montane broadleaf forest (Kennedy et al. 2005) near the lakes. Moderately high microscopic charcoal concentrations (Figure 4.5) indicate that fires were common in the region during this period. The $\delta^{13}$C$_{TOC}$ records for both lakes indicate a dominance of C$_3$ plants in the two watersheds (Figure 4.8), as expected for a humid montane broadleaf forest.
Other paleoclimate records from the circum-Caribbean region also indicate that this was a relatively moist period of the late Holocene. Trace metal concentrations in the sediments of the Cariaco Basin are relatively high and
steady, indicating consistently high rainfall in northern South America (Figure 4.11; Haug et al., 2001). Relatively high lake levels are indicated for the Yucatan Peninsula (Hodell et al., 1995; 2005a) and Lake Valencia (Curtis et al., 1999). High moisture delivery to all of these sites has been interpreted as an indication of a more northerly mean boreal summer position of the ITCZ (Haug et al., 2003). A more northerly mean position of the ITCZ during the boreal summer would also yield increased precipitation for the Las Lagunas area, as the proximal-doldrum conditions would promote enhanced delivery of sea breeze moisture into the area.
**Zone 5 (~1520–890 cal yr B.P.)**
The decline in pollen from arboreal taxa and increase in herbaceous pollen in Zone 5 of both sediment records (Figures 4.6 and 4.7) indicate a period of increasing aridity at Las Lagunas. While an increase in the dominance of herbaceous plants might also be attributed to deforestation, the lack of an associated increase in mineral influx into either lake suggests deforestation was not the cause (Figure 4.3; also see Zone 4 discussion below). The steady increase in the $\delta^{13}$C$_{\text{TOC}}$ compositions in both sediment records more likely indicates a local increase in the proportion of C$_4$ plants, which tend to be more drought tolerant than C$_3$ plants (Figure 4.8). Charcoal concentrations in both records reach some of their highest levels, possibly indicating more frequent or more intense fires as a result of more arid conditions (Figure 4.5; Martin and Fahey, 2006). Zone 5 also includes the first appearance of biogenic carbonates in the two sediment records (Figures 4.9 and 4.10). We interpret the presence of *Cythridella boldii* valves and *Chara haitensis* oospores as an indication of increased Ca$^{2+}$ ion concentrations resulting from decreased lake levels. The Cariaco Basin trace element records also indicate steady decreases in precipitation delivery for northern South America during this period (Figure 4.11; Haug et al., 2001; 2003).
Figure 4.11. Comparison of selected Laguna Castilla and Laguna de Salvador proxy data with titanium concentrations from the Cariaco Basin (ODP Site 1002; $10^\circ 42'44''$ N, $65^\circ 10'11''$ W; Haug et al., 2001; 2003). Increased Ti concentrations in the Cariaco Basin sediments indicate increased terrigenous input from rivers draining northern South America as a result of increased precipitation. Haug et al. (2001; 2003) hypothesized that variations in precipitation are closely tied to the mean boreal summer position of the Intertropical Convergence Zone (ITCZ). A more northerly mean position of the ITCZ yields higher precipitation totals in northern South America and the Las Lagunas area. The age model for the Cariaco Basin data has been adjusted slightly from cal yr before A.D. 2000 to cal yr before A.D. 1950 for comparison to the Castilla and Salvador sediment records.
One of the most striking features of Zone 5 is evidence of early pedogenesis at around 360 cm depth in the Salvador record (Figure 4.2). The high organic carbon content (~27% by mass; Figure 4.3) and small grain sizes of the peds indicate that they most likely formed in situ and were not eroded and transported into the lake from the surrounding hillslopes. We interpret this ped layer to represent a short period when water levels in Salvador were sufficiently low to expose surface sediments at the core location to the atmosphere, at least episodically, if not for an extended period. *Chara haitensis* oospores were deposited immediately before and immediately after the formation of these peds (Figure 4.10), providing supporting evidence that this may have been a period of severely depressed water levels in Salvador. Woody organic macrofossils preserved in the peds and decreased $\delta^{13}$C$_{TOC}$ values in this section of the Salvador sediment core indicate that the exposed lake floor may have been colonized by woody C$_3$ plants (Figure 4.8). These organic macrofossils date to ~1210 cal yr B.P. (Table 4.2).
Castilla is a deeper lake than Salvador (Table 4.4) and there is no evidence that Castilla sediments also dried out during this time. A spike in sedimentary carbonate concentrations at 570 cm in the Castilla sediment core may reflect increased Ca$^{2+}$ ion saturation in the water column and consequent CaCO$_3$ precipitation, driven by a decrease in lake level (Figure 4.3). The $\delta^{18}$O$_{Cyth}$ record
from Castilla displays an ~1‰ increase around 1220 cal yr B.P., which may indicate decreased lake levels; however, this increase is relatively minor compared to variations in Castilla $\delta^{18}$O$_{Cyth}$ values in other intervals that do not show evidence of sediment desiccation (Figure 4.9). It is possible that this drought was relatively short-lived, or was a series of short-lived events, and so went unrecorded in the Castilla $\delta^{18}$O$_{Cyth}$ record, which has a resolution of ~20–40 years through this section of the sediment core. It is also possible that the drying at Salvador was seasonal, and that the time-averaging of Castilla $\delta^{18}$O$_{Cyth}$ values caused by sampling methods makes the record insensitive to fluctuations at this temporal scale.
Taking into account the errors associated with radiocarbon dating, the interval of apparent desiccation of the Salvador sediments correlates with an extended period of increased regional aridity and a series of severe drought events between ~1000 and ~1200 cal yr B.P. that have been documented at numerous sites in the circum-Caribbean region. Increased aridity at this time has been linked to the Terminal Classic Collapse of the Maya civilization by numerous researchers. Hodell et al. (1995; 2001; 2005a) presented isotopic and sedimentary geochemical evidence of drought at around this time from lakes throughout the Yucatan Peninsula. Haug et al. (2003) reported a series of three droughts during this interval, dating to around A.D. 810 (1140 cal yr B.P.), A.D. 860 (1090 cal yr B.P.), and A.D. 910 (1040 cal yr B.P.), in their high-resolution trace-metal record from the Cariaco Basin. Nyberg et al. (2001) reported an increase in the magnetic susceptibility of marine sediments off the coast of Puerto Rico during this time that they associated with an increase in the deposition of hematite-rich dust from Saharan Africa due to
intensified trade wind strength. Beets et al. (2006) reported an increase in dune activity and an increase in the $\delta^{18}$O composition of land snail shells on the Caribbean island of Guadeloupe, suggesting increased trade wind activity and decreased precipitation at this time. All of these findings indicate a more southerly mean boreal summer position of the ITCZ, which would have also decreased precipitation in the Las Lagunas area. The geographic diversity of these sites points to a regionally pervasive change in climate that may be indicative of a larger shift in the global climate system (Mayewski et al., 2004).
**Zone 4 (~890–700 cal yr B.P.)**
Zone 4 includes the first evidence of human activity in the Castilla and Salvador records. Maize pollen grains deposited ~890 cal yr B.P. in Castilla represent the earliest evidence of maize agriculture from the interior of Hispaniola (Figures 4.6 and 4.7; Chapter 2). Decreased concentrations of arboreal pollen types, increases in pollen concentrations of herbaceous taxa, and marked increases in charcoal influx in both lakes (particularly Castilla) indicate deforestation and the establishment of agricultural fields. Sedimentation rates (Figure 4.4) and mineral influx (Figure 4.3) increase by two orders of magnitude in the Castilla sediment record, suggesting major increases in soil erosion in the watershed, possibly coupled with increased algal productivity in the lake.
The overall indication is that the prehistoric populations that settled the Las Lagunas area had a greater impact on the landscape than did earlier episodes of climate variability or later activities by modern humans (see Zone 1 discussion below). Due to the lack of archaeological research in this interior region of
Hispaniola, we can only guess as to the identity of prehistoric settlers. Based on existing archaeological chronologies for the island, these early settlers were most likely Ostionoid. According to Wilson (1997), the Ostionoid archaeological period extends from ~1450 to 450 cal yr B.P. on the island of Hispaniola.
Based on the higher sedimentation rates, mineral and charcoal influx, $\delta^{13}$C$_{\text{TOC}}$ values, and lower concentrations of arboreal pollen in the Castilla record compared to the Salvador record, it appears that land was used more intensively in the Castilla watershed (Figures 4.3–4.8). Salvador is surrounded on three sides by steep slopes unfavorable for agriculture, but Castilla occupies a relatively flat area that would have been suitable for a variety of agricultural uses including maize agriculture. The large (~6‰) swings in the Castilla $\delta^{13}$C$_{\text{TOC}}$ record throughout Zone 4 correlate well with variations in maize pollen concentrations, indicating the $\delta^{13}$C$_{\text{TOC}}$ variability is most likely responding to variations in the abundance of maize being cultivated in the Castilla watershed (Figure 4.8; Chapter 3; Lane et al., 2006).
A spike in the abundance of *Cythridella boldii* valves early in Zone 4 of the Castilla record correlates with one of the largest increases in $\delta^{18}$O$_{\text{Cyth}}$ values in the record, at around 890 cal yr B.P. Increases in Castilla $\delta^{18}$O$_{\text{Cyth}}$ values from ~0.5‰ to ~3.2‰ in less than 25 years indicate an abrupt and severe increase in lake E/P ratios. This peak in $\delta^{18}$O$_{\text{Cyth}}$ values corresponds to the first appearance of maize in the pollen record and indicates that Castilla and Salvador were settled by prehistoric populations during, or immediately after, an apparently severe period of drought. The absence of biogenic carbonates in the Castilla record throughout
the remainder of Zone 4 makes it difficult to infer any climate variability that may have affected the prehistoric populations after their initial settlement of the area (Figure 4.9).
Human activity in the watershed of a lake can affect the $\delta^{18}$O value of lake water through the modification of watershed hydrology, but the typical isotopic shift would be in the opposite direction from that of the $\delta^{18}$O$_{Cyth}$ record presented here. Rosenmeier et al. (2002b) documented a decrease in the $\delta^{18}$O composition of lake water following severe deforestation of a watershed as a consequence of the decreased residence time of water in the soils and the consequent reduction in the evaporative enrichment of the water prior to its delivery to the lake itself.
While it is unlikely that human modification of the watershed could have caused the positive isotopic shift in the $\delta^{18}$O$_{Cyth}$ record, human activity could explain the absence of *Cythridella boldii* valves for the remainder of Zone 4. Ostracods are very sensitive to turbidity (e.g. Belis et al., 1999) and *C. boldii* valves disappear from the Castilla record just as mineral influx is peaking (Figure 4.12). A large increase in mineral influx would have increased the turbidity of Castilla during this time and could explain the temporary disappearance of *C. boldii* from the sediment record.
The apparently mild human impacts in the Salvador watershed do not appear to have affected ostracod communities of the lake. Valves of *Candona* sp. are present in the record through Zone 4, but not in sufficient concentrations for high-resolution isotopic analysis. The preservation of valves of *Candona* sp. in Zone 4 of the Salvador record indicates decreased lake levels and increased Ca$^{2+}$ ion concentrations as compared to Zone 6. However, the absence of charophyte oospores and *C. boldii* valves indicates higher lake levels during Zone 4 than during the time encompassed by Zone 5.
Figure 4.12. Comparison of mineral influx and biogenic carbonate concentrations for Laguna Castilla and Laguna de Salvador. Highlighted sections indicate periods of increased mineral influx and decreased biogenic carbonate concentrations.
**Zone 3 (~700–350 cal yr B.P.)**
Zone 3 marks a period of ecosystem recovery after human abandonment of the Castilla and Salvador watersheds around 700 cal yr B.P. Maize pollen drops out of the records, and pollen percentages for herbs in the Poaceae, Cyperaceae, and Asteraceae families decline along with sedimentation rates, charcoal influx, mineral influx, and $\delta^{13}$C$_{TOC}$ values (Figures 4.3–4.8). Arboreal pollen types increase, especially *Pinus* and those in the “Other Humid Montane Taxa” category, indicating at least some forest recovery after human abandonment of the watersheds (Figures 4.6 and 4.7).
Why humans abandoned the two watersheds at this time is unclear. A conspicuous (~5 cm) mineral clay deposit in the Castilla record punctuates the period of human occupation (Figure 4.2). It is possible that this is a storm deposit, but no similar deposit, as would be expected if a tropical storm or hurricane had affected the area, was found in the Salvador record.
Pines readily colonize and dominate poor soils at middle and high elevations in Hispaniola (Darrow and Zanoni, 1991), and the dominance of *Pinus* pollen through Zone 3 (Figures 4.6 and 4.7) may reflect the deterioration of soil quality in the Las Lagunas area due to the activities of prehistoric humans. A concomitant increase in the abundance of pine stomata in pollen samples from Zone 3 argues for a local increase in the abundance of pine (Remus et al., 2006).
Pine stomata are not effectively dispersed over long distances and the presence of stomata in lake sediments is generally interpreted to indicate the local presence of pines (Gervais and MacDonald, 2001). Kennedy et al. (2005) found this to be true in their study of modern surface pollen samples in the highlands of the Cordillera Central. Surface pollen samples from forest stands including pines contained pine stomata, while samples from grasslands lacking pine did not. An increase in the pines near the lake shore, along with their prodigious pollen production, may explain the decline in Urticales pollen percentages in the Salvador record at this time.
The conspicuous decrease in *Candona* sp. valves in the Salvador record (Figure 4.10), the absence of *Cythridella boldii* valves in Castilla (Figure 4.9), and the sharp decline in herbaceous pollen types in both records (Figures 4.6 and 4.7) indicate wetter conditions, increased lake levels, and decreased salinity at both Castilla and Salvador in Zone 3, compared to Zone 5. While the absence of ostracods in Zone 3 of the Castilla record might have been a consequence of the drastic anthropogenic watershed impacts incurred in Zone 4, the absence of *Candona* sp. valves in the Salvador record, which were present in Zone 4 despite the impacts of human activity, indicates that it was likely a shift in lake hydrology and chemistry that caused the decrease in ostracod abundance (Figures 4.9 and 4.10).
High-resolution records of paleoprecipitation from the Cariaco Basin (Haug et al., 2001), the Florida Everglades (Winkler et al., 2001), the Caribbean island of Guadeloupe (Beets et al., 2006), and the coast of Puerto Rico (Nyberg et
al., 2001) also indicate that the period between $\sim 350$ and $\sim 700$ cal yr B.P. was relatively wet (Figure 4.11). As described previously, a concurrent increase in precipitation at all of these sites is typically ascribed to a more northerly mean boreal summer position of the ITCZ and/or higher Caribbean sea surface temperatures (SSTs).
Zone 3 is roughly coincident with the latest stages of the Medieval Warm Period (MWP; $\sim 950–650$ cal yr B.P.), but precedes the coldest periods of the Little Ice Age (LIA; $\sim 450–150$ cal yr B.P.). This is generally a time associated with relatively high late Holocene average temperatures globally (Jones et al., 1998; 2001; Moberg et al., 2005). The very earliest stages of this relative increase in global temperatures have been linked to an increase in solar output called the “Medieval Maximum” (Jirikowic and Damon, 1994; Stuiver et al., 1998). An increase in solar activity and related seasonal increase in Northern Hemisphere solar insolation may have “pulled” the ITCZ to a more northerly mean boreal summer position during this time (Peterson and Haug, 2006). Caribbean SST reconstructions also indicate a general increase in warm season SSTs between 700 and 550 cal yr B.P. that could have enhanced convective activity and atmospheric moisture availability in the region (Nyberg et al., 2002). In addition, lake sediment records from South America (Moy et al., 2002) indicate decreasing El Niño frequency and more persistent La Niña conditions during this period that would have also led to a more northerly mean boreal summer position of the ITCZ (Fedorov and Philander, 2000), warmer SSTs in the Caribbean (Giannini et al., 2000), and wetter conditions in the Las Lagunas area.
**Zone 2 (~350–95 cal yr B.P.)**
The decreased dominance of *Pinus* pollen and slight increases in broadleaf arboreal pollen (Urticales, *Cecropia*, *Trema*, and Arecaceae) in Zone 2 of both records may signal further recovery of forests from the impacts of prehistoric humans some 350 years earlier (Figures 4.6 and 4.7). One of the most conspicuous aspects of Zone 2 in the Castilla and Salvador sediment records is the abundance and variety of biogenic carbonates deposited during this interval (Figures 4.9 and 4.10), including the reappearance of *Chara haitensis* oospores in the Salvador record. The only other stratigraphic level with *C. haitensis* oospores in the Salvador record is associated with evidence of very low lake levels and desiccation of the Salvador sediments. The high abundance of oospores in Zone 2 also likely indicates low water levels. The increased abundance of *C. boldii* ostracod valves in the Castilla sediment record also indicates decreased water levels.
Zone 2 roughly overlaps the coldest periods of the LIA, which have only recently been recognized in paleoclimate records from the tropics (Thompson et al., 1995; deMenocal et al., 2000; Alin and Cohen, 2003; Behling et al., 2004; Brown and Johnson, 2005; Liu et al., 2005). Trace metal concentrations in the sediments of the Cariaco Basin reach their lowest levels since the Younger Dryas during the LIA, indicating extraordinarily dry conditions for northern South America (Figure 4.11; Haug et al., 2001; Peterson and Haug, 2006). Paleolimnological records from Aguada X’caamal, Mexico also indicate increased aridity and decreased lake levels during the LIA (Hodell et al., 2005b).
Meteorological records from Nassau, Bahamas, extending back to A.D. 1811, a span that includes the latest stages of the LIA, indicate that the early 1800s included some of the coldest and driest conditions for the area in the last 200 years (Chenoweth, 1998). Caribbean SST reconstructions based on the oxygen isotope compositions of foraminifera and corals indicate a possible decrease in Caribbean SSTs of up to 3 °C (Winter et al., 2000; Watanabe et al., 2001; Nyberg et al., 2002; Haase-Schramm et al., 2003). A large decrease in SSTs certainly would have decreased evaporation and convective activity in the region (Hodell et al., 2005b). Furthermore, correlations between LIA records from the tropics and those from the high latitudes indicate intensified meridional airflow and an increased meridional temperature gradient during this time that would have led to a more southerly mean boreal summer position of the ITCZ (Kreutz et al., 1997; Hodell et al., 2005b; Peterson and Haug, 2006). A decrease in Caribbean SSTs and a suppression of the annual cycle are the likely mechanisms responsible for increased aridity in the Las Lagunas area at this time.
The oxygen isotope signatures of biogenic carbonates in Zone 2 of the Castilla and Salvador records also indicate increasing aridity and decreased lake levels (Figures 4.9 and 4.10). For example, in the Castilla record, $\delta^{18}$O$_{\text{Cyth}}$ values reach as high as ~4.0‰, an increase of ~2‰ over the average $\delta^{18}$O$_{\text{Cyth}}$ value within the Castilla record. Although biogenic carbonates in Zone 2 have some of the highest $\delta^{18}$O values on record, there is no indication that either lake dried out completely, as Salvador apparently did during Zone 5.
If the $\delta^{18}$O values reflect E/P ratios, and presumably lake level variability, why don’t we see more positive $\delta^{18}$O values in the carbonate record in Zone 5? Based on the data presented here and other paleoclimate records from the region, we believe that the fundamental characteristics of these two arid periods may have differed. High-resolution records of the arid period around 1000–1200 cal yr B.P. indicate this was a period of generally arid conditions interrupted by a series of high-amplitude, extended drought events that occurred as often as once every 50 years (Haug et al., 2003; Hodell et al., 2005a). High-resolution records of the LIA in the circum-Caribbean seem to indicate a more prolonged and severe period of aridity, perhaps lasting 400 years, again interrupted by extreme drought events, but with these events perhaps only occurring once every 100 years (Hodell et al., 2005b; Peterson and Haug, 2006).
A more prolonged period of arid conditions during the LIA, perhaps accompanied by less seasonal or inter-annual variability, could have severely depressed lake levels over long intervals, leading to more positive $\delta^{18}$O values in the time-averaged biogenic carbonate record compared to the 1000–1200 cal yr B.P. drought. The 1000–1200 cal yr B.P. drought might have included one or more extreme short-term drought events, leading to the desiccation of Salvador, but lacked the long-term lake level drawdown necessary to increase the time-averaged $\delta^{18}$O values to the levels observed during the LIA. Oxygen isotope records from Lake Valencia, Venezuela also display a larger shift in $\delta^{18}$O values during the LIA than during the 1000–1200 cal yr B.P. period (Curtis et al., 1999), and sediment density records from Lake Chichancanab, Mexico, indicate the
occurrence of more severe individual drought events during the 1000–1200 cal yr B.P. interval than at any other time in the last 2000 years (Hodell et al., 2005a). Thus, we propose that extreme drought events during the arid period from 1000–1200 cal yr B.P. were more severe than those that occurred during the LIA, but that the LIA was, on average, a drier interval of time in the Las Lagunas area, and potentially the circum-Caribbean as a whole.
This interpretation is supported by foraminiferal isotope records collected off the coast of Puerto Rico. Nyberg et al. (2002) presented isotopic evidence of high Caribbean SSTs and high sea surface salinities (SSSs) off the coast of Puerto Rico between 1000 and 1250 cal yr B.P. This pattern of increased SSTs and increased SSSs is unique because modern increases in SSTs typically result in increased evaporation and convective activity in the Caribbean and a decrease in SSSs. Nyberg et al. suggested that this unexpected pattern may be the result of more frequent or intensified El Niño events, which can cause a rise in Caribbean SSTs, but also suppress convective activity and precipitation, thereby increasing Caribbean SSSs (Giannini et al., 2000). It is interesting to note that multiple researchers have provided evidence of anomalously frequent and powerful El Niño events around 1200 cal yr B.P. (Quinn, 1992; Ely et al., 1993; Moy et al., 2002; Rein et al., 2004; Mohtadi et al., in press).
Nyberg et al. (2002) presented evidence of systematically different climate dynamics in the Caribbean during the LIA. Isotopic and foraminiferal faunal assemblage records from the coastal sediments of Puerto Rico indicate a ~2 °C drop in mean SSTs and an increase in SSSs during the LIA, which is the expected
pattern. According to the results of their artificial neural network analysis, the decrease in mean SSTs during the LIA was primarily attributable to significantly cooler SSTs during the winter, which Nyberg et al. primarily associated with intensified polar air outbreaks into the Caribbean. Nyberg et al. also suggested that increased upwelling, as a result of intensified trade winds, and decreased deep water formation in the North Atlantic could have led to the decreased Caribbean SSTs during the LIA.
In any case, the data and interpretations presented by Nyberg et al. (2002) point to fundamentally different climatic conditions in the Caribbean between 1000 and 1250 cal yr B.P. and during the LIA, in line with the Castilla and Salvador records presented here. Intensified El Niño events around 1200 cal yr B.P. (Quinn, 1992; Ely et al., 1993; Moy et al., 2002; Rein et al., 2004; Mohtadi et al., in press) could have produced drought events severe enough to lead to the desiccation of Salvador. However, El Niño events are relatively short-lived climatic events and may not be recorded in the relatively coarse oxygen isotope records of Castilla and Salvador, leading to the relatively lower $\delta^{18}$O values in the Castilla and Salvador records around 1200 cal yr B.P., compared to the LIA. On the other hand, longer-term shifts in the Caribbean climate or ocean systems, such as the influence of more powerful polar fronts, decreased SSTs as a result of increased upwelling, or a decrease in warm water import from the tropical Atlantic as a result of decreased deep water formation in the North Atlantic (Nyberg et al. 2002), may have led to more consistently arid conditions on the island of Hispaniola during the LIA. These longer-term signals could be captured
in the time-averaged carbonate $\delta^{18}$O records of Castilla and Salvador and could explain the relatively higher average $\delta^{18}$O values recorded during the LIA compared to the period between 1250 and 1000 cal yr B.P., when lake levels, at least episodically, apparently fell even lower.
**Zone 1 (~95 to −54 cal yr B.P.)**
Zone 1, the period of most recent human occupation of the Las Lagunas watersheds, shows near-synchronous increases in mineral influx in both Castilla and Salvador, indicating increases in watershed erosion likely tied to historic human settlement and land use (Figure 4.3). Increases in herbaceous pollen types, particularly Poaceae and Amaranthaceae (Figures 4.6 and 4.7), also indicate human settlement and deforestation in both watersheds.
The pollen records of both Castilla and Salvador also include abrupt increases in the percentages of Myrtaceae pollen in Zone 1 (Figures 4.6 and 4.7). While other Myrtaceae are native to the mid-elevations of Hispaniola, most of the Myrtaceae pollen in the upper sediments is most likely the pollen of *Syzygium jambos* Alston (rose apple), which is currently the dominant arboreal species along the shores of both lakes. The morphology of fossil Myrtaceae pollen isolated from the sediment cores is identical to that of modern pollen collected from the rose apple trees currently surrounding the lakes. The rose apple is an invasive tree that was introduced to the Caribbean in A.D. 1762 (Morton, 1987). Rose apple produces abundant fruit and may have been purposefully introduced to the Las Lagunas area as a source of food and possibly firewood.
The marked increase in the abundance of *Typha* pollen in Zone 1 is also noteworthy (Figures 4.6 and 4.7). *Typha domingensis* is an emergent aquatic plant that currently grows along the shores of both Castilla and Salvador. Increased dominance of *Typha* after historic human settlement may relate to increased nutrient availability. A large increase in the abundance of algal remains, particularly those from algae in the genus *Pediastrum*, through Zone 1 may indicate increasing eutrophication of the lake after human settlement and the introduction of livestock to the area (data not shown; Bradshaw et al., 2005). Local inhabitants have also reported an increase in aquatic plant biomass that they associate with the introduction of livestock to the area.
The $\delta^{13}$C$_{\text{TOC}}$ signatures of both Castilla and Salvador increase sharply at the beginning of Zone 1, most likely as a result of deforestation and the reintroduction of maize to the landscape (Figure 4.8; Lane et al., 2004). Castilla $\delta^{13}$C$_{\text{TOC}}$ values decline steadily through Zone 1, while Salvador $\delta^{13}$C$_{\text{TOC}}$ values remain high up to the present. This discrepancy may reflect the modern distribution of maize fields near the lakes. Maize fields are presently located relatively far from the shore of Castilla, but just a few meters from the shore of Salvador. Considering the close proximity of the maize fields at Salvador, it is unclear why there are no maize pollen grains in the uppermost sediments of the Salvador core.
The sudden rise in mineral influx into both lakes associated with the modern occupation of the watersheds is coincident with a disappearance of ostracod valves from the Castilla sediment record, as was the case during the
prehistoric occupation of the Laguna Castilla watershed (Figure 4.12). Unlike the period of prehistoric occupation in Zone 4, the modern rise in mineral influx into Salvador is also coincident with the disappearance of ostracod valves. This is probably because modern mineral influxes into Salvador, and presumably human impacts in the Salvador watershed, are much higher than they ever were at any other time in the Salvador sediment record (Figure 4.12).
Much like the transition from Zone 5 to Zone 4, the transition from Zone 2 to Zone 1 includes some of the most extreme positive oxygen isotope excursions on record (Figures 4.9 and 4.10). This is the case for all of the biogenic carbonates present at this time. In the Salvador record, the $\delta^{18}O_{Cyth}$ values reach a maximum of 4.0‰, the $\delta^{18}O_{Cand}$ values reach a maximum of 3.9‰, and the $\delta^{18}O_{Chara}$ values reach a maximum of 3.5‰, all at around 80 cal yr B.P. The peak in $\delta^{18}O_{Cyth}$ values in the Castilla record at around 196 cm (124 cal yr B.P. according to the Castilla age model) appears to occur ~40 years prior to the peaks in the Salvador sediments (Figure 4.12). Considering the rapidly changing sedimentation rates through this section of the two records, errors associated with radiocarbon dating, and the difficulty of calibrating radiocarbon dates of this young age, it is quite possible that the positive excursion in the Castilla $\delta^{18}O_{Cyth}$ record actually corresponds to the positive excursions in the Salvador carbonate records at around 80 cal yr B.P. This is further supported by the fact that the rise in mineral influx into both lakes occurs just after the positive peak in $\delta^{18}O_{Cyth}$ values (Figures 4.3, 4.9, and 4.10). If one assumes that both lakes were settled at
roughly the same time, which is likely considering their close proximity, then a simultaneous increase in mineral influx would be expected.
Synchronous shifts in proxy indicators of human presence and two periods of drought in the sediment records of the two lakes (Zones 1 and 4) are consistent with population migrations, during severe drought events, toward land with perennially dependable water sources. According to archaeologists and historians, both prehistoric and historic humans appear to have primarily settled the coasts and fertile valleys of the island of Hispaniola (Rouse, 1992; Bolay, 1997; Wilson, 1997). Hispaniola as a whole has very few natural lakes or sources of fresh water other than rivers, which are not necessarily dependable year-round and were less than ideal for maintaining livestock during historic times. In times of severe drought, it is possible that humans were driven inland and into the highlands in search of water bodies such as the regionally unique lakes of Las Lagunas.
**Summary and Conclusions**
**Climate History**
Isolating climate signals in sediment records affected by human activity can be difficult (Horn, in press). The multi-proxy, multi-site approach we have employed here has improved our ability to separate anthropogenic and climate signals in the Castilla and Salvador sediment records. Figure 4.11 and Table 4.5 summarize the general climate variability for the Las Lagunas area over the last ~3000 years.
Table 4.5. Climate summary for the Las Lagunas area.
| Zone | Age (cal yr B.P.) | Age (AD/BC) | Climate Conditions | Notes |
|------|----------------------------|-------------------|--------------------|----------------------------------------------------------------------|
| 1 | 95 to −54 cal yr B.P. | A.D. 1855 to 2004 | Arid (?) | Increased calcium carbonate content in the sediments; paleoshorelines indicate higher lake levels in the past; record obscured by human activity |
| 2 | 350 to 95 cal yr B.P. | A.D. 1600 to 1855 | Arid | Abundant biogenic carbonates; increased $\delta^{18}$O values |
| 3 | 700 to 350 cal yr B.P. | A.D. 1250 to 1600 | Mesic | Absence of biogenic carbonates; high arboreal pollen concentrations |
| 4 | 890 to 700 cal yr B.P. | A.D. 1060 to 1250 | Increasingly Mesic (?) | Presence of *Candona* valves in the Laguna de Salvador sediment record; record obscured by human activity |
| 5 | 1520 to 890 cal yr B.P. | A.D. 430 to 1060 | Arid | *Cythridella boldii* present in the Laguna Castilla sedimentary record; progressive decrease in arboreal pollen concentrations and increase in herbaceous pollen; evidence of desiccation in the Laguna de Salvador sedimentary record |
| 6 | 2250 to 1520 cal yr B.P. | 300 B.C. to A.D. 430 | Mesic | Absence of biogenic carbonates; high arboreal pollen concentrations |
| 7 | 2980 to 2250 cal yr B.P. | 1030 to 300 B.C. | Variable (?) | Poor pollen preservation; well-preserved roots; positive $\delta^{13}$C values indicative of methanogenesis |
The precipitation regime of the Las Lagunas area is controlled primarily by the seasonal proximity of the ITCZ. When the ITCZ is displaced southwards, high pressure dominates the Las Lagunas area, limiting convective activity and the onshore flow of moisture from the Caribbean Sea. When the ITCZ reaches a more northerly mean position, the nearby doldrum conditions enhance convective activity and the onshore transport of moisture onto the Caribbean slope of the Cordillera Central. The close correlations between the Las Lagunas climate proxy records and proxy records of mean ITCZ position from throughout the circum-Caribbean, especially those from the Cariaco Basin (Figure 4.11), further support the interpretation that shifts in the mean boreal summer position of the ITCZ over the last few millennia have been the primary driver of late Holocene climate variability in the region.
The Las Lagunas sediment records provide some of the best terrestrial records of discrete climatic “events” in the northeastern Caribbean. The first was a severe drought ~1210 cal yr B.P., possibly one of the most severe drought “events” of the last 2000 years. This drought led to the apparent desiccation of Salvador and may be related to the series of droughts linked to the Terminal Classic Collapse of the Maya civilization on the Yucatan Peninsula. The Las Lagunas sediment records also provide evidence of a relatively wet Medieval Warm Period (MWP) in the eastern Caribbean. Zone 3 of the Castilla and Salvador proxy records coincides with the latest stages of the MWP and includes evidence of increased lake levels and C$_3$ forest dominance. Zone 2 in both records provides further evidence that the Little Ice Age (LIA) may have been, on average, one of
the most arid periods in the circum-Caribbean in the last 2000 years. There is no evidence that Castilla or Salvador ever dried out completely during the LIA, but high concentrations of *C. haitensis* oospores and other biogenic carbonates, as well as maximum $\delta^{18}$O values, indicate an extended period of depressed lake levels during the LIA. These three discrete climatic “events” appear to have had profound impacts on both the natural vegetation and disturbance regimes of the region and thus likely affected human populations that occupied the area, as well.
**Human-Environment Interactions**
Lake sediments have long been recognized as excellent archives of the environmental impacts of prehistoric human populations and societies. Over the last decade, lake sediments have also been increasingly recognized as excellent archives of information regarding the impact of climate change on human populations (e.g. deMenocal, 2001). The paleolimnological histories of Castilla and Salvador provide new information regarding both the environmental impacts of prehistoric and modern human populations in the interior of Hispaniola and the impacts of circum-Caribbean climate change on human populations.
The Las Lagunas lakes are marked by two distinct periods of human occupation over the last ~2000 years. The first occupation, commencing ~890 cal yr B.P., was coincident with what was apparently a severe drought “event” that punctuated an extended period of aridity for the region. The second occupation, commencing ~95 cal yr B.P., was also coincident with an apparently severe drought “event” punctuating an extended period of aridity during the LIA.
Unlike most other records of prehistoric cultural responses to climate variability, such as those from the Yucatan Peninsula (Hodell et al., 1995) and the island of Guadeloupe (Beets et al., 2006), some of the most severe periods of drought in the Las Lagunas area appear to be associated with human occupation, as opposed to abandonment.
The limited number and size of lakes, steep topography, and poor soils of Las Lagunas probably made the area unsuitable or undesirable as a large population center at any point in time. However, freshwater lakes are rare on the island of Hispaniola and the Las Lagunas lakes represent a uniquely dependable inland water source. It is possible that humans were migrating out of large regional population centers on the island during periods of increased aridity and smaller populations were resettling in areas with dependable water sources, such as Las Lagunas. This hypothesis could explain the unexpected pattern of human settlement as opposed to abandonment during drought for the Las Lagunas area, but further research is necessary to verify this hypothesis and to place these potential population migrations into archaeological and historical context. While abundant attention has been devoted to the inter-island migrations of prehistoric Caribbean populations, very little attention has been devoted to the intra-island migrations of these same populations.
The activities of prehistoric populations had long-lasting effects on the vegetation and disturbance regimes of the Las Lagunas area, as well as on aquatic organisms in Castilla. It appears as though the natural vegetation of the area had only just begun to recover some 350 years after prehistoric human abandonment, only to be disturbed once again by the more recent occupation ~95 cal yr B.P.
The benthic ostracod *Cythridella boldii* disappears completely from the Castilla sediment record following prehistoric human occupation, most likely due to increased lake turbidity from increased mineral influx. It was not until some 100 years later that *C. boldii* finally returned to the Castilla sediment record.
After prehistoric site abandonment, charcoal values in both sediment records never again approached earlier levels. This decrease in charcoal abundance in the Las Lagunas area may have been the result of a significant decrease in soil fertility due to prehistoric erosion and a subsequent decrease in plant biomass. While it is also possible that a shift in climate could have decreased charcoal abundance by reducing fire occurrence, paleolimnological evidence indicates similar hydrological conditions both prior to and following human settlement in the area. These potentially long-lasting impacts of prehistoric human populations on vegetation and fire regimes should be kept in mind by researchers analyzing modern-day “natural” fire regimes and by land managers interested in instituting prescribed burns on Caribbean islands to recreate “natural” fire regimes and maintain “natural” vegetation assemblages.
The most recent occupation of the Las Lagunas watersheds has also had significant impacts on the landscape and the lakes. Like the prehistoric occupation, the most recent occupation of the Las Lagunas area is associated with deforestation and an increase in mineral influx into both lakes. Once again, this increase in mineral influx is coincident with the disappearance of ostracods from both sediment records. The increased abundance of the alga *Pediastrum* sp. and
increased dominance of *Typha domingensis* in the pollen record may indicate increased eutrophication in both lakes.
**Conclusions**
Terrestrial records of environmental change from the islands of the Caribbean are of great importance because of the unique biology, climatology, and history of these island settings. Despite the importance of these islands, the long-term environmental histories of most Caribbean islands remain poorly understood. The Castilla and Salvador sediment records provide evidence of regionally coherent climate variability that affected the interior of Hispaniola during the late Holocene and support for the hypothesis that variations in the mean latitudinal position of the ITCZ have been a primary driver of Holocene climate change in the circum-Caribbean region. The multi-proxy paleoenvironmental records of Castilla and Salvador also provide some of the first insights into prehistoric human-environment interactions in the interior of Hispaniola and provide testable hypotheses regarding the cultural response of Caribbean islanders to rapid climate change.
CHAPTER 5
Conclusions and Summary
This study has provided insights into late Holocene climate, vegetation, and human history in Hispaniola, and has contributed to methods for studying prehistoric agriculture using stable carbon isotopes. My dissertation research includes the earliest evidence of maize agriculture from the interior of Hispaniola (Chapter 2), evidence that the stable carbon isotope composition of bulk sediments can be used to estimate, at high temporal resolution, relative shifts in the abundance of maize being cultivated in a small neotropical watershed (Chapter 3), and a multi-proxy record of paleoenvironmental change spanning the last ~3000 years from two small lakes in the mid-elevations of the Cordillera Central of the Dominican Republic (Chapter 4).
Combined, these three studies have yielded new information regarding the geographic distribution of maize agriculture and the importance of maize agriculture to prehistoric populations of Hispaniola, the impacts of both modern and prehistoric humans on the natural environment of the island of Hispaniola, and the impacts of climate change on the natural ecosystems and human populations of the island. When compared with other records of climate change from the region, the multi-proxy record of paleoenvironmental change that I have produced contributes insight into the regional coherence of, and possible mechanisms responsible for, late-Holocene climate changes in the circum-Caribbean region.
Maize pollen isolated from the sediments of Laguna Castilla and Laguna de Salvador dates back to ~A.D. 1060 and represents the earliest and most securely dated evidence of maize agriculture from the interior of Hispaniola. Based on evidence preserved in archaeological sites throughout the Caribbean, many archaeologists and ethnobotanists believe that maize was a very minor component in the diets of prehistoric Caribbean populations. The abundance of maize pollen grains preserved in the Laguna Castilla and Laguna de Salvador sediments, combined with skeletal isotopic evidence from the interior of Puerto Rico (Stokes, 1998), indicates that maize consumption may have been more prevalent in the interiors of Caribbean islands where marine resources were unavailable or too distant to be exploited efficiently. This finding emphasizes the need for more archaeological and ethnobotanical studies in the interiors of Caribbean islands.
The abundance of maize pollen in the Laguna Castilla sediment core, combined with high sedimentation rates during this period of prehistoric occupation, provided the necessary conditions to test the sensitivity of a relatively new proxy of forest clearance and maize agriculture. The stable carbon isotope composition of bulk sediments ($\delta^{13}$C$_{\text{TOC}}$) proved to be an effective proxy for the occurrence of prehistoric forest clearance and maize agriculture in the mesic neotropics. The stable isotope composition of sediments is sensitive to these activities because agricultural settings tend to be dominated by C$_4$ plants, which have stable carbon isotope compositions distinct from those of the C$_3$ plants that dominate undisturbed neotropical forests (Lane et al., 2004). Theoretically, the relative shift in $\delta^{13}$C$_{\text{TOC}}$ signatures through time may be indicative of the relative
extent of maize agriculture within a particular watershed (Lane et al., in review). My high-resolution analyses of $\delta^{13}$C$_{\text{TOC}}$ values, maize pollen concentrations, and mineral influx into Laguna Castilla document the sensitivity of $\delta^{13}$C$_{\text{TOC}}$ signatures to the amount of maize being cultivated within a small tropical watershed. Shifts in the $\delta^{13}$C$_{\text{TOC}}$ record lag shifts in maize pollen concentrations by a few years, perhaps due to the time required for the breakdown of maize tissues and subsequent transport of this carbon to the sedimentary basin. In addition, the relative shifts in $\delta^{13}$C$_{\text{TOC}}$ values ($\Delta^{13}$C$_{\text{TOC}}$) appear to be sensitive to variations in allochthonous carbon influx, something that must be considered in any future models intended to reconstruct the spatial scale of maize agriculture in a watershed using the $\delta^{13}$C signature of lake sediments.
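As a minimal sketch of the relative-shift measure discussed above (the pre-clearance baseline here is an illustrative choice, not a value specified in this chapter), the relative shift at a given depth or time $t$ can be expressed as

$$\Delta^{13}\text{C}_{\text{TOC}}(t) = \delta^{13}\text{C}_{\text{TOC}}(t) - \delta^{13}\text{C}_{\text{TOC}}^{\text{baseline}}$$

where $\delta^{13}\text{C}_{\text{TOC}}^{\text{baseline}}$ might be taken as the mean $\delta^{13}$C$_{\text{TOC}}$ value of sediments deposited prior to forest clearance, so that positive $\Delta^{13}$C$_{\text{TOC}}$ values track the increased contribution of C$_4$ (maize-derived) carbon to the sedimentary basin.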
My ~3000-year multi-proxy paleoenvironmental reconstruction indicates that the Laguna Castilla and Laguna de Salvador lake basins formed at different times and were initially probably shallow-water, methanogenic environments prone to desiccation. Pollen assemblages indicate that the mid-elevations of the Cordillera Central were relatively moist from 2250 to 1520 cal yr B.P. Decreasing abundances of arboreal pollen types, increasing grass pollen concentrations, increasing $\delta^{13}$C$_{\text{TOC}}$ values, and sedimentary evidence that Laguna Salvador may have dried out completely, all indicate increasingly arid conditions for the region between 1520 and 890 cal yr B.P. The later portions of this arid period correspond well with regional evidence of drought from throughout the circum-Caribbean and may have been produced by the same shifts in atmospheric circulation associated with the Yucatan Peninsula droughts implicated in the collapse of the Maya civilization (Hodell et al., 1995, 2005a; Gill, 2000; Haug et al., 2003).
Humans settled the Laguna Castilla and Laguna de Salvador watersheds around 890 cal yr B.P. Drastic increases in mineral influx, charcoal influx, and sedimentation rates at Laguna Castilla, combined with the appearance of maize pollen at both sites and increases in weedy herbaceous pollen at the expense of arboreal taxa, indicate prehistoric forest clearance and agriculture. These prehistoric environmental impacts appear to have been more severe than any other natural or anthropogenic disturbance over the last two millennia, especially in the Laguna Castilla basin. At ~700 cal yr B.P., all of these proxies reverse, indicating abandonment of the watersheds by humans for reasons that remain unclear.
Following abandonment of the Laguna Castilla and Laguna de Salvador watersheds, pines became the dominant arboreal species in the area, possibly as a result of decreased soil fertility due to the high erosion rates associated with the period of prehistoric human agriculture. The pollen records of both lakes indicate that arboreal taxa typical of the native lower montane moist forest, such as *Trema*, *Cecropia*, other genera in the Urticales order, and *Myrsine*, did not reach pre-occupation levels in the area for some 350 years following site abandonment. In addition, charcoal concentrations never again reached pre-occupation levels in either sediment record.
The most recent (and ongoing) occupation of the Laguna Castilla and Laguna de Salvador watersheds began ~95 cal yr B.P. and is also associated with increased erosion, deforestation, and possibly increased eutrophication of the
lakes as a result of livestock maintained in the area. The increased abundance of *Typha domingensis* pollen in the two sediment records and the appearance of the alga *Pediastrum* sp. likely indicate increased nutrient availability in the lakes. Local inhabitants have also reported an increase in aquatic plant biomass that they associate with the introduction of livestock to the area.
The presence of biogenic carbonates in the sediments of Laguna Castilla and Laguna de Salvador allowed the reconstruction of prehistoric evaporation/precipitation (E/P) ratios for both lakes using stable oxygen isotope ($\delta^{18}$O) analyses. The prehistoric and modern occupations of the Laguna Castilla and Laguna de Salvador watersheds coincide with the two largest positive oxygen isotope excursions on record. The synchronous shifts in proxy indicators of human occupation and drought twice in the sediment records of the two lakes may indicate population migration into the interior of Hispaniola in search of perennially dependable water sources during severe drought events. This pattern of occupation is the opposite of that in most other circum-Caribbean geoarchaeological records, such as those from the Yucatan Peninsula (Hodell et al., 2005a) and the island of Guadeloupe (Beets et al., 2006), where drought is typically associated with site abandonment rather than occupation. Further paleolimnological studies of small lakes in the interior of Hispaniola and other Caribbean islands will be necessary to determine whether this pattern of climatically induced human migration was common in the region.
The $\delta^{18}$O records of Laguna Castilla and Laguna de Salvador may also provide insights into regional climate changes and the mechanisms responsible for
these changes. For example, the most positive $\delta^{18}$O values on record in both lakes occurred during the Little Ice Age (LIA), indicating this period may have been one of the driest in the region over the last 3000 years. A positive, but relatively smaller, excursion in $\delta^{18}$O values is also evident around 1210 cal yr B.P. in the Laguna Castilla $\delta^{18}$O record and is accompanied by evidence of desiccation in the Laguna de Salvador sediment record.
The maximum $\delta^{18}$O values during the LIA are not associated with any evidence of lake desiccation in the Laguna Castilla or Laguna de Salvador sediment records. This shift in the relationship between these two proxy indicators may indicate a fundamental shift in climate dynamics. The biogenic carbonate $\delta^{18}$O record is time-averaged because it consists of carbonates produced and deposited over an extended period of time; thus any short-lived droughts would be hard to detect using the $\delta^{18}$O record. If the drought, or series of droughts, that led to the desiccation of Laguna de Salvador ~1210 cal yr B.P. was short-lived, perhaps related to intensified El Niño events (Nyberg et al., 2002), it may not be detectable in the time-averaged $\delta^{18}$O record. However, longer-lived droughts would be detectable in the $\delta^{18}$O record. Nyberg et al. (2002) have proposed that the LIA may have involved a fundamental shift in the climate regime of the Caribbean as a result of intensified polar air outbreaks, intensified trade winds, and/or decreased deep water formation in the North Atlantic. These types of changes could have led to longer-lived (multi-decadal) droughts in the Caribbean, as opposed to the short-term (annual) changes related to increased El Niño intensity or frequency.
On longer timescales, the paleoprecipitation records of Laguna Castilla and Laguna de Salvador correlate well with regional paleoprecipitation records, especially those from the Yucatan Peninsula and the Cariaco Basin. The correlation of these records provides further evidence that variations in the mean annual position of the Intertropical Convergence Zone (ITCZ) have been a primary driver of circum-Caribbean climate change throughout the Holocene (Hodell et al., 1991; Haug et al., 2001), and that the tropics were not immune to global climate change events (Mayewski et al., 2004) once thought to have affected only the high latitudes.
Despite the rapidly increasing number of paleoenvironmental records available from throughout the neotropics, voids still exist in our understanding of Holocene climate change, the impacts of these changes on ecosystems and human populations, and the impacts of prehistoric human populations on the natural environment, especially in the eastern Caribbean and tropical North Atlantic. This dissertation has contributed to an understanding of all of these topics and represents one of the very few paleoenvironmental reconstructions from the interior of any Caribbean island. Future high-resolution paleoenvironmental studies using new techniques, such as compound-specific isotopic analyses, will help to resolve and further refine the environmental history of the circum-Caribbean and the role of the neotropics in global climate change.
LIST OF REFERENCES
Abbott, M.B., Wolfe, B.B., Aravena, R., Wolfe, A.P., Seltzer, G.O., (2000) Holocene hydrological reconstructions from stable isotopes and paleolimnology, Cordillera Real, Bolivia. *Quaternary Science Reviews*, 19(17–18), 1801–1820.
Alin, S.R., Cohen, A.S., (2003) Lake-level history of Lake Tanganyika, East Africa, for the past 2500 years based on ostracode-inferred water-depth reconstruction. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 199(1–2), 31–49.
Anchukaitis, K.J., Horn, S.P., (2005) A 2000-year reconstruction of forest disturbance from southern Pacific Costa Rica. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 221(1–2), 35–54.
Aucour, A.M., Bonnefille, R., Hillaire-Marcel, C., (1999) Sources and accumulation rates of organic carbon in an equatorial peat bog (Burundi, East Africa) during the Holocene: carbon isotope constraints. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 150(3–4), 179–189.
Aylor, D.E., Baltazar, B., Schoper, J., (2005) Some physical properties of teosinte (*Zea mays* subsp. *parviglumis*) pollen. *Journal of Experimental Botany*, 56(419), 2401–2407.
Bartlett, A.S., Barghoorn, E.S., (1973) Phytogeographic history of Panama during the past 12,000 years: a history of vegetation, climate and sea-level change. In: A. Graham (Ed.), *Vegetation and Vegetational History of Northern Latin America*, pp. 203–299. Elsevier, Amsterdam.
Beach, T., (1998) Soil catenas, tropical deforestation, and ancient and contemporary soil erosion in the Peten, Guatemala. *Physical Geography*, 19(5), 378–405.
Beets, C.J., Troelstra, S.R., Grootes, P.M., Nadeau, M.J., van der Borg, K., de Jong, A.F.M., Hofman, C.L., Hoogland, M.L.P., (2006) Climate and pre-Columbian settlement at Anse a la Gourde, Guadeloupe, northeastern Caribbean. *Geoarchaeology-an International Journal*, 21(3), 271–280.
Behling, H., Pillar, V.D., Orlóci, L., Bauermann, S.G., (2004) Late Quaternary Araucaria forest, grassland (Campos), fire and climate dynamics, studied by high-resolution pollen, charcoal and multivariate analysis of the Cambara do Sul core in southern Brazil. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 203(3–4), 277–297.
Belis, C.A., Lami, A., Guilizzoni, P., Ariztegui, D., Geiger, W., (1999) The late Pleistocene ostracod record of the crater lake sediments from Lago di Albano (Central Italy): changes in trophic status, water level and climate. *Journal of Paleolimnology*, 21, 151–169.
Bender, M.M., (1971) Variations in the 13C ratios of plants in relation to the pathway of photosynthetic carbon dioxide fixation. *Phytochemistry*, 10, 1239–1244.
Berger, A., Loutre, M.F., (1991) Insolation values for the climate of the last 10 million years. *Quaternary Science Reviews*, 10, 297–317.
Berglund, B.E., (1986) *Handbook of Holocene Paleoecology and Paleohydrology*. John Wiley and Sons, New York.
Bertran, P., Bonnissent, D., Imbert, D., Lozouet, P., Serrand, N., Stouvenot, C., (2004) Caribbean palaeoclimates since 4000 BP: the Grand-Case Lake record at Saint Martin. *Comptes Rendus Geoscience*, 336(16), 1501–1510.
Bigg, G., (2003) *The Oceans and Climate*. Cambridge University Press, Cambridge, UK.
Binford, M.W., (1982) Ecological history of Lake Valencia, Venezuela - Interpretation of animal micro-fossils and some chemical, physical, and geological features. *Ecological Monographs*, 52(3), 307–333.
Black, D.E., Peterson, L.C., Overpeck, J.T., Kaplan, A., Evans, M.N., Kashgarian, M., (1999) Eight centuries of North Atlantic Ocean atmosphere variability. *Science*, 286(5445), 1709–1713.
Black, D.E., Thunell, R.C., Kaplan, A., Peterson, L.C., Tappa, E.J., (2004) A 2000-year record of Caribbean and tropical North Atlantic hydrographic variability. *Paleoceanography*, 19(2), Art. No. PA2022, doi:10.1029/2003PA000982.
Bolay, E., (1997) *The Dominican Republic: A Country Between Rain Forest and Desert; Contributions to the Ecology of a Caribbean Island*. Margraf Verlag, Weikersheim.
Boom, A., Mora, G., Cleef, A.M., Hooghiemstra, H., (2001) High altitude C-4 grasslands in the northern Andes: relics from glacial conditions? *Review of Palaeobotany and Palynology*, 115(3–4), 147–160.
Bradbury, J.P., Leyden, B., Salgado-Labouriau, M., Lewis, W.M., Schubert, C., Binford, M.W., Frey, D.G., Whitehead, D.R., Weibezahn, F.H., (1981) Late Quaternary environmental history of Lake Valencia, Venezuela. *Science*, 214(4527), 1299–1305.
Bradshaw, E.G., Rasmussen, P., Nielsen, H.L., Anderson, N.J., (2005) Mid- to late-Holocene land use change and lake development at Dallund Sø, Denmark: trends in lake primary production as reflected by algal and macrophyte remains. *Holocene*, 15(8), 1130–1142.
Brenner, M., Binford, M.W., (1988) A sedimentary record of human disturbance from Lake Miragoane, Haiti. *Journal of Paleolimnology*, 1, 85–97.
Brincat, D., Yamada, K., Ishiwatari, R., Uemura, H., Naraoka, H., (2000) Molecular-isotopic stratigraphy of long-chain n-alkanes in Lake Baikal Holocene and glacial age sediments. *Organic Geochemistry*, 31(4), 287–294.
Brown, E.T., Johnson, T.C., (2005) Coherence between tropical East African and South American records of the Little Ice Age. *Geochemistry Geophysics Geosystems*, 6, Art. No. Q12005, doi:10.1029/2005GC000959.
Brown, P., Kennett, J.P., Ingram, B.L., (1999) Marine evidence for episodic Holocene megafloods in North America and the northern Gulf of Mexico. *Paleoceanography*, 14(4), 498–510.
Burney, D.A., Burney, L.P., Macphee, R.D.E., (1994) Holocene charcoal stratigraphy from Laguna Tortuguero, Puerto Rico, and the timing of human arrival on the island. *Journal of Archaeological Science*, 21(2), 273–281.
Bush, M.B., Piperno, D.R., Colinvaux, P.A., De Oliveira, P.E., Krissek, L.A., Miller, M.C., Rowe, W.E., (1992) A 14,300-yr paleoecological profile of a lowland tropical lake in Panama. *Ecological Monographs*, 62(2), 251–275.
Chenoweth, M., (1998) The early 19th century climate of the Bahamas and a comparison with 20th century averages. *Climatic Change*, 40, 577–603.
Chivas, A.R., De Deckker, P., Shelley, J.M.G., (1986) Magnesium and strontium in non-marine ostracod shells as indicators of paleosalinity and paleotemperature. *Hydrobiologia*, 143, 135–142.
Clark, G.M., Horn, S.P., Orvis, K.H., (2002) High-elevation savanna landscapes in the Cordillera Central, Dominican Republic, Hispaniola – Dambos in the Caribbean. *Mountain Research and Development*, 22, 288–295.
Clement, A.C., Seager, R., Cane, M.A., (2000) Suppression of El Niño during the mid-Holocene by changes in the Earth's orbit. *Paleoceanography*, 15(6), 731–737.
Clement, R.M., Horn, S.P., (2001) Pre-Columbian land-use history in Costa Rica: a 3000-year record of forest clearance, agriculture and fires from Laguna Zoncho. *Holocene*, 11(4), 419–426.
Cobb, K.M., Charles, C.D., Cheng, H., Edwards, R.L., (2003) El Niño/Southern Oscillation and tropical Pacific climate during the last millennium. *Nature*, 424(6946), 271–276.
Colinvaux, P.A., De Oliveira, P.E., Moreno, J.E., (1999) Chapter 3: Guide to piston coring of lake sediments. In *Amazon Pollen Manual and Atlas*. Harwood Academic Publishers, Amsterdam. 332 pp.
Collatz, G.J., Berry, J.A., Clark, J.S., (1998) Effects of climate and atmospheric CO2 partial pressure on the global distribution of C-4 grasses: present, past, and future. *Oecologia*, 114(4), 441–454.
Conserva, M.E., Byrne, R., (2002) Late Holocene vegetation change in the Sierra Madre Oriental of central Mexico. *Quaternary Research*, 58(2), 122–129.
Covey, D.L., Hastenrath, S., (1978) Pacific El-Niño phenomenon and Atlantic circulation. *Monthly Weather Review*, 106(9), 1280–1287.
Covich, A., Stuiver, M., (1974) Changes in oxygen-18 as a measure of long-term fluctuations in tropical lake levels and molluscan populations. *Limnology and Oceanography*, 19, 682–691.
Craig, H., (1965) The measurement of oxygen isotope paleotemperatures. In: E. Tongiorgi (Ed.), *Stable Isotopes in Oceanographic Studies and Paleotemperatures*, pp. 9–130. Consiglio Nazionale delle Ricerche, Laboratorio di Geologia Nucleare, Pisa.
Cross, S.L., Baker, P.A., Seltzer, G.O., Fritz, S.C., Dunbar, R.B., (2000) A new estimate of the Holocene lowstand level of Lake Titicaca, central Andes, and implications for tropical palaeohydrology. *Holocene*, 10(1), 21–32.
Curtis, J.H., Brenner, M., Hodell, D.A., (1999) Climate change in the Lake Valencia Basin, Venezuela, approximate to 12,600 yr BP to present. *Holocene*, 9(5), 609–619.
Curtis, J.H., Brenner, M., Hodell, D.A., Balser, R.A., Islebe, G.A., Hooghiemstra, H., (1998) A multi-proxy study of Holocene environmental change in the Maya lowlands of Peten, Guatemala. *Journal of Paleolimnology*, 19(2), 139–159.
Curtis, J.H., Hodell, D.A., (1993) An isotopic and trace element study of ostracods from Lake Miragoane, Haiti: a 10,500 year record of paleosalinity and paleotemperature changes in the Caribbean. *Geophysical Monograph*, 78, 135–152.
Curtis, J.H., Hodell, D.A., Brenner, M., (1996) Climate variability on the Yucatan Peninsula (Mexico) during the past 3500 years, and implications for Maya cultural evolution. *Quaternary Research*, 46(1), 37–47.
Curtis, S., Hastenrath, S., (1995) Forcing of anomalous sea-surface temperature evolution in the tropical Atlantic during Pacific warm events. *Journal of Geophysical Research-Oceans*, 100(C8), 15835–15847.
Darrow, W.K., Zanoni, T., (1991) Hispaniolan pine (*Pinus occidentalis* Swartz) a little known sub-tropical pine of economic potential. *Commonwealth Forestry Review*, 69(2), 133–146.
Davies, S.J., Metcalfe, S.E., MacKenzie, A.B., Newton, A.J., Endfield, G.H., Farmer, J.G., (2004) Environmental changes in the Zirahuén Basin, Michoacan, Mexico, during the last 1000 years. *Journal of Paleolimnology*, 31(1), 77–98.
Dean, W.E., (1974) Determinations of carbonate and organic matter in calcareous sediments and sedimentary rocks by loss on ignition: comparison with other methods. *Journal of Sedimentary Petrology*, 44, 242–248.
Dean, W.E., (1981) Carbonate minerals and organic matter in sediments of modern north temperate hard-water lakes. In: F.G. Ethridge, R.M. Flores (Eds.), *Recent and Ancient Nonmarine Depositional Environments: Models for Exploration*, pp. 213–231. The Society of Economic Paleontologists and Mineralogists, Tulsa, Oklahoma.
Deines, P., (1980) The isotopic composition of reduced organic carbon. In: P. Fritz, J.C. Fontes (Eds.), *Handbook of Environmental Isotope Geochemistry, I*, pp. 329–406. Elsevier, New York.
Delorme, L.D., (1991) Ostracoda. In: J.H. Thorp, A.P. Covich (Eds.), *North American Freshwater Invertebrates*, pp. 691–717. Academic Press, Toronto.
deMenocal, P.B., Ortiz, J., Guilderson, T., Sarnthein, M., (2000) Coherent high- and low-latitude climate variability during the Holocene warm period. *Science*, 288(5474), 2198–2202.
deMenocal, P.B., (2001) Cultural responses to climate change during the Late Holocene. *Science*, 292(5517), 667–673.
Diaz, H.F., Markgraf, V., (2000) *El Niño and the Southern Oscillation: Multi-scale Variability and Global and Regional Impacts*. Cambridge University Press, New York.
Diefendorf, A.F., Patterson, W.P., Mullins, H.T., Tibert, N., Martini, A., (2006) Evidence for high-frequency late Glacial to mid-Holocene (16,800 to 5500 cal yr B.P.) climate variability from oxygen isotope values of Lough Inchiquin, Ireland. *Quaternary Research*, 65, 78–86.
Dull, R.A., (2006) The maize revolution: A view from El Salvador. In: J. Staller, R. Tykot, B. Benz (Eds.), *Histories of Maize: Multidisciplinary Approaches to the Prehistory, Biogeography, Domestication, and Evolution of Maize*, pp. 357–365. Elsevier Press, San Diego.
Ehleringer, J.R., Cerling, T.E., Helliker, B.R., (1997) C-4 photosynthesis, atmospheric CO2 and climate. *Oecologia*, 112(3), 285–299.
Ekdahl, E.J., Teranes, J.L., Guilderson, T.P., Turton, C.L., McAndrews, J.H., Wittkop, C.A., Stoermer, E.F., (2004) Prehistorical record of cultural eutrophication from Crawford Lake, Canada. *Geology*, 32(9), 745–748.
Ely, L.L., Enzel, Y., Baker, V.R., Cayan, D.R., (1993) A 5000-year record of extreme floods and climate change in the southwestern United States. *Science*, 262, 410–412.
Enfield, D.B., (1996) Relationships of inter-American rainfall to tropical Atlantic and Pacific SST variability. *Geophysical Research Letters*, 23(23), 3305–3308.
Enfield, D.B., Alfaro, E.J., (1999) The dependence of Caribbean rainfall on the interaction of the tropical Atlantic and Pacific oceans. *Journal of Climate*, 12(7), 2093–2103.
Enfield, D.B., Mayer, D.A., (1997) Tropical Atlantic sea surface temperature variability and its relation to El Niño Southern Oscillation. *Journal of Geophysical Research-Oceans*, 102(C1), 929–945.
Engstrom, D.R., Nelson, S.R., (1991) Paleosalinity from trace-metals in fossil ostracodes compared with observational records at Devils Lake, North-Dakota, USA. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 83(4), 295–312.
Ericson, D.B., Wollin, G., (1968) Pleistocene climates and chronology in deep-sea sediments. *Science*, 162, 1227–1234.
Faegri, K., Iversen, J., (1989) *Textbook of Pollen Analysis, 4th Edition*. Wiley, New York.
Fedorov, A.V., Philander, S.G., (2000) Is El Niño changing? *Science*, 288(5473), 1997–2002.
Finkel, R.C., Nishiizumi, K., (1997) Beryllium 10 concentrations in the Greenland Ice Sheet Project 2 ice core from 3–40 ka. *Journal of Geophysical Research*, 102, 26699–26706.
Fisher, E., Oldfield, F., Wake, R., Boyle, J., Appleby, P., Wolff, G.A., (2003) Molecular marker records of land use change. *Organic Geochemistry*, 34(1), 105–119.
Fontes, J.C., Gonfiantini, R., (1967) Comportement isotopique au cours de l’evaporation de deux bassins sahariens. *Earth and Planetary Science Letters*, 3, 258–266.
García Arévalo, M.A., Tavares, J., (1978) Presentation (arqueologia de Sanate). *Boletin del Museo del Hombre Dominicano*, 10, 31–44.
Gasse, F., Tehet, R., Durand, A., Gilber, E., Fontes, J.C., (1990) The arid-humid transition in the Sahara and the Sahel during the last deglaciation. *Nature*, 346, 141–146.
Gervais, B.R., MacDonald, G.M., (2001) Modern pollen and stomata deposition in lake surface sediments from across the treeline on the Kola Peninsula, Russia. *Review of Palaeobotany and Palynology*, 114(3–4), 223–237.
Giannini, A., Cane, M.A., Kushnir, Y., (2001a) Interdecadal changes in the ENSO teleconnection to the Caribbean region and the North Atlantic oscillation. *Journal of Climate*, 14(13), 2867–2879.
Giannini, A., Chiang, J.C.H., Cane, M.A., Kushnir, Y., Seager, R., (2001b) The ENSO teleconnection to the tropical Atlantic Ocean: Contributions of the remote and local SSTs to rainfall variability in the tropical Americas. *Journal of Climate*, 14(24), 4530–4544.
Giannini, A., Kushnir, Y., Cane, M.A., (2000) Interannual variability of Caribbean rainfall, ENSO, and the Atlantic Ocean. *Journal of Climate*, 13(2), 297–311.
Gill, R.B., (2000) *The Great Maya Droughts: Water, Life, and Death*. University of New Mexico Press, Albuquerque.
Goman, M., Byrne, R., (1998) A 5000-year record of agriculture and tropical forest clearance in the Tuxtlas, Veracruz, Mexico. *Holocene*, 8(1), 83–89.
Gray, W.M., Landsea, C.W., Mielke, P.W., Berry, K.J., (1994) Predicting Atlantic basin seasonal tropical cyclone activity by 1 June. *Weather and Forecasting*, 9, 103–115.
Grimm, E.C., Jacobson, G.L., Watts, W.A., Hansen, B.C.S., Maasch, K.A., (1993) A 50,000-year record of climate oscillations from Florida and its temporal correlation with the Heinrich events. *Science*, 261(5118), 198–200.
Guiney, J.L., (1999) Preliminary Report: Hurricane Georges 15 September – 01 October 1998. National Hurricane Center, Miami.
Haase-Schramm, A., Bohm, F., Eisenhauer, A., Dullo, W.C., Joachimski, M.M., Hansen, B., Reitner, J., (2003) Sr/Ca ratios and oxygen isotopes from sclerosponges: Temperature history of the Caribbean mixed layer and thermocline during the Little Ice Age. *Paleoceanography*, 18(3), Art. No. 1073, doi:10.1029/2002PA000830.
Haberyan, K.A., Horn, S.P., (1999) A 10,000 year diatom record from a glacial lake in Costa Rica. *Mountain Research and Development*, 19(1), 63–70.
Hastenrath, S., Heller, L., (1977) Dynamics of climatic hazards in Northeast Brazil. *Quarterly Journal of the Royal Meteorological Society*, 103(435), 77–92.
Haug, G.H., Gunther, D., Peterson, L.C., Sigman, D.M., Hughen, K.A., Aeschlimann, B., (2003) Climate and the collapse of Maya civilization. *Science*, 299(5613), 1731–1735.
Haug, G.H., Hughen, K.A., Sigman, D.M., Peterson, L.C., Rohl, U., (2001) Southward migration of the intertropical convergence zone through the Holocene. *Science*, 293(5533), 1304–1308.
Heaton, T.H.E., Holmes, J.A., Bridgwater, N.D., (1995) Carbon and oxygen isotope variations among lacustrine ostracods: Implications for palaeoclimatic studies. *Holocene*, 5(4), 428–434.
Heusser, C.J., (1971) *Pollen and Spores of Chile: Modern Types of the Pteridophyta*. University of Arizona Press, Tucson.
Higuera-Gundy, A., (1991) *Antillean Vegetational History and Paleoclimate Reconstructed from the Paleolimnological Record of Lake Miragoane, Haiti*. Ph.D. dissertation, University of Florida, Gainesville.
Higuera-Gundy, A., Brenner, M., Hodell, D.A., Curtis, J.H., Leyden, B.W., Binford, M.W., (1999) A 10,300 C-14 yr record of climate and vegetation change from Haiti. *Quaternary Research*, 52(2), 159–170.
Hillesheim, M.B., Hodell, D.A., Leyden, B.W., Brenner, M., Curtis, J.H., Anselmetti, F.S., Ariztegui, D., Buck, D.G., Guilderson, T.P., Rosenmeier, M.F., Schnurrenberger, D.W., (2005) Climate change in lowland Central America during the late deglacial and early Holocene. *Journal of Quaternary Science*, 20(4), 363–376.
Hodell, D.A., Brenner, M., Curtis, J.H., (2005a) Terminal Classic drought in the northern Maya lowlands inferred from multiple sediment cores in Lake Chichancanab (Mexico). *Quaternary Science Reviews*, 24(12–13), 1413–1427.
Hodell, D.A., Brenner, M., Curtis, J.H., Guilderson, T., (2001) Solar forcing of drought frequency in the Maya lowlands. *Science*, 292(5520), 1367–1370.
Hodell, D.A., Brenner, M., Curtis, J.H., Medina-Gonzalez, R., Can, E.I.C., Albornaz-Pat, A., Guilderson, T.P., (2005b) Climate change on the Yucatan Peninsula during the Little Ice Age. *Quaternary Research*, 63(2), 109–121.
Hodell, D.A., Curtis, J.H., Brenner, M., (1995) Possible role of climate in the collapse of Classic Maya civilization. *Nature*, 375(6530), 391–394.
Hodell, D.A., Curtis, J.H., Jones, G.A., Higuera-Gundy, A., Brenner, M., Binford, M.W., Dorsey, K.T., (1991) Reconstruction of Caribbean climate change over the past 10,500 years. *Nature*, 352(6338), 790–793.
Holdridge, L.R., Grenke, W.C., Hatheway, W.H., Laing, T., Tosi, J.A., (1971) *Forest Environments in Tropical Life Zones: A Pilot Study*. Pergamon Press, Oxford.
Holmes, J.A., (1997) Recent non-marine Ostracoda from Jamaica, West Indies. *Journal of Micropalaeontology*, 16, 137–143.
Holmes, J.A., (1998) A late Quaternary ostracod record from Wallywash Great Pond, a Jamaican marl lake. *Journal of Paleolimnology*, 19(2), 115–128.
Holmes, J.A., Street-Perrott, F.A., Ivanovich, M., Perrott, R.A., (1995) A Late Quaternary paleolimnological record from Jamaica based on trace-element chemistry of ostracod shells. *Chemical Geology*, 124(1–2), 143–160.
Hooghiemstra, H., (1984) *Vegetational and Climatic History of the High Plain of Bogotá, Colombia: A Continuous Record of the Last 3.5 Million Years*. University of Amsterdam, The Netherlands.
Horn, S.P., (1986) Key to the Quaternary pollen of Costa Rica. *Brenesia*, 25–26, 33–44.
Horn, S.P., (1993) Postglacial vegetation and fire history in the Chirripó páramo of Costa Rica. *Quaternary Research*, 40(1), 107–116.
Horn, S.P., (2006) Pre-Columbian maize agriculture in Costa Rica: Pollen and other evidence from lake and swamp sediments. In: J. Staller, R. Tykot, B. Benz (Eds.), *Histories of Maize: Multidisciplinary Approaches to the Prehistory, Biogeography, Domestication, and Evolution of Maize*, pp.367–380. Elsevier Press, San Diego.
Horn, S.P., (in press) Late Quaternary climate and environmental history from lake and swamp sediments in Central America. J. Bundschuh and G.E. Alvarado (Eds.) *Central America: Geology, Resources, and Hazards*. Taylor and Francis, London.
Horn, S.P., Kennedy, L.M., Orvis, K.H., (2001) Vegetation recovery following a high elevation fire in the Dominican Republic. *Biotropica*, 33(4), 701–708.
Horn, S.P., Orvis, K.H., Kennedy, L.M., Clark, G.M., (2000) Prehistoric fires in the highlands of the Dominican Republic: Evidence from charcoal in soils and sediments. *Caribbean Journal of Science*, 36(1–2), 10–18.
Horn, S.P., Rodgers III, J.R., Orvis, K.H., Northrop, L.A., (1998) Recent land use and vegetation history from soil pollen analysis: testing the potential in the lowland humid tropics. *Palynology*, 22, 167–180.
Horn, S.P., Sanford, R.L., (1992) Holocene fires in Costa Rica. *Biotropica*, 24, 354–361.
Horst, O.H., (1992) Climate and the “encounter” in the Dominican Republic. *Journal of Geography*, 91(5), 205–210.
Huang, Y., Street-Perrott, F.A., Metcalfe, S.E., Brenner, M., Moreland, M., Freeman, K.H., (2001) Climate change as the dominant control on glacial-interglacial variations in C-3 and C-4 plant abundance. *Science*, 293(5535), 1647–1651.
Huang, Y.S., Shuman, B., Wang, Y., Webb, T., Grimm, E.C., Jacobson, G.L., (2006) Climatic and environmental controls on the variation of C-3 and C-4 plant abundances in central Florida for the past 62,000 years. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 237(2–4), 428–435.
Hughen, K., Lehman, S., Southon, J., Overpeck, J., Marchal, O., Herring, C., Turnbull, J., (2004a) C-14 activity and global carbon cycle changes over the past 50,000 years. *Science*, 303(5655), 202–207.
Hughen, K.A., Eglinton, T.I., Xu, L., Makou, M., (2004b) Abrupt tropical vegetation response to rapid climate changes. *Science*, 304(5679), 1955–1959.
Hughen, K.A., Overpeck, J.T., Peterson, L.C., Trumbore, S., (1996) Rapid climate changes in the tropical Atlantic region during the last deglaciation. *Nature*, 380(6569), 51–54.
Hughen, K.A., Southon, J.R., Lehman, S.J., Overpeck, J.T., (2000) Synchronous radiocarbon and climate shifts during the last deglaciation. *Science*, 290(5498), 1951–1954.
Islebe, G.A., Hooghiemstra, H., (1997) Vegetation and climate history of montane Costa Rica since the last glacial. *Quaternary Science Reviews*, 16(6), 589–604.
Islebe, G.A., Hooghiemstra, H., Brenner, M., Curtis, J.H., Hodell, D.A., (1996a) A Holocene vegetation history from lowland Guatemala. *Holocene*, 6(3), 265–271.
Islebe, G.A., Hooghiemstra, H., van’t Veer, R., (1996b) Holocene vegetation and water level history in two bogs of the Cordillera de Talamanca, Costa Rica. *Vegetatio*, 124(2), 155–171.
Ito, E., De Deckker, P., Eggins, S.M., (2003) Ostracodes and their shell chemistry: Implications for paleohydrologic and paleoclimatologic applications. *Paleontological Society Papers*, 9, 119–152.
Ivanochko, T.S., Ganeshram, R.S., Brummer, G.J.A., Ganssen, G., Jung, S.J.A., Moreton, S.G., Kroon, D., (2005) Variations in tropical convection as an amplifier of global climate change at the millennial scale. *Earth and Planetary Science Letters*, 235(1–2), 302–314.
Jacob, J.S., Hallmark, C.T., (1996) Holocene stratigraphy of Cobweb Swamp, a Maya wetland in northern Belize. *Geological Society of America Bulletin*, 108(7), 883–891.
Jirikowic, J.L., Damon, P.E., (1994) The Medieval solar-activity maximum. *Climatic Change*, 26(2–3), 309–316.
Johannessen, S., Hastorf, C.A., (1994) *Corn and Culture in the Prehistoric New World*. Westview Press, Boulder.
Jones, P.D., Briffa, K.R., Barnett, T.P., Tett, S.F.B., (1998) High-resolution palaeoclimatic records for the last millennium: interpretation, integration, and comparison with general circulation model control run temperatures. *Holocene*, 8, 455–471.
Jones, P.D., Osborn, T.J., Briffa, K.R., (2001) The evolution of climate over the last millennium. *Science*, 292(5517), 662–667.
Keatings, K.W., Heaton, T.H.E., Holmes, J.A., (2002) Carbon and oxygen isotope fractionation in non-marine ostracods: Results from a 'natural culture' environment. *Geochimica et Cosmochimica Acta*, 66(10), 1701–1711.
Keegan, W.F., (1985) *Dynamic Horticulturalists: Population Expansion in the Prehistoric Bahamas*. Ph.D. dissertation, University of California, Los Angeles, Los Angeles, CA.
Keegan, W.F., DeNiro, M.J., (1988) Stable carbon- and nitrogen-isotope ratios of bone collagen used to study coral-reef and terrestrial components of prehistoric Bahamian diet. *American Antiquity*, 53(2), 320–336.
Kennedy, L.M., (2003) *Fire and Forest in the Highlands of the Cordillera Central, Dominican Republic: Modern Dynamics and Long-Term History*. Ph.D. dissertation, University of Tennessee, Knoxville, TN.
Kennedy, L.M., Horn, S.P., Orvis, K.H., (2005) Modern pollen spectra from the highlands of the Cordillera Central, Dominican Republic. *Review of Palaeobotany and Palynology*, 137(1–2), 51–68.
Kennedy, L.M., Horn, S.P., Orvis, K.H., (2006) A 4000-year record of fire and forest history from Valle de Bao, Cordillera Central, Dominican Republic. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 231(3–4), 279–290.
Kjellmark, E., (1996) Late Holocene climate change and human disturbance on Andros Island, Bahamas. *Journal of Paleolimnology*, 15(2), 133–145.
Kreutz, K.J., Mayewski, P.A., Meeker, L.D., Twickler, M.S., Whitlow, S.I., Pittalwala, I.I., (1997) Bipolar changes in atmospheric circulation during the Little Ice Age. *Science*, 277, 1294–1296.
Lachniet, M.S., Burns, S.J., Piperno, D.R., Asmerom, Y., Polyak, V.J., Moy, C.M., Christenson, K., (2004) A 1500-year El Niño/Southern Oscillation and rainfall history for the Isthmus of Panama from speleothem calcite. *Journal of Geophysical Research-Atmospheres*, 109, Art. No. D20117, doi:10.1029/2004JD004694.
Lane, C.S., Horn, S.P., Mora, C.I., (2004) Stable carbon isotope ratios in lake and swamp sediments as a proxy for prehistoric forest clearance and crop cultivation in the Neotropics. *Journal of Paleolimnology*, 32(4), 375–381.
Lane, C.S., Horn, S.P., Mora, C.I., Orvis, K.H., (2006) High resolution analyses of sediment characteristics, delta-13C values, and microfossils in late Holocene sediments from Laguna Castilla, Dominican Republic (abstract). *Geological Society of America Abstracts with Programs*, 38.
Lane, C.S., Horn, S.P., Taylor, Z.P., Mora, C.I., (in press) Assessing the scale of prehistoric human impact in the Neotropics using stable carbon isotope analyses of lake sediments. *Latin American Antiquity*.
League, B.L., Horn, S.P., (2000) A 10000 year record of Páramo fires in Costa Rica. *Journal of Tropical Ecology*, 16, 747–752.
Lewis, W.M., Weibezahn, F.H., (1981) Chemistry of a 7.5-m sediment core from Lake Valencia, Venezuela. *Limnology and Oceanography*, 26, 907–924.
Leyden, B.W., (1985) Late Quaternary aridity and Holocene moisture fluctuations in the Lake Valencia Basin, Venezuela. *Ecology*, 66(4), 1279–1295.
Leyden, B.W., Brenner, M., Dahlin, B.H., (1998) Cultural and climatic history of Coba, a lowland Maya city in Quintana Roo, Mexico. *Quaternary Research*, 49(1), 111–122.
Leyden, B.W., Brenner, M., Hodell, D.A., Curtis, J.H., (1994) Orbital and internal forcing of climate on the Yucatan Peninsula for the past ca. 36 ka. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 109(2–4), 193–210.
Linsley, B., Dunbar, R., Wellington, G.M., Mucciarone, D., (1994) A coral-based reconstruction of Intertropical Convergence Zone variability over Central America since 1707. *Journal of Geophysical Research*, 99, 9977–9994.
Liogier, A.H., (1981) Ecosistemas de montañas en la República Dominicana. *Anuario*, 5, 87–102.
Lister, G.S., (1988) Stable isotopes from lacustrine ostracoda as tracers for continental palaeoenvironments. In: P. De Deckker, J.P. Colin, J.P. Peypouquet (Eds.), *Ostracoda in the Earth Sciences*, pp. 201–218. Elsevier, Amsterdam.
Liu, K.B., Reese, C.A., Thompson, L.G., (2005) Ice-core pollen record of climatic changes in the central Andes during the last 400 yr. *Quaternary Research*, 64(2), 272–278.
Lucas, W.J., (1983) Photosynthetic assimilation of exogenous HCO₃⁻ by aquatic plants. *Annual Review of Plant Physiology*, 34, 71–104.
Lucke, A., Schleser, G.H., Zolitschka, B., Negendank, J.F.W., (2003) A Lateglacial and Holocene organic carbon isotope record of lacustrine palaeoproductivity and climatic change derived from varved lake sediments of Lake Holzmaar, Germany. *Quaternary Science Reviews*, 22(5–7), 569–580.
Ludlow-Wiechers, B., Alvarado, J.L., Aliphat, M., (1983) El polen de Zea (maíz y teosinte): perspectivas para conocer el origen de maíz. *Biotica*, 8, 235–258.
Luna, S., Figueroa, J., Baltazar, M., Gomez, R., Townsend, R., Schoper, J., (2001) Maize pollen longevity and distance isolation requirements for effective pollen control. *Crop Science*, 41, 1551–1557.
Malmgren, B.A., Kennett, J.P., (1976) Principal component analysis of Quaternary planktic foraminifera in the Gulf of Mexico: Paleoclimatic applications. *Marine Micropaleontology*, 1, 299–306.
Malmgren, B.A., Winter, A., Chen, D., (1998) El-Niño-Southern Oscillation and North Atlantic Oscillation control of climate in Puerto Rico. *Journal of Climate*, 11, 2713–2717.
Markgraf, V., D'Antoni, H.L., (1978) *Pollen Flora of Argentina: Modern Spore and Pollen Types of Pteridophyta, Gymnospermae, and Angiospermae*. University of Arizona Press, Tucson.
Martin, P.H., Fahey, T.J., (2006) Fire history along environmental gradients in the subtropical pine forests of the Cordillera Central, Dominican Republic. *Journal of Tropical Ecology*, 22, 289–302.
Mayewski, P.A., Rohling, E.E., Stager, J.C., Karlen, W., Maasch, K.A., Meeker, L.D., Meyerson, E.A., Gasse, F., van Kreveld, S., Holmgren, K., Lee-Thorp, J., Rosqvist, G., Rack, F., Staubwasser, M., Schneider, R.R., Steig, E.J., (2004) Holocene climate variability. *Quaternary Research*, 62(3), 243–255.
Mayle, F.E., Burbridge, R., Killeen, T.J., (2000) Millennial-scale dynamics of southern Amazonian rain forests. *Science*, 290, 2291–2294.
McAndrews, J., Berti, A.A., Norris, G., (1973) *Key to the Quaternary Pollen and Spores of the Great Lakes Region*. Royal Ontario Museum, Toronto.
McAndrews, J.H., Ramcharan, E.K., (2003) Holocene pollen diagram from Lake Antoine, Grenada. *Congress of the International Union for Quaternary Research*, 16, 122.
Metcalfe, S.E., O'Hara, S.L., Caballero, M., Davies, S.J., (2000) Records of Late Pleistocene-Holocene climatic change in Mexico – a review. *Quaternary Science Reviews*, 19(7), 699–721.
Milliman, J.D., Syvitski, J.P.M., (1992) Geomorphic/tectonic control of sediment discharge to the ocean: the importance of small mountainous rivers. *Journal of Geology*, 100, 525–544.
Moberg, A., Sonechkin, D.M., Holmgren, K., Datsenko, N.M., Karlen, W., (2005) Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data. *Nature*, 433, 613–617.
Mohtadi, M., Romero, O.E., Kaiser, J., Hebbeln, D. (in press) Cooling of the southern high latitudes during the Medieval Period and its effect on ENSO. *Quaternary Science Reviews*.
Moore, P.D., Webb, J.A., (1978) *An Illustrated Guide to Pollen Analysis*. Hodder and Stoughton, London.
Moore, P.D., Webb, J.A., Collinson, M.E., (1991) *Pollen Analysis, 2nd Edition*. Blackwell Scientific Publications, Oxford.
Morton, J., (1987) Rose apple. In: J.F. Morton (Ed.), *Fruits of Warm Climates*, pp. 383–386. J.F. Morton, Miami, FL.
Moy, C.M., Seltzer, G.O., Rodbell, D.T., Anderson, D.M., (2002) Variability of El Niño Southern Oscillation activity at millennial timescales during the Holocene epoch. *Nature*, 420, 162–165.
Myers, R., O’Brien, J., Mehlman, D., Bergh, C., (2004) *Fire Management Assessment of the Highland Ecosystems of the Dominican Republic*. The Nature Conservancy, Arlington, VA.
Myster, R.W., Fernandez, D.S., (1995) Spatial gradients and patch structure on two Puerto Rican landslides. *Biotropica*, 27, 149–159.
Myster, R.W., Walker, L.R., (1997) Plant successional pathways on Puerto Rican landslides. *Journal of Tropical Ecology*, 13, 165–173.
Newsom, L.A., (2006) Caribbean maize: Current research on the archaeology, history, and biogeography. In: J. Staller, R.H. Tykot, B. Benz (Eds.), *Histories of Maize: Multidisciplinary Approaches to the Prehistory, Linguistics, Biogeography, Domestication, and Evolution of Maize*, pp. 325–335. Elsevier, San Diego.
Newsom, L.A., Deagan, K.A., (1994) *Zea mays* in the West Indies: the archaeological and early historic record. In: S. Johannessen, C.A. Hastorf (Eds.), *Corn and Culture in the Prehistoric New World*, pp. 203–217. Westview Press, Boulder, CO.
Newsom, L.A., Pearsall, D.M., (2003) Trends in Caribbean Island Archaeobotany. In: P.E. Minnis (Ed.), *People and Plants in Ancient Eastern North America*, pp. 347–412. Smithsonian Institution, Washington, D.C.
Newsom, L.A., Wing, E.S., (2004) *Land and Sea: Native American Uses of Biological Resources in the Caribbean*. University of Alabama Press, Tuscaloosa.
Newton, A., Thunell, R., Stott, L., (2006) Climate and hydrographic variability in the Indo-Pacific Warm Pool during the last millennium. *Geophysical Research Letters*, 33(19), Art. No. L19710, doi: 10.1029/2006GL027234.
Northrop, L.A., Horn, S.P., (1996) PreColumbian agriculture and forest disturbance in Costa Rica: Palaeoecological evidence from two lowland rainforest lakes. *Holocene*, 6(3), 289–299.
Nyberg, J., Kuijpers, A., Malmgren, B.A., Kunzendorf, H., (2001) Late Holocene changes in precipitation and hydrography recorded in marine sediments from the northeastern Caribbean Sea. *Quaternary Research*, 56(1), 87–102.
Nyberg, J., Malmgren, B.A., Kuijpers, A., Winter, A., (2002) A centennial-scale variability of tropical North Atlantic surface hydrography during the late Holocene. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 183(1–2), 25–41.
Oana, S., Deevey, E.S., (1960) Carbon 13 in lake waters and its possible bearing on paleolimnology. *American Journal of Science*, 258A, 253–272.
Ogrinc, N., Lojen, S., Faganeli, J., (2002) A mass balance of carbon stable isotopes in an organic-rich methane-producing lacustrine sediment (Lake Bled, Slovenia). *Global and Planetary Change*, 33, 57–72.
O'Hara, S.L., Metcalfe, S.E., Street-Perrott, F.A., (1994) On the arid margin: the relationship between climate, humans and the environment; a review of evidence from the highlands of Central Mexico. *Chemosphere*, 29(5), 965–981.
Oldfield, F., Wake, R., Boyle, J., Jones, R., Nolan, S., Gibbs, Z., Appleby, P., Fisher, E., Wolff, G., (2003) The late-Holocene history of Gormire Lake (NE England) and its catchment: a multiproxy reconstruction of past human impact. *Holocene*, 13(5), 677–690.
O’Leary, M.H., (1981) Carbon isotope fractionation in plants. *Phytochemistry*, 20, 553–567.
Ortega, E., Guerrero, J., (1981) *Estudio de Cuatro Nuevos Sitios Paleoarcaicos en la Isla de Santo Domingo*. Ediciones Museo del Hombre Dominicano, Santo Domingo.
Ortiz Aguilu, J.J., Melendez, J.R., Jacome, A.P., Maiz, M.M., Colberg, M.L., (1991) Intensive agriculture in pre-Columbian West Indies: The case for terraces. In: A. Cummins, P. King (Eds.), *Proceedings of the Fourteenth Congress of the International Association for Caribbean Archaeology*, pp. 278–285. Barbados Museum and Historical Society, Barbados.
Orvis, K.H., (2003) The highest mountain in the Caribbean: controversy and resolution via GPS. *Caribbean Journal of Science*, 39, 378–380.
Orvis, K.H., Clark, G.M., Horn, S.P., Kennedy, L.M., (1997) Geomorphic traces of Quaternary climates in the Cordillera Central, Dominican Republic. *Mountain Research and Development*, 17(4), 323–331.
Orvis, K.H., Horn, S.P., Clark, G.M., Kennedy, L.M., (2005) Synchronous bog initiation circa 4500 cal. yr. BP at high elevation sites in Hispaniola (abstract). *2005 Abstracts Volume*. The Association of American Geographers, 101st Annual Meeting, Philadelphia, Pennsylvania. AAG Executive Council, Washington D.C. Compact disk.
Pagán-Jiménez, J.R., Rodríguez López, M.A., Chanlatte Baik, L.A., Narganes Storde, Y., (2005) La temprana introducción y uso de algunas plantas domésticas, silvestres y cultivos en Las Antillas precolombinas. *Diálogo Antropológico*, 3, 7–33.
Panamerican Union, (1967) *Reconocimiento y Evaluación de los Recursos Naturales de la República Dominicana*, Washington, D.C.
Pearsall, D.M., (2002) Analysis of charred botanical remains from the Tutu site, U.S. Virgin Islands. In: E. Righter (Ed.), *The Tutu Archaeological Village Site: A Multidisciplinary Case Study in Human Adaptation*, pp. 109–134. Routledge, London.
Pentecost, A., Andrews, J.E., Dennis, P.F., Marca-Bell, A., Dennis, S., (2006) Charophyte growth in small temperate water bodies: Extreme isotopic disequilibrium and implications for the palaeoecology of shallow marl lakes. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 240(3–4), 389–404.
Peros, M.C., Reinhardt, E.G., Davis, A.M., (2007) A 6000 cal yr record of ecological and hydrological changes from Laguna de la Leche, north coastal Cuba. *Quaternary Research*, 67(1), 69–82.
Petersen, J.B., (1997) Taino, Island Carib, and prehistoric Amerindian economies in the West Indies: tropical forest adaptations to island environments. In: S.M. Wilson (Ed.), *The Indigenous People of the Caribbean*, pp. 118–139. University Press of Florida, Gainesville.
Peterson, L.C., Haug, G.H., (2006) Variability in the mean latitude of the Atlantic Intertropical Convergence Zone as recorded by riverine input of sediments to the Cariaco Basin (Venezuela). *Palaeogeography, Palaeoclimatology, Palaeoecology*, 234(1), 97–113.
Peterson, L.C., Haug, G.H., Hughen, K.A., Rohl, U., (2000) Rapid changes in the hydrologic cycle of the tropical Atlantic during the last glacial. *Science*, 290(5498), 1947–1951.
Peterson, L.C., Overpeck, J., Kipp, N., Imbrie, J., (1991) A high-resolution late Quaternary upwelling record from the anoxic Cariaco Basin, Venezuela. *Paleoceanography*, 6, 99–119.
Piperno, D.R., Bush, M.B., Colinvaux, P., (1991) Paleoecological perspectives on human adaptation in Central Panama, II. The Holocene. *Geoarchaeology*, 6, 227–250.
Poore, R.Z., Dowsett, H.J., Verardo, S., Quinn, T.M., (2003) Millennial- to century-scale variability in Gulf of Mexico Holocene climate records. *Paleoceanography*, 18(2) Art. No. 1048, doi: 10.1029/2002PA000868.
Poore, R.Z., Quinn, T.M., Verardo, S., (2004) Century-scale movement of the Atlantic Intertropical Convergence Zone linked to solar variability. *Geophysical Research Letters*, 31(12) Art. No. L12214, doi: 10.1029/2004GL019940.
Poveda, G., Mesa, O.J., (1997) Feedbacks between hydrological processes in tropical South America and large-scale ocean-atmospheric phenomena. *Journal of Climate*, 10(10), 2690–2702.
Proctor, V.W., Griffin III, D.G., Hotchkiss, A.T., (1971) A synopsis of the genus *Chara*, series Gymnobasalia (subsection Wildenowia RDW). *American Journal of Botany*, 1971, 894–901.
Pubellier, M., Vila, J.M., Boisson, D., (1991) North Caribbean neotectonic events - the trans-Haitian fault system - Tertiary record of an oblique transcurrent shear zone uplifted in Hispaniola. *Tectonophysics*, 194(3), 217–236.
Purper, I., (1974) *Cytheridella boldii* Purper, sp. nov. (Ostracoda) from Venezuela and a revision of the Genus *Cytheridella* Daday, 1905. *Anais da Academia Brasileira de Ciencias*, 46, 636–662.
Quinn, W.H., (1992) A study of Southern Oscillation-related climatic activity for AD 622–1900 incorporating Nile River flood data. In: H.F. Diaz, V. Markgraf (Eds.), *El Niño Historical and Paleoclimate Aspects of the Southern Oscillation*, pp. 119–149. Cambridge University Press, Cambridge.
Raynor, G.S., Ogden, E.C., Hayes, K.V., (1972) Dispersion and deposition of corn pollen from experimental sources. *Agronomy Journal*, 64, 420–427.
Reimer, P.J., Baillie, M.G.L., Bard, E., Bayliss, A., Beck, J.W., Bertrand, C.J.H., Blackwell, P.G., Buck, C.E., Burr, G.S., Cutler, K.B., Damon, P.E., Edwards, R.L., Fairbanks, R.G., Friedrich, M., Guilderson, T.P., Hogg, A.G., Hughen, K.A., Kromer, B., McCormac, G., Manning, S., Ramsey, C.B., Reimer, R.W., Remmele, S., Southon, J.R., Stuiver, M., Talamo, S., Taylor, F.W., van der Plicht, J., Weyhenmeyer, C.E., (2004a) IntCal04 terrestrial radiocarbon age calibration, 0–26 cal kyr BP. *Radiocarbon*, 46(3), 1029–1058.
Reimer, P.J., Brown, T.A., Reimer, R.W., (2004b) Discussion: Reporting and calibration of post-bomb C-14 data. *Radiocarbon*, 46, 1299–1304.
Rein, B., Luckge, A., Sirocko, F. (2004) A major Holocene ENSO anomaly during the Medieval period. *Geophysical Research Letters*, 31, doi:10.1029/2004GL020161.
Remus, B.A., Horn, S.P., Orvis, K.H., Stork, A.J., Lane, C.S., Kennedy, L.M., (2006) Searching for pine stomata in circum-Caribbean lake sediments (abstract). *2006 Abstracts Volume*. The Association of American Geographers, 102nd Annual Meeting, Chicago, Illinois. AAG Executive Council, Washington D.C. Compact disk.
Rittenour, T.M., Brigham-Grette, J., Mann, M.E., (2000) El Niño-like climate teleconnections in New England during the late Pleistocene. *Science*, 288(5468), 1039–1042.
Rogers, J.C., (1984) The association between the North Atlantic oscillation and the Southern Oscillation in the Northern Hemisphere. *Monthly Weather Review*, 112, 1999–2015.
Rosenmeier, M.F., Hodell, D.A., Brenner, M., Curtis, J.H., Guilderson, T.P., (2002a) A 4000-year lacustrine record of environmental change in the southern Maya lowlands, Peten, Guatemala. *Quaternary Research*, 57(2), 183–190.
Rosenmeier, M.F., Hodell, D.A., Brenner, M., Curtis, J.H., Martin, J.B., Anselmetti, F.S., Ariztegui, D., Guilderson, T.P., (2002b) Influence of vegetation change on watershed hydrology: implications for paleoclimatic interpretation of lacustrine delta O-18 records. *Journal of Paleolimnology*, 27(1), 117–131.
Roubik, D.W., Moreno, J.E., (1991) *Pollen and Spores of Barro Colorado Island*. Monographs in Systematic Botany, Missouri Botanical Gardens.
Rouse, I., (1992) *The Tainos: Rise and Decline of the People who Greeted Columbus*. Yale University Press, New Haven.
Ruddiman, W.F., Mix, A.C., (1993) The North and Equatorial Atlantic at 9000 and 6000 yr B.P. In: H.E. Wright, J. Kutzbach, T. Webb III, W.F. Ruddiman, F.A. Street-Perrott, P. Bartlein (Eds.), *Global Climates since the Last Glacial Maximum*, pp. 94–124. University of Minnesota Press, Minneapolis, MN.
Salgado-Labouriau, M.L., (1980) A pollen diagram of the Pleistocene-Holocene boundary of Lake Valencia, Venezuela. *Reviews of Palaeobotany and Palynology*, 30, 297–312.
Salgado-Labouriau, M.L., (1987) Late Quaternary aridity in the Lake Valencia Basin (Venezuela). *Ecology*, 68(5), 1551–1553.
Schmidt, M.W., Spero, H.J., Lea, D.W., (2004) Links between salinity variation in the Caribbean and North Atlantic thermohaline circulation. *Nature*, 428, 160–163.
Schubert, C., Medina, E., (1982) Evidence of Quaternary glaciation in the Dominican-Republic - some implications for Caribbean paleoclimatology. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 39(3–4), 281–294.
Sluyter, A., (1997a) Analysis of maize (*Zea mays* subsp. *mays*) pollen: normalizing the effects of microscope-slide mounting media on diameter determinations. *Palynology*, 21, 35–39.
Sluyter, A., (1997b) Regional, Holocene records of the human dimension of global change: Sea-level and land-use change in prehistoric Mexico. *Global and Planetary Change*, 14(3–4), 127–146.
Smith, F.A., Walker, N.A., (1980) Photosynthesis by aquatic plants: effects of unstirred layers in relation to assimilation of CO$_2$ and HCO$_3^-$ and to carbon isotope discrimination. *New Phytologist*, 86, 245–259.
Speer, J.H., Orvis, K.H., Grissino-Mayer, H.D., Kennedy, L.M., Horn, S.P., (2004) Assessing the dendrochronological potential of *Pinus occidentalis* Swartz in the Cordillera Central of the Dominican Republic. *Holocene*, 14, 563–569.
Staller, J., Tykot, R., Benz, B., (2006) *Histories of Maize: Multidisciplinary Approaches to the Prehistory, Linguistics, Biogeography, Domestication, and Evolution of Maize*. Elsevier, San Diego.
Stewart, G.R., Turnbull, M.H., Schmidt, S., Erskine, P.D., (1995) $^{13}$C natural abundance in plant communities along a rainfall gradient: A biological integrator of rainfall availability. *Functional Plant Biology*, 22, 51–55.
Stockmarr, J. (1971) Tablets with spores used in absolute pollen analysis. *Pollen et Spores*, 13, 615–621.
Stokes, A.V., (1998) *A Biogeographic Survey of Prehistoric Human Diet in the West Indies Using Stable Isotopes*. Ph.D. dissertation, University of Florida, Gainesville, FL.
Street-Perrott, F.A., Ficken, K.J., Huang, Y.S., Eglinton, G., (2004) Late quaternary changes in carbon cycling on Mt. Kenya, East Africa: an overview of the delta C-13 record in lacustrine organic matter. *Quaternary Science Reviews*, 23(7–8), 861–879.
Street-Perrott, F.A., Hales, P.E., Perrott, R.A., Fontes, J.C., Switsur, V.R., Pearson, A., (1993) Late Quaternary palaeolimnology of a tropical marl lake: Wallywash Great Pond, Jamaica. *Journal of Paleolimnology*, 9, 3–22.
Street-Perrott, F.A., Huang, Y.S., Perrott, R.A., Eglinton, G., Barker, P., Ben Khelifa, L., Harkness, D.D., Olago, D.O., (1997) Impact of lower atmospheric carbon dioxide on tropical mountain ecosystems. *Science*, 278(5342), 1422–1426.
Stuiver, M., (1970) Oxygen and carbon isotope ratios of fresh-water carbonates as climatic indicators. *Journal of Geophysical Research*, 75(27), 5247–5257.
Stuiver, M., (1975) Climate versus changes in 13C content of the organic component of lake sediments during the late Quaternary. *Quaternary Research*, 5, 251–262.
Stuiver, M., Braziunas, T.F., (1989) Atmospheric 14C and century-scale solar oscillations. *Nature*, 338, 405–408.
Stuiver, M., Braziunas, T.F., Becker, B., Kromer, B., (1991) Climatic, solar, oceanic, and geomagnetic influences on late-glacial and Holocene atmospheric 14C/12C change. *Quaternary Research*, 35, 1–24.
Stuiver, M., Reimer, P.J., (1993) Extended 14C database and revised CALIB 3.0 14C age calibration program. *Radiocarbon*, 35, 215–230.
Stuiver, M., Reimer, P.J., Braziunas, T.F., (1998) High precision radiocarbon age calibration for terrestrial and marine samples. *Radiocarbon*, 40, 1127–1151.
Talbot, M.R., (1990) A review of the palaeohydrological interpretation of carbon and oxygen isotopic ratios in primary lacustrine carbonates. *Chemical Geology (Isotope Geoscience Section)*, 80, 261–279.
Talbot, M.R., Johannessen, T., (1992) A high resolution palaeoclimate record for the last 27,500 years in tropical West Africa from the carbon and nitrogen isotopic composition of organic matter. *Earth and Planetary Science Letters*, 110, 23–37.
Taylor, M.A., Enfield, D.B., Chen, A.A., (2002) Influence of the tropical Atlantic versus the tropical Pacific on Caribbean rainfall. *Journal of Geophysical Research-Oceans*, 107(C9) Art. No. 3127, doi: 10.1029/2001JC001097.
Tedesco, K., Thunell, R., (2003a) High resolution tropical climate record for the last 6,000 years. *Geophysical Research Letters*, 30(17) Art. No. 1891, doi:10.1029/2003GL017959
Tedesco, K.A., Thunell, R.C., (2003b) Seasonal and interannual variations in planktonic foraminiferal flux and assemblage composition in the Cariaco Basin, Venezuela. *Journal of Foraminiferal Research*, 33(3), 192–210.
Telford, R.J., Heegaard, E., Birks, H.J.B., (2004a) All age-depth models are wrong: but how badly? *Quaternary Science Reviews*, 23(1–2), 1–5.
Telford, R.J., Heegaard, E., Birks, H.J.B., (2004b) The intercept is a poor estimate of a calibrated radiocarbon age. *Holocene*, 14(2), 296–298.
Thomason, J.M., Lane, C.S., Horn, S.P., Orvis, K.H., (2007) Modern and fossil ostracods and charophytes in lake sediments from Las Lagunas, Dominican Republic (abstract). To appear in *2007 Abstracts Volume*. The Association of American Geographers, 103rd Annual Meeting, San Francisco, California. AAG Executive Council, Washington D.C. Compact disk.
Thompson, L.G., Mosley-Thompson, E., Davis, M.E., Lin, P.N., Henderson, K.A., Coledai, J., Bolzan, J.F., Liu, K.B., (1995) Late-glacial stage and Holocene tropical ice core records from Huascaran, Peru. *Science*, 269(5220), 46–50.
Tolentino, L., Peña, M., (1998) Inventario de la vegetación y uso de la tierra en la República Dominicana. *Moscosoa*, 10, 179–203.
Turney, C., Baillie, M., Clemens, S., Brown, D., Palmer, J., Pilcher, J., Reimer, P.J., Leuschner, H.H., (2005) Testing solar forcing of pervasive Holocene climate cycles. *Journal of Quaternary Science*, 20(6), 511–518.
van Klinken, G.J., (1991) *Dating and Dietary Reconstruction by Isotopic Analysis of Amino Acids in Fossil Bone Collagen - With Special Reference to the Caribbean*. Ph.D. dissertation, University of Groningen, The Netherlands.
van Loon, H., Rogers, J.C., (1978) Seesaw in winter temperatures between Greenland and northern Europe. Part I: General description. *Monthly Weather Review*, 106, 296–310.
von Grafenstein, U., Erlenkeuser, H., Trimborn, P., (1999) Oxygen and carbon isotopes in modern fresh-water ostracod valves: assessing vital offsets and autecological effects of interest for palaeoclimate studies. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 148(1–3), 133–152.
Wahl, D., Byrne, R., Schreiner, T., Hansen, R., (2006) Holocene vegetation change in the northern Peten and its implications for Maya prehistory. *Quaternary Research*, 65(3), 380–389.
Walker, L.R., Zarin, D.J., Fetcher, N., Myster, R.W., Johnson, A.H., (1996) Ecosystem development and plant succession on landslides in the Caribbean. *Biotropica*, 28, 566–576.
Watanabe, T., Winter, A., Oba, T., (2001) Seasonal changes in sea surface temperature and salinity during the Little Ice Age in the Caribbean Sea deduced from Mg/Ca and O-18/O-16 ratios in corals. *Marine Geology*, 173(1–4), 21–35.
Watts, W.A., (1969) A pollen diagram from Mud Lake Marion county north-central Florida. *Geological Society of America Bulletin*, 80(4), 631–642.
Watts, W.A., (1971) Postglacial and interglacial vegetation history of southern Georgia and central Florida. *Ecology*, 52(4), 676–690.
Watts, W.A., (1975) Late Quaternary record of vegetation from Lake Annie, south-central Florida. *Geology*, 3(6), 344–346.
Watts, W.A., (1980) The late Quaternary vegetation history of the southeastern United-States. *Annual Review of Ecology and Systematics*, 11, 387–409.
Watts, W.A., Hansen, B.C., (1994) Pre-Holocene and Holocene pollen records of vegetation history from the Florida peninsula and their climatic implications. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 109(2–4), 163–176.
Watts, W.A., Hansen, B.C.S., Grimm, E.C., (1992) Camel Lake - a 40000-yr record of vegetational and forest history from northwest Florida. *Ecology*, 73(3), 1056–1066.
Watts, W.A., Stuiver, M., (1980) Late Wisconsin climate of northern Florida and the origin of species-rich deciduous forest. *Science*, 210(4467), 325–327.
Whitehead, D.R., Langham, E.J., (1965) Measurement as a means of identifying fossil maize pollen. *Bulletin of the Torrey Botanical Club*, 92, 7–20.
Whitmore, T.J., Brenner, M., Curtis, J.H., Dahlin, B.H., Leyden, B.W., (1996) Holocene climatic and human influences on lakes of the Yucatan Peninsula, Mexico: An interdisciplinary, palaeolimnological approach. *Holocene*, 6(3), 273–287.
Wilson, S.M., (1990) *Hispaniola: Caribbean Chiefdoms in the Age of Columbus*. The University of Alabama Press, Tuscaloosa.
Wilson, S.M., (1993) The cultural mosaic of the indigenous Caribbean. *Proceedings of the British Academy of Sciences*, 81, 37–66.
Wilson, S.M., (1997) *The Indigenous People of the Caribbean*. University Press of Florida, Gainesville.
Wing, E.S., (2001) Native American use of animals in the Caribbean. In: C.A. Woods, F.E. Sergile (Eds.), *Biogeography of the West Indies: Patterns and Perspectives*, pp. 481–518. CRC Press, Boca Raton, FL.
Winkler, M.G., Sanford, P.R., Kaplan, S.W., (2001) Hydrology, vegetation, and climate change in the southern Everglades during the Holocene. *Bulletins of American Paleontology*, 361, 57–97.
Winter, A., Ishioroshi, H., Watanabe, T., Oba, T., Christy, J., (2000) Caribbean sea surface temperatures: two-to-three degrees cooler than present during the Little Ice Age. *Geophysical Research Letters*, 27, 3365–3368.
Wood, R.D., (1967) *Charophytes of North America: A Guide to the Species of Charophyta of North America, Central America, and the West Indies*. Stella’s Printing, West Kingston, Rhode Island.
Wood, R.D., Imahori, K., (1964) *A Revision of the Characeae*. Weinheim, New York.
Zarikian, C.A.A., Swart, P.K., Gifford, J.A., Blackwelder, P.L., (2005) Holocene paleohydrology of Little Salt Spring, Florida, based on ostracod assemblages and stable isotopes. *Palaeogeography, Palaeoclimatology, Palaeoecology*, 225(1–4), 134–156.
APPENDIX A
I used the following schedule to concentrate pollen in sediment samples from the Las Lagunas sediment cores. This processing schedule was developed by Drs. Sally Horn and John C. Rodgers III, following standard palynological techniques (Berglund, 1986). This procedure takes about six hours to complete with a batch of six samples, and must be performed in a laboratory fume hood. We use an IEC benchtop centrifuge with a 6 x 15 ml swinging bucket rotor set to rotate at about 2500 RPM. All centrifuge times are 2 minutes with time measured from initial start up. Gloves and goggles should be worn for all chemical steps and use of HF also requires use of a respirator and face shield.
1. Place wet sediment (sample volumes for the uppermost, watery sediments of Laguna Castilla and Laguna Salvador were 2.5 cc, and sample volumes for deeper sediments were 0.5 cc) in pre-weighed, 15 ml polypropylene centrifuge tubes and reweigh.
2. Add 1 tablet *Lycopodium* spores to each tube (Batch # 710961 = 13,911 spores/tablet). These exotic marker spores allow absolute pollen concentrations to be calculated later (see the sketch following this protocol).
3. Add a few ml 10% HCl, and let reaction proceed; slowly fill tubes until there is about 10 ml in each tube. Stir well, remove stirring sticks, and place in hot water bath for 3 minutes. Remove from bath, centrifuge, and decant.
4. Add 10 ml hot distilled water, stir, centrifuge and decant. Repeat for a total of two washes.
5. Add about 10 ml 5% KOH, stir, remove stirring sticks, and place in boiling bath for 10 minutes; stir again after 5 minutes. Remove from bath and stir again. Centrifuge and decant.
6. Add 10 ml hot distilled water, stir, centrifuge, and decant. Repeat for a total of 4 washes.
7. Fill tubes about \( \frac{1}{2} \) way with distilled water, stir, and pour through 125 \( \mu \)m mesh screen, collecting liquid in a labeled beaker underneath. Use a squirt bottle of distilled water to wash the screen, and to wash out any material remaining in the centrifuge tube.
8. Centrifuge down material in beaker by repeatedly pouring beaker contents into correct tube, centrifuging, and decanting.
9. Add 8 ml of 49–52% HF and stir. Place tubes in boiling bath for 20 minutes, stirring after 10 minutes. Remove from bath and centrifuge and decant.
10. Add 10 ml hot Alconox solution (made by dissolving 4.9 cm\(^3\) commercial Alconox® powder in 1000 ml distilled water). Stir well and let sit for 5 minutes. Centrifuge and decant.
11. Add more than 10 ml hot distilled water to each tube, so that top of water comes close to top of tube. Stir, centrifuge, and decant. Check top of tubes for oily residue after decanting. If present, remove carefully with wadded paper towel. Also at this time, examine the tubes to see if they still contain silica. If silica is present, repeat steps 9–11. Assuming that
no samples need retreatment with HF, continue washing with hot distilled water as above for a total of 3 hot water washes.
12. Add 10 ml of glacial acetic acid, stir, centrifuge, and decant.
13. Make acetolysis mixture by mixing together 9 parts acetic anhydride and 1 part concentrated sulfuric acid. Add about 8 ml to each tube and stir. Remove stirring sticks and place in boiling bath for 5 minutes. Stir again after 2.5 minutes. Remove from bath and centrifuge and decant.
14. Add 10 ml glacial acetic acid, stir, centrifuge, and decant.
15. Add 10 ml hot distilled water, stir, centrifuge, and decant.
16. Add 10 ml 5% KOH, stir, remove stirring sticks, and heat in vigorously boiling bath for 5 minutes. Stir again after 2.5 minutes, then remove sticks. After 5 minutes, centrifuge and decant.
17. Add 10 ml hot distilled water, stir, centrifuge, and decant. Repeat for a total of 3 washes.
18. After decanting last water wash, use vortex mixer for 20 seconds to mix sediment in tube.
19. Add 1 drop 1% safranin stain to each tube. Use vortex mixer for 10 seconds. Add distilled water to make 10 ml. Stir, centrifuge, and decant.
20. Add a few ml tertiary-butyl alcohol (TBA), use vortex mixer for 20 seconds. Fill to 10 ml with TBA, stir, centrifuge, and decant.
21. Add 10 ml TBA, stir, centrifuge, and decant.
22. Vibrate samples using the vortex mixer to mix the small amount of TBA left in the tubes with the microfossils. Carefully transfer the liquid to
precleaned and labeled glass vials. Centrifuge down residue in vials and decant. Repeat as necessary until all material is transferred from tubes to vials.
23. Add several drops of silicone oil (2000 cs viscosity) to each vial, more if a lot of residue remains. Stir with a clean toothpick.
24. Place uncorked samples in a dust-free cabinet to let the residual TBA evaporate.
25. Stir again after 1 hour, adding more silicone oil if necessary.
26. Check the samples after 24 hours; if there is no alcohol smell, cap the vials. If the alcohol smell persists, allow more time for evaporation.
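For converting raw counts to absolute pollen concentrations, the *Lycopodium* spike added in step 2 supports the standard exotic-marker calculation (Stockmarr, 1971). The sketch below illustrates that arithmetic only; it is not part of the laboratory protocol, and the function name and example counts are hypothetical.

```python
# Exotic-marker arithmetic enabled by the Lycopodium spike in step 2
# (cf. Stockmarr, 1971). Illustrative only; names and counts are hypothetical.

SPORES_PER_TABLET = 13_911  # Batch #710961, as given in step 2

def pollen_concentration(fossil_counted, lycopodium_counted,
                         tablets_added=1, sample_volume_cc=0.5):
    """Estimate fossil pollen grains per cc of wet sediment:
    (fossil counted / markers counted) * markers added / sample volume."""
    if lycopodium_counted == 0:
        raise ValueError("No marker spores counted; concentration undefined.")
    markers_added = tablets_added * SPORES_PER_TABLET
    return fossil_counted / lycopodium_counted * markers_added / sample_volume_cc

# Example: 300 fossil grains counted against 150 Lycopodium spores
# in a 0.5 cc deep-sediment sample:
print(round(pollen_concentration(300, 150)))  # 55644 grains per cc
```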
Chad Steven Lane was born in Santa Maria, California in 1979. He attended the University of Denver in Denver, Colorado and graduated in 2001 *magna cum laude* with a Bachelor of Science degree in Environmental Sciences and a minor in Physics. Chad was introduced to the subjects of paleoecology, paleoclimatology, and biogeography by his undergraduate advisor, Dr. Donald Sullivan, who graciously invited Chad to work in his laboratory. Chad’s initial research focused on lacustrine sedimentary records of climate and vegetation change over the last 20,000 years collected from the lakes of Grand Mesa, Colorado.
In 2001 Chad entered the graduate program in geography at the University of Tennessee, and began studying paleoecology and paleoclimatology in Costa Rica under the direction of Dr. Sally Horn in Geography and Dr. Claudia Mora of the Department of Earth and Planetary Science. Chad pursued several research projects as a master’s student, including exploring the potential for using stable carbon isotopes in sediment records as a proxy for prehistoric agriculture and tropical forest clearance in the Costa Rican lowlands, and developing a method for the controlled laboratory production of reference charcoal. Chad’s master’s thesis, co-directed by Drs. Horn and Mora, focused on stable carbon isotope signatures in the sediments of a glacial lake within the high-elevation páramo surrounding Cerro Chirripó, Costa Rica’s highest mountain peak. As a master’s student, Chad was funded as a teaching assistant in Geography and as a research assistant with the Global Environmental Change Research Group,
composed of faculty from the departments of Geography, Earth and Planetary Sciences, and Ecology and Evolutionary Biology.
In July 2002 Chad assisted Dr. Sally Horn and Dr. Kenneth Orvis with field work in the Dominican Republic funded by the National Geographic Society. This work included a trip to the small town of Las Lagunas in the mid-elevations of the Cordillera Central to core Laguna Castilla. Chad deeply enjoyed his time in Las Lagunas and found the location a compelling study site. His desire to conduct Ph.D. research in the area led to further field work at the site funded by the Global Environmental Change Research Group, and ultimately to the NSF grant project at Las Lagunas and Saladillo of which his dissertation is a part. During his dissertation work, Chad was funded by the NSF grant and as an instructor and head graduate teaching assistant for introductory physical geography courses. He also had Yates and Hilton Smith Fellowships from the University of Tennessee. In the future, Chad plans to remain in the world of academia where he can continue to research environmental change in the circum-Caribbean and other regions of the world, and to share his enthusiasm for science with undergraduate and graduate students.
Prevalence of Degenerative Imaging Findings in Lumbar Magnetic Resonance Imaging Among Young Adults
Jani Takatalo, MSc,* Jaro Karppinen, MD, PhD,† Jaakko Niinimäki, MD,‡ Simo Taimela, MD, PhD,§ Simo Näyhä, MD, PhD,¶ Marjo-Riitta Järvelin, MD, PhD,‖ Eero Kyllönen, MD, PhD,* and Osmo Tervonen, MD, PhD#
Key words: magnetic resonance imaging, lumbar spine, prevalence, intervertebral disc degeneration, Modic changes, anular tears, herniations, young adults. Spine 2009; 34:1716–1721
Low back pain (LBP) is already common among children, and its prevalence increases with age,¹⁻⁴ with lifetime prevalence reaching up to 50% among 20-year-old men.² The etiology of LBP in adolescence remains largely unknown. Some authors claim that adolescent LBP is primarily related to psychosocial factors,³⁻⁶ but a discogenic origin has also been suggested, since intervertebral disc degeneration (DD) at 15 years of age was strongly associated with recurrent or continuous LBP at 18 and 23 years.⁷ Furthermore, the likelihood of LBP increases with higher grade of DD in adults.⁸ Studies of the prevalence of magnetic resonance imaging (MRI) findings and their relationships with LBP have mainly been conducted in adult populations. Although LBP is quite common among young adults, the prevalence of MRI findings at this age is virtually unknown.
DD is known to be common among asymptomatic subjects.⁹,¹⁰ In addition to DD, other lumbar degenerative changes on MRI have been reported to occur in adult populations without LBP. The prevalence of DD (56%–72%), disc bulging (20%–81%), protrusions (27%–33%), extrusions (0%–18%), high intensity zone (HIZ) lesions (6%–33%), and anular tears (56%) ranges widely in asymptomatic subjects.¹⁰⁻¹² Moreover, the prevalence of endplate and bone marrow changes (Modic changes) varies from 2% to 7% among asymptomatic subjects.¹² We investigated the prevalence of DD, anular tears, degree of disc displacement, and Modic changes in lumbar MRI in a population-based cohort of young adults.
Materials and Methods
Study Population
The study population consisted of a subcohort of the Northern Finland Birth Cohort 1986 (NFBC 1986), which originally included data on 9479 children born in Northern Finland between July 1, 1985 and June 30, 1986. In 2003 to 2004, a postal questionnaire about symptoms and personal characteristics was sent to 2969 cohort members living within 100 km of the city of Oulu. The respondents were included in the Oulu Back Study (OBS). The response rate was 67% (1987 responses). All respondents to the postal questionnaire were invited to participate in a clinical and laboratory examination (isometric strength measurements, body sway, blood samples, weight, height, and a questionnaire about symptoms and personal
characteristics), performed in 2005 to 2006. All 874 who were examined at the age of 19 were further invited to lumbar spine MRI about 2 years later (Figure 1). The present study was conducted in accordance with The Declaration of Helsinki. The ethical committee of Oulu University Hospital approved the study protocol before its initiation.
**Magnetic Resonance Imaging**
MRI scans were obtained with a 1.5-T unit (Signa, General Electric, Milwaukee, WI) using an imaging protocol of sagittal T1-weighted (440/14 [repetition time msec/echo time msec]) spin echo and T2-weighted (3960/116) fast spin echo sequences covering the entire lumbar spine. The number of excitations was 1 for T1-weighted images and 4 for T2-weighted images. Echo train length for T2-weighted images was 29. The image matrix was $256 \times 224$ for T1-weighted images and $448 \times 224$ for T2-weighted images. Field of view was $28 \times 28$ cm and slice thickness 4 mm with a 1-mm interslice gap.
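As a quick consistency check, the acquisition parameters above imply the following nominal voxel dimensions. This is an illustrative sketch only; it assumes the stated matrices span the full 28-cm field of view as frequency x phase and ignores any scanner-side interpolation, which the text does not describe.

```python
# Nominal voxel geometry implied by the stated protocol (illustrative;
# assumes matrix = frequency x phase over the full field of view).

FOV_MM = 280.0   # 28 x 28 cm field of view
SLICE_MM = 4.0   # slice thickness, with a 1 mm interslice gap

for name, (freq, phase) in {"T1": (256, 224), "T2": (448, 224)}.items():
    dx, dy = FOV_MM / freq, FOV_MM / phase
    print(f"{name}: {dx:.2f} x {dy:.2f} x {SLICE_MM:.0f} mm voxels")
# T1: 1.09 x 1.25 x 4 mm voxels
# T2: 0.62 x 1.25 x 4 mm voxels
```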
**Disc Degeneration**
The degree of DD was graded on T2-weighted images with a modified Pfirrmann scale as grade 1 (normal shape, no horizontal bands, distinction between nucleus and annulus clear), grade 2 (nonhomogeneous shape with horizontal bands, some blurring between nucleus and annulus), grade 3 (nonhomogeneous shape with blurring between nucleus and annulus, annulus shape still recognizable), grade 4 (nonhomogeneous shape with hypointensity, annulus shape not intact and distinction between nucleus and annulus impossible, disc height usually decreased), and grade 5 (same as grade 4 but with collapsed disc space). The modification of the Pfirrmann scale was that hyperintensity and isointensity of the intervertebral disc relative to cerebrospinal fluid were not used as criteria for grades 1 through 3, because cerebrospinal fluid was always hyperintense to the discs with the MR sequences used. Grades 1 to 2 were classified as normal discs, while grades 3, 4, and 5 were defined as degenerated (Figure 2).
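The dichotomization used here reduces to a simple threshold on the assigned grade. A minimal sketch follows; the grade assignment itself is a radiological judgement and is not modelled.

```python
# Dichotomization used in this study: modified Pfirrmann grades 1-2 are
# "normal", grades 3-5 are "degenerated". Grade assignment itself is a
# radiological judgement and is outside the scope of this sketch.

def is_degenerated(pfirrmann_grade: int) -> bool:
    if pfirrmann_grade not in range(1, 6):
        raise ValueError("Modified Pfirrmann grades run from 1 to 5.")
    return pfirrmann_grade >= 3

assert not is_degenerated(2) and is_degenerated(4)
```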
**Modic Changes**
The vertebral endplate and bone marrow changes were graded as absent, type I (hypointense in T1-weighted sequences and hyperintense in T2-weighted sequences), type II (hyperintense in both sequences) (Figure 3), and type III (hypointense in both sequences), as presented by Modic et al.¹⁴
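Modic typing amounts to a lookup on the qualitative T1/T2 signal pattern described above. A minimal sketch, with the signal descriptors simplified to hypothetical "hypo"/"hyper" strings:

```python
# Modic type from the qualitative T1/T2 signal pattern (Modic et al., 1988).
# Inputs are simplified to the strings "hypo" and "hyper".

MODIC_TYPES = {
    ("hypo", "hyper"): "I",    # oedema-like marrow change
    ("hyper", "hyper"): "II",  # fatty marrow change
    ("hypo", "hypo"): "III",   # sclerotic marrow change
}

def modic_type(t1_signal: str, t2_signal: str) -> str:
    return MODIC_TYPES.get((t1_signal, t2_signal), "absent/unclassified")

print(modic_type("hyper", "hyper"))  # II
```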
**Annular Tears**
A radial tear of the intervertebral disc is an abnormal finding on MRI in which nucleus pulposus protrudes into the horizontally separated annulus fibrosus. It is defined as a hyperintense linear area extending from the nucleus pulposus towards the outer annulus fibrosus on T2-weighted images (Figure 3).¹⁵⁻¹⁷ HIZ lesions are defined as a high intensity signal located in the substance of the posterior annulus fibrosus, brighter than the nucleus pulposus (Figure 2).¹⁸
**Degree of Disc Displacement**
Disc displacement was subdivided into bulging, protrusion (subligamentous herniation), and extrusion. Bulging was defined as displacement of disc material involving more than 50% of the disc circumference. When the displacement involved less than 50% of the circumference, the finding was classified as a herniation, either a protrusion (subligamentous herniation) or an extrusion. A herniation was evaluated as a protrusion if the greatest distance between the edges of the disc material beyond the disc space was less than the distance between the edges of its base in the same plane, and as an extrusion if the diameter of the displaced fragment exceeded that of its base in any one plane. This classification of disc displacement was published by Fardon et al.¹⁷ The total number of herniations (subligamentous herniations and extrusions combined) and the number of extrusions were recorded as 2 variables.
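These rules can be summarized as a short decision procedure. The sketch below assumes the geometric measurements (circumference fraction, fragment and base diameters) have already been made on the images; it only encodes the thresholds stated above.

```python
# Decision rules for disc displacement as described above (after Fardon &
# Milette, 2001). Assumes measurements were already taken from the images.

def classify_displacement(circumference_fraction: float,
                          fragment_diameter_mm: float,
                          base_diameter_mm: float) -> str:
    """circumference_fraction: share of the disc circumference displaced (0-1)."""
    if circumference_fraction > 0.5:
        return "bulge"
    # Less than 50% of the circumference: a herniation, split by whether the
    # displaced fragment is wider than its base in any one plane.
    if fragment_diameter_mm > base_diameter_mm:
        return "extrusion"
    return "protrusion"  # subligamentous herniation

print(classify_displacement(0.6, 0.0, 0.0))    # bulge
print(classify_displacement(0.2, 8.0, 10.0))   # protrusion
print(classify_displacement(0.2, 12.0, 9.0))   # extrusion
```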
**Data Analysis**
The MR images were read by 3 observers. One (J.T.) evaluated the Modic changes and DD, while 2 more experienced observers (J.K., J.N.) evaluated, in addition to these, annular tears (HIZ lesions and radial tears separately) and the degree of disc displacement. Inter-rater agreement in the evaluation of DD and Modic changes was assessed with kappa statistics, with each level (or endplate, in the case of Modic changes) treated as an independent finding. Inter-rater agreement was found to be very good ($\kappa = 0.841$) for DD and moderate ($\kappa = 0.513$) for Modic changes. To evaluate selection bias, the MRI participants were compared with the rest of the OBS survey respondents ($n = 1424$) and the rest of the invited subjects ($n = 311$). The 95% confidence intervals for the prevalence of symptoms were based on the binomial distribution. The $\chi^2$ test and Fisher exact test were used in the statistical analyses stratified by gender. Data analyses were performed using SPSS software for Windows (version 15.0, SPSS Inc., Chicago, IL) and the NAG Fortran Mark 21 software library (The Numerical Algorithms Group Ltd., Oxford, UK).
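For illustration, the two statistics named here can be computed with common Python tools; the original analysis used SPSS and the NAG library, so this sketch is not the authors' code. The ratings and counts below are hypothetical, and the Clopper-Pearson construction is one common choice of exact binomial interval; the paper does not specify which interval was used.

```python
# Illustrative computations of kappa and an exact binomial confidence
# interval. Hypothetical data; not the SPSS/NAG analysis of the paper.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import beta

# Per-disc codes from two hypothetical raters (e.g. degeneration categories)
rater1 = [1, 1, 0, 2, 1, 0, 0, 3]
rater2 = [1, 2, 0, 2, 1, 0, 1, 3]
print(f"kappa = {cohen_kappa_score(rater1, rater2):.3f}")

def binomial_ci(k: int, n: int, conf: float = 0.95):
    """Exact (Clopper-Pearson) confidence interval for a proportion k/n."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Example: 21 subjects with a herniation out of 558 scanned
lo, hi = binomial_ci(21, 558)
print(f"prevalence {21/558:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```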
**Results**
Subjects were scanned with MR imaging at Oulu University Hospital between November 2005 and February 2008, while they were 20 to 22 years old (mean: 21.2 years). Scans were obtained from 558 (325 women, 233 men) subjects (64% of the invited). The male participants in the MRI study were more likely than the rest of the male OBS survey respondents to report LBP (45% vs. 41%) and to have consulted a physician, physiotherapist, nurse, or other health professional because of LBP during the past 6 months (8% vs. 5%; Table 1).
Table 1. Characteristics of the MRI Study Participants and the Rest of the OBS Survey Respondents

| | MRI Study Participants | | | Rest of the OBS Survey Respondents | | |
|------------------------|--------|------|------|--------|------|------|
| | Female | Male | All | Female | Male | All |
| Gender | 58% | 42% | 100% | 53% | 47% | 100% |
| Weight (mean; kg) | 57 | 72 | 63 | 59 | 71 | 65 |
| Height (mean; cm) | 165 | 179 | 170 | 165 | 177 | 171 |
| BMI (mean; kg/m²) | 21 | 22 | 22 | 22 | 23 | 22 |
| Smoking | | | | | | |
| Non-smoker | 60% | 67% | 63% | 51% | 52% | 52% |
| Current smoker* | 19% | 17% | 18% | 28% | 31% | 30% |
| Sitting† (mean; h) | 7.6 | 8.1 | 7.2 | 8.5 | 9.5 | 9.0 |
| Physical activity‡ | 13% | 23% | 17% | 7% | 15% | 11% |
| Reporting LBP§ | 58% | 45% | 52% | 56% | 41% | 49% |
| Consultation for LBP¶ | 6% | 8% | 7% | 6% | 5% | 5% |
*Smoking on 5 to 7 days per week.
†Total mean self-reported sitting hours (leisure time and motor vehicle) per day.
‡Exercising at least 3 hours a week on leisure time.
§“Reporting LBP” (LBP = low back pain) includes individuals who reported LBP but had not consulted a physician, physiotherapist, nurse, or other health professional because of LBP during the past 6 months. ¶“Consultation for LBP” includes those who had consulted a physician, physiotherapist, nurse, chiropractor, or any other health care professional because of LBP.
**Disc Degeneration**
The number of evaluated intervertebral discs was 2789, of which 373 (13.4%) were degenerated. One disc was not evaluated due to a previous operation. At least one degenerated disc occurred in 47.2% of all subjects. Men had a significantly higher prevalence of lumbar DD (at least one level degenerated) compared with women (54.5% vs. 42.5%, P = 0.006) (Table 2).
**Figure 2.** HIZ lesions (arrows) at L3–L4 and L4–L5 levels of an individual (female) with the 3 lowest lumbar discs showing degenerative signal loss without marked height reduction in T2-weighted image.
**Figure 3.** Modic type 2 change (arrowheads) and radial tear (arrow) at L5–S1 level in the same individual in T1- (left) and T2-weighted images.
Table 2. Percentages (Numbers) of Degenerative Imaging Findings in Males and Females and the Gender Difference in Percentage Together With Its 95% Confidence Interval (CI)
| | Male (n = 233) | Female (n = 325) | Difference % (CI) |
|------------------|---------------|-----------------|-------------------|
| DD | 55% (127) | 43% (130) | 12.5 (7–20) % |
| Bulge | 26% (61) | 24% (78) | 2 (0–10) % |
| Herniation | 5.6% (13) | 2.5% (8) | 3.1 (0–6.5) % |
| Radial tear | 10.7% (25) | 8.0% (26) | 2.7 (0–7.7) % |
| HIZ | 4.3% (10) | 8.6% (28) | 4.3 (0.3–8.3) % |
| Modic changes | 1.7% (4) | 1.2% (4) | 0.5 (0–1.5) % |
Degenerated discs occurred typically at the 2 lowest lumbar levels (L4–L5 and L5–S1), representing 85% of all degenerated discs, and men had a significantly higher prevalence at both levels (P = 0.048 and P = 0.009, respectively) (Figure 4). In addition, the prevalence of DD per disc was significantly higher in men than women (15.5% vs. 11.9%, P = 0.02). Almost half (47%) of the degenerated discs at L4–L5 and L5–S1 were grade 4, in contrast to 29% at the higher levels. Grade 5 degeneration was not detected at all. Overall, grade 4 degenerated discs were more commonly observed in women than men (49% vs. 39%, P = 0.027).
Of all subjects scanned, 95 (17%) had multiple level DD, i.e., 2 or more degenerated discs. The prevalence of multiple level degeneration was significantly higher in men than women (21% vs. 14%, P = 0.036); the most common pattern was a combination of DD at L4–L5 and L5–S1 (65% of all cases with multiple level DD).
**Degree of Disc Displacement**
A quarter of subjects had at least one bulging disc. The prevalence of bulging discs did not differ between genders (Table 2). Disc bulges were typically observed at the 3 lowest intervertebral discs being highest at L5–S1 level in both genders (Figure 5). The prevalence of disc bulging was 6% from all discs analyzed, and highest at L5–S1 (18.1%). Of all subjects with bulging discs, 6% of men and 7% of women had multiple disc bulges.
Herniations (subligamentous herniations and extrusions combined) were rare among the scanned subjects, at 3.8% (Table 2). Herniations occurred either at L4–L5 or L5–S1 (24% vs. 76%, respectively; Figure 5). Only 1 female and 1 male subject had an extrusion, both located at L5–S1. Herniations were slightly more prevalent in men than women (5.6% vs. 2.5%, P = 0.047). No multiple herniations were found.
**Annular Tears**
Of all subjects, 9.1% had at least 1 radial tear with no significant differences between genders (Table 2). Only 1 female subject had multiple radial tears at L4–L5 and L5–S1. Typically radial tears were located at the 2 lowest intervertebral discs (Figure 5).
The prevalence of HIZ lesions was slightly lower than that of radial tears, at 1.5% of all evaluated discs. Of the subjects, 6.8% had at least one HIZ lesion. The prevalence of HIZ lesions was significantly higher in women
than men (8.6% vs. 4.3%, $P = 0.046$) (Table 2). Four women had multiple lesions. HIZ lesions were typically observed at L4–L5 or L5–S1 (Figure 5).
**Modic Changes**
The prevalence of Modic changes (at least 1) was 1.4%, with no significant difference between genders. Of all Modic changes, 7 (64%) were of type I, while 4 type II changes were observed. Ten of 11 findings were adjacent to a grade 4 degenerated disc. One Modic change (type I) was observed adjacent to a grade 3 disc at the lower L3 endplate. Moreover, in 6 of 9 subjects, Modic changes occurred on only one side of the degenerated disc. Multiple findings were observed in 3 subjects. All Modic changes adjacent to a grade 4 disc were at the 2 lowest levels.
**Discussion**
The results of this study indicate that DD and bulging are common MRI findings in the lumbar spine among young adults aged 20 to 22 years. Radial tears and HIZ lesions were frequent as well, while protrusions and Modic changes were relatively rare. The prevalence of extrusions was very low at this age. Women had significantly higher prevalence of HIZ lesions whereas men had significantly higher prevalence of DD and herniations.
A straightforward comparison with earlier research is not possible due to the lack of studies in the same age group. Salminen *et al.* studied the prevalence of DD in adolescence and reported that 31% and 42% of subjects had at least 1 level with DD at the age of 15 and 18, respectively. In a Danish study, 21% of 13-year-old school children were reported to have at least one disc with decreased signal intensity in the lumbar spine. Thus, our finding (47% prevalence of DD) is in accordance with the earlier results if we assume that the prevalence of degenerative findings increases with age. Our 13% prevalence of DD per analyzed disc is lower than the 20% reported in an adult population by Weishaupt *et al.*, most likely due to the young age of the scanned subjects in our study. A higher prevalence of DD across all discs in men has not been reported earlier. In addition, multiple-level DD was more frequent in men. Interestingly, women had a higher prevalence of grade 4 DD.
The prevalence of DD found here was highest at the 2 lowest lumbar levels, in accordance with observations by others. Among Danish children, HIZ lesions were most likely at L5–S1 and in boys, whereas no significant difference between the 2 lowest levels or genders was observed in the 40-year-old Danish population. Our results showed that HIZ lesions occurred significantly more often in women. The prevalence of HIZ lesion per disc was lower here than in earlier studies among older subjects. We found a few multiple HIZ lesions, as also reported earlier. Furthermore, in some individuals HIZ lesions co-occurred with radial tears, as observed earlier.
Previous studies have reported prevalences of Modic changes ranging from 12% to 58%, depending on whether subjects were symptomatic or not. Comparison of these studies to the present one is difficult because our subjects were so young. In the 13-year-old Danish population, only 2 Modic changes were observed, 1 at L1 and 1 at S1, giving a prevalence of 0.5%, whereas in the present study it was 1.4%. We found mostly type I Modic changes. The L4–L5 and L5–S1 levels are reportedly those typically affected, in accordance with our results. However, unlike earlier reports, we found no gender association with Modic changes.
The strength of the present study is the large general population-based cohort of young adults. Despite their healthier lifestyle habits, the male MRI study participants had a higher prevalence of LBP compared with the rest of the male OBS survey respondents. This may have some impact on the generalizability of our results to the general population, the extent of which can be determined only after studying the association between LBP and sciatica symptoms and MRI findings. The invitations to the MR imaging were sent only to those who participated in a clinical examination in 2005 to 2006. No marked selection bias was found between the scanned subjects and the invited nonparticipants. In the present study, the kappa statistics showed inter-rater agreement to be very good for DD but only moderate for Modic changes. Thus, DD can be graded quite reliably with only little experience in evaluating MR images, while evaluation of Modic changes requires more experience. In practice, the most commonly misevaluated Modic changes were Schmorl's nodes with a halo of bone marrow signal change around them.
Gene therapy has been suggested as a potential treatment method for DD in the near future.²⁴⁻²⁷ However, as our results have shown, the prevalence of DD is high already among young adults. Although DD at a young age has been recognized as a risk factor for LBP in early adulthood,⁷ not all subjects with DD at a young age will suffer chronic or recurrent LBP. As knowledge about the pathophysiology of DD increases, we hope it will become possible to recognize vulnerable individuals who are likely to develop chronic disabling LBP. As almost half of the population of young adults have DD on MRI, this is not yet possible. Future studies are needed to assess the associations between these MRI abnormalities and LBP, sciatica symptoms, and limitations in activities of daily living.
In summary, lumbar DD and bulges are already very common at the age of 20. Modic changes and disc herniations are, however, relatively rare at this age. DD and herniations are more common in men than women, whereas HIZ lesions are found more frequently in women.
### Key Points
- This large (n = 558) population-based cross-sectional MRI study investigated the prevalence of degenerative findings of the lumbar spine among young adult members of a birth cohort.
- DD and bulging were common whereas Modic changes and herniations were relatively rare (1.4% and 3.8% of subjects, respectively) at 21 years of age. Almost half of these young Finnish adults had at least one degenerated disc, and a quarter had a bulging disc.
- DD at single or multiple levels and herniation were more frequent in men (55% vs. 43%, 21% vs. 14%, and 5.6% vs. 2.5%, respectively), whereas HIZ lesions were more prevalent in women (8.6% vs. 4.3%).
- The prevalence of the Modic changes was 1.4%, without gender difference, type I being more common than type II.
### References
1. Taimela S, Kujala UM, Salminen JJ, et al. The prevalence of low back pain among children and adolescents in a nationwide, cohort-based questionnaire survey in Finland. *Spine* 1997;22:1132–6.
2. Leboeuf-Yde C, Kyvik KO. At what age does low back pain become a common problem? A study of 29,424 individuals aged 12–41 years. *Spine* 1998;23:228–34.
3. Wedderkopp N, Leboeuf-Yde C, Andersen LB, et al. Back pain reporting pattern in a Danish population-based sample of children and adolescents. *Spine* 2001;26:1879–83.
4. Jones A, Clarke A, Freeman BJ, et al. The Modic classification: inter- and intraobserver error rates in clinical practice. *Spine* 2005;30:1867–70.
5. Balagué F, Skovron ML, Nordin M, et al. Low back pain in schoolchildren: a study of familial and psychological factors. *Spine* 1995;20:1265–70.
6. Watson KD, Papageorgiou AC, Jones GT, et al. Low back pain in schoolchildren: the role of mechanical and psychosocial factors. *Arch Dis Child* 2002;87:12–7.
7. Salminen JJ, Erkintalo MO, Pentti J, et al. Recurrent low back pain and early disc degeneration in the young. *Spine* 1999;24:1316–21.
8. Buirski G, Silberstein M. The symptomatic lumbar disc in patients with low-back pain. Magnetic resonance imaging appearances in both a symptomatic and control population. *Spine* 1993;18:1808–11.
9. Boden SD, Davis DO, Dina TS, et al. Abnormal magnetic-resonance scans of the lumbar spine in asymptomatic subjects: a prospective investigation. *J Bone Joint Surg Am* 1990;72:403–8.
10. Jensen MC, Brant-Zawadzki MN, Obuchowski N, et al. Magnetic resonance evaluation of the lumbar spine in people without back pain. *N Engl J Med* 1994;331:69–73.
11. Stadnik TW, Lee RR, Coen HL, et al. Annular tears and disc herniation: prevalence and contrast enhancement on MR images in the absence of low back pain or sciatica. *Radiology* 1998;206:49–55.
12. Weishaupt D, Zanetti M, Hodler J, et al. MR imaging of the lumbar spine: prevalence of intervertebral disk extrusion and sequestration, nerve root compression, end plate abnormalities, and osteoarthritis of the facet joints in asymptomatic volunteers. *Radiology* 1998;209:661–6.
13. Pfirrmann CW, Metzdorf A, Zanetti M, et al. Magnetic resonance classification of lumbar disc degeneration. *Spine* 2001;26:1873–8.
14. Modic MT, Steinberg P, Ross J, et al. Degenerative disk disease: assessment of changes in vertebral body marrow with MR imaging. *Radiology* 1988;166:193–9.
15. Yu SW, Sether LA, Ho PS, et al. Tears of the annulus fibrosus: correlation between MR and pathologic findings in cadavers. *AJNR Am J Neuroradiol* 1988;9:367–70.
16. Yu S, Haughton VM, Sether LA, et al. Criteria for classifying normal and degenerated lumbar intervertebral disks. *Radiology* 1989;170:523–6.
17. Fardon DF, Milette PC. The Combined task forces of the North American Spine Society, American Society of Spine Radiology, and American Society of Neuroradiology. Nomenclature and classification of lumbar disc pathology. *Spine* 2001;26:E93–114.
18. Aprill C, Bogduk N. High-intensity zone: a diagnostic sign of painful lumbar disc on magnetic resonance imaging. *Br J Radiol* 1992;65:361–9.
19. Kjaer P, Leboeuf-Yde C, Sorensen JS, et al. An epidemiologic study of MRI and low back pain in 13-year-old children. *Spine* 2005;30:798–806.
20. Braithwaite I, White J, Saifuddin A, et al. Vertebral end-plate (Modic) changes on lumbar spine MRI: correlation with pain reproduction at lumbar discography. *Eur Spine J* 1998;7:363–8.
21. Kjaer P, Leboeuf-Yde C, Korsholm L, et al. Magnetic resonance imaging and low back pain in adults: a diagnostic imaging study of 40-year-old men and women. *Spine* 2005;30:1173–80.
22. Rankine JJ, Gill KP, Hutchinson CE, et al. The clinical significance of the high-intensity zone on lumbar spine magnetic resonance imaging. *Spine* 1999;24:1491–3.
23. Jensen TS, Karppinen J, Sorensen JS, et al. Vertebral endplate signal (Modic) changes: a systematic literature review of prevalence and association with non-specific low back pain. *Eur Spine J* 2008;17:1407–22.
24. Karchevsky M, Schweitzer ME, Carrino JA, et al. Reactive endplate marrow changes: a systematic morphologic and epidemiologic evaluation. *Skeletal Radiol* 2005;34:125–9.
25. Chen Y. Orthopedic applications of gene therapy. *J Orthop Sci* 2001;6:199–207.
26. Chadderdon RC, Shimer AL, Gilbertson LG, et al. Advances in gene therapy for intervertebral disc degeneration. *Spine J* 2004;4:S341–7.
27. Levicoff EA, Gilbertson LG, Kang JD. Gene therapy for disc repair. *Spine J* 2005;5:S287–96.
Summary: In this report, the results from the three kinds of evaluation of the HANDS software are brought together in order to reach a common evaluation and conclusion. The report offers conclusions and recommendations from HANDS for a future research agenda. It also presents a plan for further exploitation and co-operation among the partners after the project period.
Contact details:
Project Co-ordinator: Professor Peter Øhrstrøm
Organisation: Aalborg University
Tel: +45 9940 9015 Fax: +45 9815 9434
E-mail: email@example.com
Project website address: http://hands-project.eu
Revision history:
Preliminary version, October 24, 2011.
Revised, Nov. 3, 2011
Revised, Nov. 5, 2011
Revised, Nov. 8, 2011
Revised, Nov. 19, 2011
Table of Contents
Testing the Hands tools: Bringing three strands together
Integrated Test Results and Conclusions
Towards a future research agenda
A plan for the further exploitation and co-operation
The formal organizational structure of HANDS Open
References
Testing the Hands tools: Bringing three strands together
One of the major themes in the research papers and in the deliverables is the description of how the HANDS project may lead to results which are useful for teenagers with autism in their everyday life. The expectation and the goal of the project is that teenagers with autism will be able to benefit substantially from software systems like the HANDS tools in their daily routines, and that the proper use of such tools can contribute significantly to the integration of young people with autism in society. It is, however, still an open question to what extent such goals (including the obvious socio-economic and societal implications of their realisation) can in fact be attained. From a more general point of view, the expectation is that the HANDS project will lead to a deeper understanding of the potential of using software tools for such social purposes. A plan for the further exploitation and co-operation among the partners after the project period is presented below. The HANDS project has also made it clear that there are a number of problems which should be studied in more detail.
In the following, we shall first give an overview of the general methodological setup used in HANDS. To do so, we describe how the interaction between the three lines of research has been organized within the project, and explain how a constructive dialogue between researchers from the traditions of cognitive psychology, education studies and persuasive design studies can give rise to a common conclusion and a very rich evaluation of the potential of the HANDS toolset.
1.1 An overview of the HANDS Research Strategy
During the first reporting period, the design and implementation of prototype 1 of the HANDS tools were prepared with inspiration and ideas from the three scientific traditions (cognitive psychology, education studies, and persuasive design). First, the system requirements were formulated from the three perspectives by the three respective university partners (ELTE, LSB, AAU). This was based not only on theoretical considerations; the formulation of the system requirements was also carried out in dialogue with the four partner schools.
The system requirements from the university partners should be seen as input to a partner forum in which the schools were given a central role. This partner forum (discussion group) was led by AAU researcher Morten Aagaard. As chairman of the group, it has been his obligation to lead the discussions in the group and to make sure that the discussions end in a reasonable agreement. It is important to remember that the choice of the resulting set of requirements should not only reflect the theoretical perspectives from which the three university partners are working. It has been even more important for the consortium that the requirements reflect the ethical and educational values held at the four partner schools. In this way the process can - to some extent - be said to be inspired by the idea of user participatory design and also by the idea of value sensitive design. In fact these
perspectives have been seen as more and more important during the project period. The consortium has renamed the partner forum as The User Participatory Design Group (UPDG) in order to make the emphasis on such ideas even stronger.
The figure below illustrates the general idea. It is evident that the partner discussion group (UPDG) has had a very important role to play in the process. The work in the discussion group has not in itself given rise to any detailed description. However, it is fair to say that the work in the discussion group takes the form of a rational debate aiming at consensus in the group.
Figure 1: The overall procedure used in the system design and system evaluation within the Hands project. This procedure involves clear aspects of user participatory design and also clear aspects of value sensitive design.
The testing of the HANDS tools focuses mainly on what goes on at the schools and, in general, on the interaction between the individual teenager and his or her teacher. The data are collected at the schools according to criteria decided by the university partners. Crucially, the deliverables D6.3.1 and D6.4.1 are catalogues of all test results and other evaluation data, seen as raw data. These data are later analyzed from the perspectives of psychology, education, and persuasive technology, respectively. These perspectives are, of course, very different. However, each analysis has to be brought into the partner discussion group (UPDG). When the analyses are presented as part of the rational debate in the UPDG, a certain homogeneity emerges, which makes it possible to formulate a general evaluation uniting the results of the evaluation activities across the evaluation sites. Again, this is an important result of the partner discussions.
In the next chapter, we shall show how the results from the three kinds of evaluation can be brought together in order to obtain common conclusions. In chapter 3, we shall present the HANDS consortium's attempts at formulating a research agenda related to the HANDS project. One of the first major steps in this respect will be the collection of results and open research problems in a book on HANDS. In chapter 4, we shall present a plan for further exploitation and co-operation among the HANDS partners after the end of the HANDS project.
**Integrated Test Results and Conclusions**
The results of the tests have different kinds of implications. First, some rather general conclusions regarding the potential of HANDS as such are presented. Second, conclusions regarding the use of HANDS in a school setting and the role of the teachers in the use of HANDS are brought forward. Third, conclusions regarding the technical side of the HANDS software are included.
On the basis of the studies mentioned above, the following general conclusions can be drawn on the effectiveness and visual design of the HANDS Mobile toolset:
- The visual user interface of the HANDS Mobile toolset has been designed adaptively, that is, in accordance with the specific attentional needs of adolescents with ASD. This conclusion assumes that the actual visual settings of the user interface are chosen carefully according to the specific needs of the individual user. These are prerequisites of any successful intervention.
- In the case of *specific, highly focused psycho-educational interventions*, such as supporting adolescents with ASD in performing specific social or daily-life behaviours that are problematic for them, the HANDS Mobile toolset has proven to be a *highly efficient medium of intervention*, at least in the very short term. In such situations, HANDS-assisted intervention *can be significantly more effective* than traditional (‘paper and pencil’) support tools. Again, this conclusion assumes that the decision to use the HANDS toolset is made on the basis of careful consideration of the individual user’s specific needs for support, and that the actual focus and content of the intervention is set and designed on the basis of such considerations and a professional understanding of the principles of psycho-educational intervention and support in ASD.
- Appropriately used on a regular basis over a longer run (months), the HANDS toolset seems to have more general positive effects on developing social and daily-life skills in teenagers with ASD. From our studies, we cannot (yet) positively tell whether these effects are significantly stronger than those of traditional means of psycho-educational intervention, but our results suggest that they are at least at the same level. Again, these longer-run positive effects presuppose careful consideration of whether applying HANDS-based mobile cognitive support is appropriate for the given individual, as well as careful composition and continuous monitoring of the specific details and contents of the interventions, based on expertise in evidence-based psycho-educational approaches to autism.
- It should further be emphasised that these potential beneficial effects all seem to depend on
a. the individual’s specific needs, strengths, weaknesses, motivations and attitudes;
b. the pedagogical approach and expertise of the teacher;
c. the institutional and professional culture of the school;
d. and most probably on several further factors related to the socio-emotional context of the pedagogical intervention.
These factors were not quantitatively investigated, but qualitative research findings from other teams of the HANDS Consortium strongly suggest their relevance.
The tests support the following conclusions regarding the role of the teachers and the schools in the further development of mobile persuasive tools for young people with ASD:
- The teachers should use tunnelling and similar methods from persuasive design. Such methods should be further refined and developed. The users of such systems should make use of the existing best-practice examples developed during the HANDS project and available on the HANDS system.
- Teachers should develop interventions for HANDS and similar systems based on recognition of the fact that student awareness of needs and internal motivation for behaviour change is a key mediating factor. Rather than starting from a position of “teacher knows best”, they should work collaboratively with children and young people to identify interventions that the child or young person themselves assents to.
- In school-based implementations of such systems, strong consideration should be given to increasing the autonomy of the child or young person in terms of their level of control over the interventions that are developed for them in HANDS-like systems. Although some level of adult supervision and facilitation will always be required in school-based implementations, the balance should be “tipped” further towards the child’s own control of the development of interventions.
- The data also suggest that a school’s perspective on teaching social and life skills to its students can be a limiting factor in developing the use of support tools outside the school, with the eventual aim of student autonomy.
- Consideration should be given to implementing and testing HANDS-like technology within Further Education and post-16 environments. This could be extended to Higher Education and workplace settings where there would be an equivalent focus on life skills, although this would depend on the status and function of intermediaries such as Higher Education support staff in mediating the use of HANDS-like technology with young people with ASD.
- Consideration should also be given to placing greater emphasis on the role of parents in mediating the use of HANDS-like technology. Where parents were more involved in the planning stage of HANDS, i.e. deciding what, how and when to put scenarios and interventions onto the HANDS tool, teachers felt better informed and more confident in their decisions about what to put on the HANDS system. Where parents were involved in implementing and supporting their child’s use of HANDS with the teacher, the child had someone outside of school to turn to when they had technical or other difficulties with using the HANDS tool and phone.
The test results also support the following conclusions regarding the technical side of the systems:
- HANDS-like systems should include a specific Smartphone application that allows easy access to the CoMe server application via a well-designed “app” interface on the Smartphone itself. This will make it possible to update interventions on HANDS or similar systems more rapidly, leading to greater flexibility.
- Experiences across three of the test school sites provide further evidence that students’ relation to HANDS is shaped by their identification with the Smartphone and the other phone features.
- For systems similar to HANDS, developers should aim to accommodate the use of the child’s own mobile device for the mobile persuasive application. An inescapable corollary of this, given the current diversity in Smartphone operating platforms, is that any future HANDS-like system should be developed on a cross-platform basis.
- The software specification and development function should be very tightly aligned to the user perspective. The software must a) load rapidly, b) react to user inputs rapidly and c) function highly reliably. Furthermore, technical factors such as battery lifetime and charging the phone remain problematic. Clearly, achieving these reliability parameters will depend on a combination of hardware and software factors.
- HANDS-like technology should come pre-loaded with personal-trainer-like functions set to remind the user to charge their phone. Furthermore, a charging protocol should be implemented by the key intermediary working with the child with ASD, focusing on ensuring that the child regularly charges their device.
- Additionally, HANDS-like technology should be specified to minimize battery drain, thus maximizing the charging life span for the user.
**Towards a future research agenda**
As the HANDS system has proven to be effective and the research on its efficiency successful, we see two major directions for a future research agenda that would take the current project and its results as starting points. These two major directions relate to potential extensions of the target groups, and to improving development and test methodologies.
### 3.1 On potential extension of user/mediator groups
(1) The relative success of the current HANDS project and system opens the way to extending the target groups of HANDS and/or its future versions *within the autism spectrum*. At least two further ‘segments’ of the autism spectrum appear as strong candidates for such an extension: (a) *lower-functioning / younger children* than the subjects involved in the current HANDS development, and (b) *high-functioning teenagers and young adults who live/study/work in an inclusive environment*, and *not* in an autism-specific institution (school or day home, etc.). While a future version of HANDS could support social communication and relatively elementary self-management skills in the former potential target group, it could assist individuals in the latter potential target group in their daily independence and effective learning and working.
Naturally, extending a future version of HANDS to either or both of these potential target groups requires careful analysis of pros, cons, risks and potential benefits, as well as ethical considerations, but it can be argued in several ways that both directions are promising.
(2) Both potential extensions of the HANDS target group raise the issue that involving new beneficiary groups may necessitate *finding adequate novel mediator groups* – that is, well-trained support persons who are able to tailor and manage the contents of the HANDS mobile system in accordance with individual needs and challenges. *Parents* appear as plausible candidate mediators in both groups, while kindergarten nurses may also play such a role in the younger children’s group. Other professionals, such as social workers, may play this role in the case of the high-functioning teenager and adult groups.
Should any of these possibilities come to reality, there can be no doubt that *well-designed extensive training programmes must be elaborated and implemented* in order to make these mediators ready for the task. These training programmes have to involve not only knowledge of the appropriate support practices for individuals with ASD and of the technical details of managing contents in the HANDS system, but also of the ethical principles and rules governing such support activity.
(3) Beyond finding further target groups within the autism spectrum, it is a reasonable research goal to identify such *groups beyond the autism spectrum*, too. As argued in section 1.2 of this document, such further potential beneficiary groups include
- people (older children, adolescents and adults) with Attention Deficit / Hyperactivity Disorder (ADHD);
- mainly adults with Developmental Intellectual Disabilities;
- adults with early phase and/or mild form of Alzheimer Disorder;
- people with traumatic brain injury, mainly with frontal/prefrontal brain injuries.
As these groups show cognitive/behavioural difficulties partly analogous to those shown by individuals with ASD, we can reasonably expect that the HANDS system can be made appropriate for their cognitive support as well.
Again, naturally, extending a future version of HANDS to any of these potential target groups
- requires a careful analysis of pros, cons, risks and potential benefits as well as ethical considerations;
- necessitates finding adequate novel mediator groups;
- and makes it indispensable to elaborate and implement well-designed extensive training programmes for these mediators.
### 3.2 On research methodology for efficiency testing of assistive ICT for groups with special needs
(1) As HANDS seems to be a useful assistive tool for some individuals with ASD, but not for all of them – similarly to other ICT-based assistive tools, as reviewed in part 2 of this document – it seems to be an important research task to attempt *to profile successful cases* of application. Within the HANDS project, such an attempt has been made with considerable success by the ALE research group, but considerable room remains for clarifying the specific conditions, both within and around the individual, that render ICT-assisted intervention/support successful versus unsuccessful.
(2) We strongly believe that the overall mixed-mode research strategy within the HANDS consortium in general, as well as the multi-method, multi-level quantitative efficiency testing within the Cognitive Psychology Work Package, have proven to be highly productive within the current HANDS project. This success raises the possibility of, and the need for, elaborating a general methodological model for testing assistive ICT tools for groups with specific needs.
(3) Such a general model, in our conviction, should involve specific practices in order to avoid occasional within-project interferences (a) between development and testing, and (b) between testing practices in the different disciplines involved.
(4) Finally, as the integrated analysis of log data (electronic footprints) and psychometric data has proven valuable, elaborating refined methodological models for such integrative analyses of these data sources would certainly be a highly useful research goal.
3.3 Preparation of a Book on the Results of HANDS and the Open Questions Discovered in the Project
Based on the discussions among the HANDS partners, there is a high degree of agreement on what would be important to study further in this context. This common understanding has led to the following proposal for a scientific book dealing with the most central research issues related to the HANDS project.
Following an approach from IOS Press, a Dutch academic publishing house, the HANDS consortium has signed an agreement to publish a book about the experiences and outcomes of the HANDS project by December 2012.
3.3.1 General Outline of the Book
The book will form part of the IOS Press series Ambient Intelligence and Smart Environments (AISE). The book will include chapters on (subject to editorial confirmation):
1. Introduction – Wider context of assistive and ambient technology, use of mobile devices, E-inclusion, and technology design strategies
2. ASD - the need for inclusion and approaches to E-Inclusion
3. Persuasive Design Theory (for mobile devices) and its application to psycho-educational settings
4. Experiences with implementation – technological and pedagogical factors, including case studies of “success stories”
5. The Ethics of Persuasive Design in Psycho-educational settings including specific issues around the use of GPS
6. Persuasive Design in Action – relevant factors
7. Mixed Mode Methodology / Recommendations for the use of persuasive mobile design with young people with cognitive and social impairments
3.3.2 Some Key Highlights Expected in the Book
*Autism and Technology*
The book will show how the specific cognitive, behavioural and motivational patterns characterising individuals with autism spectrum conditions lead to specific needs for support and inclusion, and what the core approaches are to exploiting information and communication technology to fulfil these needs. Accordingly, a short introduction is first given to the most important basic facts about Autism Spectrum Disorders (ASDs), whose biological foundations, in interaction with some social factors, lead to atypical patterns of key human abilities. Consequently, individuals with ASD often show severe difficulties in social engagement and social participation, as well as in daily-life management. These difficulties, in turn, give rise to a very high risk of social isolation and marginalisation. We also examine briefly the kinds of specific problems and needs for support relevant to teenagers with ASD. We present a short overview of the existing key approaches to using ICT tools to support individuals on the autism spectrum, with an emphasis not primarily on technological but on functional aspects. A map of this highly specific but growing field is outlined, with the HANDS system located on it.
*The experience of using HANDS*
When covering experiences in the HANDS project, there will be a focus on the teachers using the HANDS mobile tool and what we have learned about the challenges and potential of using persuasive cognitive support tools with young people with ASD in educational settings. These include “product” orientated experiences, encompassing technical, usability, and functionality issues. For example, the use of the HANDS mobile tool in school highlighted the importance of flexible screen layouts. This was found to be particularly important as some children had a preference for text-based layouts whilst others had a preference for image-based layouts. This led to the development, in the second HANDS prototype, of a flexible layout design tool that allowed for case-by-case variation of the balance between text and images on any intervention screen. “Process” orientated experiences included the lessons learned from the implementation process itself, particularly with regard to the extent to which HANDS was incorporated into existing pedagogical practice, as well as the challenges and difficulties associated with achieving a successful integration. For example, the use of the HANDS mobile tool in school highlighted the importance of battery life and indicated the need to develop a charging protocol on a child-by-child basis. This frequently involved liaison with parents and the development of strategies for the use of HANDS across both home and school.
This experience will be illustrated with examples drawn from the direct experience of the teachers and other professionals working with the HANDS mobile tool. In other words, it will illustrate how teachers as professionals can creatively reflect on their own practice in the context of the introduction of a new technology tool. Such reflections will be contextualized in a framework based on the work of Schön (1985) and will make use of the concept of the “reflective practitioner” in illustrating how teachers think about and solve issues involved when working with children with ASD, and the role potentially played by the technology when addressing such issues.
Additionally, the use of the HANDS mobile tool will be presented by teachers working with individual children (teacher-child dyads). These portraits will include: i) a sketch of the teacher and their classroom, including their overall experience of and attitude towards ICT in the classroom, ii) a sketch of the child and their strengths and weaknesses, iii) the particular child’s problem(s) that the teacher focused on when using HANDS, iv) the teacher’s experience of using HANDS and how it related to their existing practice, with a particular focus on challenges and difficulties, v) the teacher’s perception of the difference HANDS made to the child and the problem(s) in focus, vi) links with parents and the home, including the use of the HANDS mobile tool outside of the school, vii) the teacher’s recommendations for future development and implementation of mobile ICT for young people with ASD.
Reference will also be made to institutional and structural issues and the potential role that they can play in mediating successful engagement with new technologies in the classroom.
**Ethics**
The HANDS project may be described as a research project studying the transforming effects of state-of-the-art information and communication technology on the social life of a group of vulnerable young people. As promising as this may sound, the mixture of research, new technology, and young and vulnerable people evidently calls for careful consideration of the ethics of such endeavours. From the very outset of the HANDS project, an ethical board was formed with the purpose of discussing general ethical questions related to HANDS, of ethically evaluating the requirements of the systems to be tested and used by children and young people with an autism diagnosis, and of ethically evaluating all clinical tests involving such children and young people. In the course of the project, the Ethical Board of HANDS has processed several applications for testing and has been closely involved in discussions of a number of ethical aspects of the project.
We will discuss the specific ethical questions and problems of involving young people with an Autism Spectrum Disorder (ASD) in a research project such as HANDS as well as the ethical questions and problems associated with the use of the specific technologies in HANDS.
**Methodology, Evaluation Outcomes and Future Recommendations**
We will present an overview of the complex empirical methodology by which the efficiency and applicability of the HANDS system have been tested. We will summarise the main results from these tests and explore vistas for future developments and wider applications of both the HANDS system and the test methodology. The first part of the chapter is devoted to a brief outline of these contexts and methodological dilemmas, and to an overview of the scheme of the multi-mode methodology designed for testing the efficiency and applicability of the HANDS system. As this scheme is based on both a division of labour and cooperation between methodologically autonomous research streams, the second part of the chapter briefly introduces the designs and methods used by the three disciplinary strands. An important aspect of the overall design has been that the Cognitive Psychology stream applied quantitative methods, while the Applicability in the Learning Environment and Persuasive Design strands used mainly qualitative methods. All streams, however, have had access to the same shared database of raw data, and they have communicated closely during analysis and interpretation. There will be a brief overview of the major research findings – both those that arose from the contributions of the specific research streams, and those that arose genuinely from integrated interpretations. Throughout, specific attention is paid to the benefits of cooperation between the three research streams. This review of the main findings suggests that, with an adequate institutional and technological background and pedagogical embedding, the HANDS system is an efficient element in the toolset for supporting some teenagers on the autism spectrum, as its use is able, in selected cases, to enhance their social, self-management and daily-life skills. Also, the specific, complex research design outlined previously has proven highly productive in revealing positive effects as well as their contextual preconditions and some limitations. Based on these results, we focus on future perspectives in three ways: what are the vistas for further development of the HANDS system; what are the promising directions for widening the target group(s) of potential users; and in what ways could the overall successful methodological approach be further improved.
A plan for the further exploitation and co-operation
The HANDS partners have discussed how the results of the HANDS project and the software developed during the project period can be used in the future for the benefit of young people with an autism diagnosis, who need to be better integrated in society. The conclusion of these discussions is that a new organization should be formed. This organization should continue and further develop the efforts made in the HANDS project. HANDS Open has been chosen as the name of the new organization. This name should be seen as an indication of the intention to extend the use of the tools, and the work on the further development of the methods, to more schools for young people with autism.
The background for the formation of HANDS Open is a common understanding of the intellectual property rights (IPR) related to HANDS. Since all partners have contributed to the establishment of the scientific results as well as the development of the HANDS tools, all partners have some IPR related to HANDS. However, since the co-operation in HANDS has been rather close, it makes no sense to decide precisely which partners should be entitled to which intellectual property rights. For all practical purposes, it makes more sense simply to place the rights in question in the fellowship of partners. The idea is that HANDS Open should, for all practical purposes, function as the joint owner of the HANDS results and the HANDS software.
The formation of HANDS Open
All HANDS partners have been invited to join the HANDS Open organization, and all partners have in fact agreed to join. HANDS Open will be led by a board. The agreement is that each partner can appoint one member of the board. The partners have also elected Morten Aagaard, Aalborg University, as chairman of a working group for HANDS Open. The chairman will lead HANDS Open through its first period. The partners agree that one of the first obligations of the working group will be to formulate proper and formal regulations for the organization, assisted by the office for legal matters at Aalborg University.
New members of HANDS Open
It should be possible to include new members in HANDS Open by unanimous decision of the board. Such new members could be schools for young people with autism or research units with the ambition to develop the HANDS ideas and techniques further. They could also be companies wanting to market the HANDS products (the software, the courses, etc.).
The purpose of HANDS Open
HANDS Open is supposed to deal with a number of rather different challenges. The work carried out within the organization will include activities regarding the present HANDS software as well as the further development of the software and of the use of the HANDS ideas and techniques.
One major task to be carried out within the HANDS Open cooperation has to do with the HANDS server. It is essential for the use of the HANDS software that the server is active and constantly under supervision and maintenance. During the project period, this work has been carried out at Aalborg University, where the server is presently placed. This will continue after the end of the project period. If Aalborg University should at some point in the future wish to stop this activity, it will be the obligation of the board of HANDS Open to find an acceptable solution regarding the server. (In fact, the present leaders of the computer department of Aalborg Municipality have already indicated that they might be ready to accept the obligation of hosting and maintaining the HANDS server, should it turn out to be needed in the future.)
It will also be an important task for the HANDS Open organization to develop the HANDS tools and techniques further. In particular, the educational perspectives will be important. The HANDS tools should be seen in an e-learning context. It is essential that the teachers are trained in the use of the HANDS tools and the use of the data stored on the HANDS server. For this reason, one very important challenge for HANDS Open will be to establish a certified education for experienced practitioners employed at schools for young people with autism (see below).
It should also be mentioned that an important challenge within the HANDS Open organization will be to investigate the potential extension of HANDS or its spin-offs to other groups in need of cognitive support, especially groups with needs somewhat analogous to those of people with ASD.
Furthermore, it will be important for HANDS Open to establish more research in order to develop new ICT tools that can be relevant alongside the HANDS tools already developed. One such project could be that of “Human sensing”, which has already been preliminarily discussed among the HANDS partners (see http://www.humansensing.blogspot.com and Rosalind Picard’s paper on sensor use and individuals with autism, “Future Affective Technology for Autism and Emotion Communication”, www.media.mit.edu/affect/pdfs/09.Picard-PhilTranRoyalSocB.pdf). In addition, it is essential that the HANDS Open cooperation pays due regard to the ethical issues involved in the HANDS techniques. In particular, it is important that the procedures involved in the treatment of person-sensitive data are constantly monitored in order to make sure that no personal rights are violated and that personal integrity is preserved.
Finally, it will be important for the cooperation within HANDS Open to establish further relevant business contacts in order to explore the commercial potential of the HANDS software and techniques. In the present situation, the companies already related to the HANDS project do not want to invest in the further development of the HANDS software and techniques; however, this might have to do with the general financial climate. It is conceivable that things may be different in a better financial situation, which will hopefully occur again soon.
**Towards a certified education for experienced practitioners employed at schools for young people with autism**
The main idea in HANDS is that the individual teacher at a school for young people with autism should tailor individual tools for each of her or his students, and that the teacher should benefit from the fact that the use of the tools is monitored at the HANDS server when supervising her or his students. Given this, it is obvious that the training of the teachers at the schools is essential to the success of the HANDS ideas. One of the conclusions of the HANDS project is that too little emphasis has been put on training the teachers in the use of the HANDS software and techniques. For this reason, it will obviously be a good idea to establish a certified HANDS education for experienced practitioners employed at schools for young people with autism. This education should equip the teachers with sufficient knowledge and technical skills to use the HANDS toolbox to tailor individual tools and to make use of the data stored on the HANDS server. Furthermore, a substantial part of this education should be to make the participants aware of the ethical problems and challenges involved in treating person-sensitive data carefully and respectfully.
Clearly, this kind of certified education has to be established on a national basis – at least for language reasons. On the other hand, many parts of the education will obviously be alike in spite of the language differences. In fact, this educational activity can, to a large extent, be organized and supported from HANDS Open, even though it is an international organization.
**Financial issues**
The HANDS partners do not have to pay anything to become and stay members of the HANDS Open organization. It is expected that the activities in HANDS Open can, to a large extent, be carried out without a common economy. One option is to make case-by-case decisions on applications for any funding that might be needed. Likewise, special arrangements may be needed whenever various costs related to HANDS Open have to be covered. It may be decided that companies, schools, or other organizations have to pay a certain amount in order to become members of HANDS Open. The board will have to decide how any income of this kind should be spent and administered.
The formal organizational structure of HANDS Open
The formal structure of HANDS Open is quite straightforward:
```
HANDS Open Board
|
|-----------------|
| Partner 1 |
| Partner 2 |
| Partner 3 |
|
|-----------------|
| HANDS Education |
| New development |
```
*Formal structure of the HANDS Open organisation*
The HANDS Open Board has the overall responsibility for the HANDS Open organization. It consists of all partners in HANDS Open. If new partners are to be included, the HANDS Open Board has to accept them unanimously. This structure keeps the organization minimal and supports scenario-oriented management decisions in a rapidly changing market.
The responsibility of the HANDS Open Board is
- To manage the HANDS Open collaboration in a scenario-driven manner.
- To make an overall ICT development strategy.
- To confirm and support large funding applications.
- To consider new collaboration partners.
- To decide on common experiments.
- To describe the overall purpose and quality of HANDS Open education programmes.
- To approve education programmes.
- To manage ICT services (server, e-store, web service).
- To facilitate new initiatives.
Legally speaking, the HANDS Open organization is constituted by a “Collaboration Agreement”. Partners should be responsible for taking initiatives according to their capacity and their knowledge of national market conditions, rather than leaving such responsibility to the HANDS Open Board.
In that sense, the responsibility for a strong HANDS Open collaboration is in the hands of the members’ initiative.
Some activities within the HANDS Open organization may generate revenue. It will be the responsibility of the HANDS Open Board to decide how this revenue should be spent or split between partners. A few minor activities are mandatory, though:
- Developing local HANDS templates
- Maintaining locally hosted websites
This way, HANDS Open is in line with the recommendations in D8.1.
A potential split of revenue between partners has to balance the local as well as the global needs of HANDS. A potentially global HANDS Open organization should obviously focus on further development of the HANDS Open software.
The certified HANDS Open education is one cross-disciplinary activity expected to take place within the organization. Others could be:
- Research – experiments that require participants from several countries and several schools.
- Research – large international and interdisciplinary projects, e.g. funded by the European Commission, inspired by the work done at Aalborg University (AAU)\(^1\) and Massachusetts Institute of Technology (MIT)\(^2\).
- School driven collaboration, e.g. oriented towards the use of sensor technology.
- Exchange of teachers and/or pupils between the schools.
Further considerations regarding the practical implementation of the HANDS Open idea may be found in the Exploitation section in the Final Project Report.
---
\(^1\) Vbn.aau.dk/en/projects/humansensing(f89a6e2f-3e08-46c1-992b-29f385c62ab9).html
\(^2\) www.media.mit.edu/affect/pdfs/09.Picard-PhilTranRoyalSocB.pdf
References
Beaumont, R., & Sofronoff, K. (2008). A multi-component social skills intervention for children with Asperger syndrome: The Junior Detective Training Program. *The Journal of Child Psychology and Psychiatry*, 49, 743–753.
Bernard-Opitz, V., Sriram, N., & Nakhoda-Sapuan, S. (2001). Enhancing social problem solving in children with autism and normal children through computer-assisted instruction. *Journal of Autism and Developmental Disorders*, 31(4), 377–384.
Calvano, R., Mesibov, G., & O’Callaghan, C. (2006). Computer assisted improvement of social and executive functioning in Asperger’s Disorder. Presentation at the ASA’s 37th National Conference on Autism Spectrum Disorders (July 13–15, 2006).
Cihak, D. F., et al. (2010). The use of video modelling via a video iPod and a system of least prompts to improve transitional behaviours for students with autism spectrum disorders in the general education classroom. *Journal of Positive Behaviour Intervention*, 12(2), 103–115.
Golan, O., & Baron-Cohen, S. (2006). Systemizing empathy: Teaching adults with Asperger syndrome or high-functioning autism to recognize complex emotions using interactive multimedia. *Development and Psychopathology*, 18, 589–615.
Gyori, M., Kanizsai-Nagy, I., Stefanik, K., Vigh, K., Őszí, T., Balázs, A., & Stefanics, G. (2008). Report on initial cognitive psychology requirements on software design & content. HANDS Project deliverable D2.2.1.
Gyori, M., Stefanik, K., Kanizsai-Nagy, I., Őszí, T., Vigh, K., Balázs, A., & Stefanics, G. (2008). Report on test methodology and research protocols. HANDS Project deliverable D2.1.1.
Gyori, M., Stefanik, K., Kanizsai-Nagy, I., Őszí, T., Vigh, K., Várnagy, Z., & Balázs, A. (2010). Evaluation of Prototype 1 and requirements for Prototype 2 – in the perspective of Cognitive Psychology. HANDS Project deliverable D2.4.2.
Gyori, M., Stefanik, K., Kanizsai-Nagy, I., Őszí, T., Vigh, K., Várnagy, Z., & Balázs, A. (2011). Report on efficiency testing. HANDS Project deliverable D2.5.1.
Hetzroni, O. E., & Tannous, J. (2004). Effects of a computer-based intervention program on the communicative functions of children with autism. *Journal of Autism and Developmental Disorders*, 34, 95–113.
Hopkins, I. M., Gower, M. W., Perez, T. A., Smith, D. S., Amthor, F. R., Wimsatt, F. C., & Biasini, F. J. (2011). Avatar Assistant: Improving social skills in students with an ASD through a computer-based intervention. *Journal of Autism and Developmental Disorders*, Online First, 3 February 2011.
Matson, J. L., & Sturmey, P. (2011). *International Handbook of Autism and Pervasive Developmental Disorders*. Springer.
Mechling, L. C., Gast, D. L., & Seid, N. H. (2009). Using a personal digital assistant to increase independent task completion by students with autism spectrum disorder. *Journal of Autism and Developmental Disorders*, 39, 1420–1434.
Mechling, L. C., & Savidge, E. J. (2011). Using a personal digital assistant to increase completion of novel tasks and independent transitioning by students with autism spectrum disorder. *Journal of Autism and Developmental Disorders*, 41, 687–704.
Mitchell, P., Parsons, S., & Leonard, A. (2007). Using virtual environments for teaching social understanding to 6 adolescents with autistic spectrum disorders. *Journal of Autism and Developmental Disorders*, 37(3), 589–600.
Reichow, B., Volkmar, F. R., & Cicchetti, D. V. (2008). Development of the evaluative method for evaluating and determining evidence-based practices in autism. *Journal of Autism & Developmental Disorders*, 38, 1311–1319.
Reichow, B. (2011). Development, procedures, and application of the evaluative method for determining evidence-based practices in autism. In F. R. Volkmar, B. Reichow, D. V. Cicchetti, & P. Doehring (Eds.), *Evidence-based practices and treatments for children with autism*. New York, NY: Springer.
Seida, J. K., Ospina, M. B., Karkhaneh, M., Hartling, L., Smith, V., & Clark, B. (2009). Systematic reviews of psychosocial interventions for autism: An umbrella review. *Developmental Medicine & Child Neurology*, 51(2), 95–104.
Wainer, A. L., & Ingersoll, B. R. (2010). The use of innovative computer technology for teaching social communication to individuals with autism spectrum disorders. *Research in Autism Spectrum Disorders*, 5(1), 96–107.
Whalen, C., Liden, L., Ingersoll, B., Dallaire, E., & Liden, S. (2006). Behavioral improvements associated with computer-assisted instruction for children with developmental disabilities. *Journal of Speech and Language Pathology and Applied Behavior Analysis*.
Whalen, C., Moss, D., Ilan, A. B., Vaupel, M., Fielding, P., MacDonald, K., et al. (2010). Efficacy of TeachTown: Basics computer-assisted intervention for the intensive comprehensive autism program in Los Angeles Unified School District. *Autism*, 14, 179–197.
Adaptive Optimal Stochastic Control of Delay–Tolerant Networks
Eitan Altman, Francesco De Pellegrini, Daniele Miorandi and Giovanni Neglia
September 2016
Abstract
Optimal stochastic control of delay tolerant networks is studied in this paper. First, the structure of optimal two-hop forwarding policies is derived. In order to be implemented, such policies require knowledge of certain global system parameters, such as the number of mobiles or the rate of contacts between mobiles. However, such parameters may be unknown at system design time or may even change over time. To address this problem, adaptive policies are designed that combine estimation and control: based on stochastic approximation techniques, such policies are proved to achieve optimal performance in spite of the lack of global information. Furthermore, the paper studies interactions that may occur in the presence of several DTNs that compete for access to a gateway node. The latter problem is formulated as a cost-coupled stochastic game, and a unique Nash equilibrium is found. This equilibrium corresponds to the system configuration in which each DTN adopts the optimal forwarding policy determined for the single-network problem.
1 Introduction
Delay–Tolerant Networks (DTNs) are sparse and/or highly mobile wireless ad hoc networks where no continuous connectivity guarantee can be assumed [2]. One central problem in DTNs is related to the routing of messages towards intended destinations.
*Published in IEEE Transactions on Mobile Computing. A shorter version appeared in the Proc. of IEEE INFOCOM, Rio de Janeiro (BR), Apr. 2009 [1]. E. Altman and G. Neglia are with University of Côte d’Azur, Inria, 2004 Route des Lucioles, Sophia-Antipolis (France), email: firstname.lastname@example.org; D. Miorandi is with U-Hopper s.r.l., Via Antonio da Trento 8, 38122 Trento (Italy), email: email@example.com; this work was carried out while he was with CREATE-NET. F. De Pellegrini is with CREATE-NET, via alla Cascata 56/D, 38123, Povo, Trento (Italy), email: firstname.lastname@example.org.
Protocols developed in the mobile ad hoc networks field would fail here, because a complete route to the destination may not exist most of the time. A common technique for overcoming the lack of connectivity is to disseminate multiple copies of the message in the network: this enhances the probability that at least one of them will reach the destination node within a given time delay [3]. This is referred to as epidemic-style forwarding [4] because, much like the spread of infectious diseases, each time a message-carrying node encounters a new node not having a copy thereof, the carrier may infect this new node by passing on a message copy; newly infected nodes, in turn, may behave similarly. The destination receives the message when it meets an infected node. Hereafter, two variants will be considered. In the first one, referred to as the address-centric case, nodes are assigned unique identifiers, and the source node generates messages destined to one specific node. In the second one, called the data-centric case and meant to model a content-centric DTN system [5], messages include metadata describing the content. Nodes have a filter based on which they discriminate whether a message is of interest or not, e.g., by matching against the message’s metadata.
This paper addresses the case where mobile nodes have no a priori information on the encounter pattern. In order to develop the theory thoroughly, the analysis is confined to the case where only the source of the message can generate new copies and hand them over to relays in radio range, while other infected nodes are allowed to forward it to the destination(s) only. The latter is referred to as two-hop forwarding [6, 7]. The main concern of this work is the optimal stochastic control of such a routing protocol. The control variable is the probability of transmitting a message upon a suitable transmission opportunity, i.e., a contact with a node that is not the intended destination. The goal is to maximize the probability of delivering a message within a given time delay, while satisfying specific energy constraints.
The control of forwarding schemes in DTNs has received some attention in the literature. In [8], the authors propose an epidemic forwarding protocol based on the susceptible-infected-removed (SIR) model [9] and show that it is possible to increase the message delivery probability by tuning the parameters of the underlying SIR model. In [10], a detailed general framework is proposed in order to capture the relative performance of different self-limiting strategies. Neither of these two papers formalizes specific optimisation problems. In [11] and its follow-up [12], the authors assume the presence of a set of special mobile nodes, the ferries, whose mobility can be controlled. Algorithms to design ferry routes are proposed in order to optimize network performance. Some works have tackled the problem of estimation and consistency of the network state, e.g., how to produce reliable estimates of mobiles' presence [13], of the number of mobiles [14], or even of the intermeeting frequencies [15]. Compared to such works, the technique proposed in this paper performs such estimation implicitly: based on the theory of stochastic approximations applied to optimal forwarding control, no explicit estimates of the system state are needed beyond the number of released copies of the message. Works more similar to the one presented here are [16, 17, 18]. In [16], the authors consider buffer constraints and derive, based on some approximations, buffer scheduling policies in order to minimize the delivery time. The optimization goal in [17] can be considered a relaxed version of our problem, i.e., the weighted sum of delivery time and energy consumption. Adopting that optimization goal, the authors of [19] provide a rigorous proof of the fact that the dynamics corresponding to the closed-loop optimal control acting over a fluid approximation is the fluid limit – for a large number of relays – of the optimal Markov decision process.
Also, under a fluid model approximation, the work in [18] provides a general framework for the optimal control of the broad class of monotone relay strategies. Apart from the differences in the optimization functions, most of the above works do not address the problem of on-line estimation of optimal policies; an attempt is made in [10, 16] based on some heuristics for the estimation. Finally, game-theoretical tools are applied to the study of competing classes of nodes in a DTN scenario.
The contributions developed in this paper can be summarized with the following three results:
- The analytical characterization of optimal control policies for two-hop routing in DTNs. Optimal policies are proved to have a threshold structure.
- A method – rooted in stochastic approximation theory – able to attain the optimal policy in the absence of knowledge of system-level parameters. Also, the scheme does not require explicit acknowledgement from the destination.
- The optimal control problem is extended to the case of several competing classes of mobile devices. The framework, in this case, is that of cost-coupled stochastic games [20]. The game is proved to have a unique Nash equilibrium, in which each source's best response coincides with the optimal policy determined for the single-class case.
The remainder of the paper is organized as follows. In Sec. 2, the problem is formalized and the system model used in the subsequent sections is presented. In Sec. 3 the structure of the optimal control policies is derived. Methods are presented in Sec. 4 to attain optimal control when part of the system’s parameters are unknown. The multiclass case is introduced in Sec. 5. Numerical results are presented in Sec. 7. Sec. 8 concludes the paper.
2 System Model
A network of \((N + P)\) mobile nodes is considered, each equipped with some form of proximity wireless communications. For a given message, there exist \(P\) potential destinations. In particular, \(P\) takes value 1 in the address-centric case, whereas in the data-centric case it is equal to the number of nodes whose filter matches the message metadata. Let \(N\) be the number of potential relay nodes (including the source). The network is assumed to be sparse, so that, at any time instant, nodes are isolated with high probability. Communication opportunities arise whenever, due to mobility patterns, two nodes get within mutual communication range. Such events are referred to as “contacts.”
The time between subsequent contacts of any pair of nodes is assumed to follow an exponential distribution with parameter \(\lambda > 0\). The validity of this assumption for synthetic mobility models (including, e.g., Random Walk, Random Direction, Random Waypoint) has been discussed in [7]. There exist studies based on traces collected from real-life mobility [21] that argue that inter-contact times may follow a power-law distribution. However, it has been shown that these traces and many others exhibit an exponential tail after a cutoff point [22].
For the sake of tractability, the sequences of inter-contact times are assumed to be mutually independent. The degree to which real-world traces follow this assumption depends on a number of factors, related in particular to the correlation of the mobile devices’ trajectories. Experimental evidence suggests that this assumption can model relatively well scenarios in which mobile trajectories are not correlated and the mobility pattern leads to a “fast mixing” of the IDs of nodes that meet (a good example is Random Waypoint in a sparse scenario with fast-moving nodes). On the other hand, the assumption does not hold in the case of (i) mobiles that tend to remain around a given point (e.g., Random Walk), or (ii) nodes moving in clusters and/or mobility characterized by attraction points (points of interest).
The case of heterogeneous DTNs, i.e., with different intermeeting intensities, is out of the scope of the present work; nevertheless, the presented framework can be extended to handle such a case as well [23].
There can be multiple source-destination(s) sets, but the analysis is here limited to a single message, possibly with many copies, spreading in the network.\(^1\) For simplicity, the message is assumed to be generated at time \(t = 0\) and to be relevant only before some deadline \(\tau\). This applies, e.g., to environmental information or data referring to events of a transient nature (e.g., social events or meetings). The message contains a timestamp reporting its generation time, so that it can be dropped when it becomes irrelevant.\footnote{Time elapsed since generation can be traced by summing up the time elapsed at each node, with no need for global synchronization.}
\(^1\)Results in sections 3 and 4 are valid when multiple messages are present in the network at the same time, provided that the bandwidth and the buffer are large enough to assure that the different propagation processes are not interfering.
Due to disconnected operations, it is assumed that no feedback mechanism allows the source or other mobiles to know whether the message has been successfully delivered to the destination within time $\tau$. The focus of the work is on a set of relaying strategies that can be defined as probabilistic two-hop routing strategies. At each encounter between the source and a mobile that does not have the message, the message is relayed with some probability taking values in $U = [u_{\text{min}}, u_{\text{max}}] \subseteq [0, 1]$. It is worth observing that $0 < u_{\text{max}} \leq 1$ is equivalent to a thinning operation of the meeting point process, i.e., to assuming $\lambda' = u_{\text{max}} \lambda$; $u_{\text{min}} > 0$ is required by certain rewarding mechanisms used to incentivize relays to take part in the forwarding process [24, 25].
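As a quick numerical illustration of this thinning remark, the following minimal sketch (Python; the rate, probability and horizon values are our own illustrative choices, not taken from the paper) simulates a Poisson meeting process and verifies that accepting each meeting with probability $u_{\text{max}}$ yields an effective rate close to $u_{\text{max}} \lambda$:

```python
import numpy as np

rng = np.random.default_rng(42)
lam, u_max, horizon = 0.5, 0.4, 200_000.0   # illustrative values only

# Meetings of a node pair form a Poisson process of rate lam; accepting
# each meeting independently with probability u_max thins the process
# to an effective rate u_max * lam (here 0.2).
n_meetings = rng.poisson(lam * horizon)
accepted = rng.random(n_meetings) < u_max

print(f"empirical thinned rate: {accepted.sum() / horizon:.4f}")
print(f"theoretical rate      : {u_max * lam:.4f}")
```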
If a mobile that is not the source has the message and is in contact with another mobile, it transfers the message if and only if the other mobile is the destination node. It is worth remarking that, with two-hop routing, control is operated only by the source, and the main energy cost to deliver a message is borne by the source of the message rather than by the relay nodes.
A discrete-time model is adopted in the rest of the work, i.e., time is slotted with slot duration $\Delta$, where $\tau = K \Delta$. The $n$-th slot corresponds to the interval $[n \Delta, (n + 1) \Delta)$. In this discrete-time setting, a mobile receiving a copy during a tagged time slot can forward it starting from the following one. Moreover, the forwarding probability during $[n \Delta, (n + 1) \Delta)$ is a constant, denoted by $u_n$.
Let $X_n$ be the number of mobiles, not including the potential destinations, that have a copy of the message at time $n \Delta$ (i.e., at the beginning of the $n$-th slot), with $X_0 = 1$. Under the above assumptions, $X_n$ is a Markov chain with possible states $1, \cdots, N$. The transition rates depend on the forwarding probability used by the source in each time slot, so a natural way to optimize the performance of the system is to control this parameter.
The problem addressed in this paper is to maximize the probability of delivering the message to the destination\footnote{In the data-centric case, one randomly chosen potential destination node is considered.} by time $K \Delta$, under a constraint on the expected number of message copies injected into the system. By constraining the number of copies, the proposed model bounds the energy expended at the source and at the relays. Indeed, if constant per-contact energy is required to forward a message, such a constraint can express the network energy consumption due to both transmission and reception of a message.
The aim is to determine optimal time-dependent forwarding policies the source
can implement.
More formally a forwarding policy (control policy) is defined as a function $\mu : \{0, 1, \ldots, K - 1\} \to U$. The control used at time $n$ is denoted by $u_n$.
In what follows a key role will be played by two types of forwarding policies, static and threshold policies, defined as follows:
**Definition 2.1.** A policy $\mu$ is a static policy if $\mu$ is a constant function, i.e., $u_n = p \in U$ for $n = 0, 1, 2, \ldots, K - 1$. A policy $\mu$ is a threshold policy if there exists $h \in \{0, 1, 2, \ldots, K - 1\}$ (the threshold) such that $u_n = u_{\text{max}}$ if $n < h$ and $u_n = u_{\text{min}}$ if $n > h$ (the value $u_h$ at the threshold slot may be any element of $U$).
Static and threshold policies differ from the implementation standpoint. In fact, with static policies, at each communication opportunity message forwarding is done with constant probability $p$. Conversely, with threshold policies, each time a mobile has a forwarding opportunity, it checks the time $t$ elapsed since the message generation time and forwards the message with some probability $u(t)$.
It is worth observing that static and threshold policies depend on a few parameters only, i.e., the control $p$ for static policies, and the threshold $h$ and the corresponding value $u_h$ for threshold policies.
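To make these definitions concrete, the following minimal sketch (Python; all function names and parameter values are our own illustrative choices) simulates the copy-spreading chain $X_n$ under a static and a threshold policy. It relies on the fact, derived in Sec. 3.1 below, that each susceptible relay independently receives a copy during slot $n$ with probability $1 - e^{-\lambda u_n \Delta}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def static_policy(p):
    return lambda n: p                          # u_n = p in every slot

def threshold_policy(h, u_max=1.0, u_min=0.0):
    # u_n = u_max before slot h, u_min afterwards (the free value u_h
    # allowed at the threshold slot itself is omitted for simplicity).
    return lambda n: u_max if n < h else u_min

def simulate_X(policy, N, lam, Delta, K, X0=1):
    """One sample path of X_n: each of the N - X0 susceptible relays
    independently receives the copy in slot n with probability
    1 - exp(-lam * u_n * Delta), i.e. source contacts thinned by u_n."""
    X, susceptible = [X0], N - X0
    for n in range(K):
        p_inf = 1.0 - np.exp(-lam * policy(n) * Delta)
        newly = rng.binomial(susceptible, p_inf)
        susceptible -= newly
        X.append(X[-1] + newly)
    return np.array(X)

N, lam, Delta, K = 100, 0.02, 1.0, 50           # illustrative values
print("static    X_K =", simulate_X(static_policy(0.5), N, lam, Delta, K)[-1])
print("threshold X_K =", simulate_X(threshold_policy(h=20), N, lam, Delta, K)[-1])
```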
The following section provides the characterization of optimal static and threshold policies. Table 1 reports the notation used throughout the paper for ease of reading.
### 3 Characterization of Optimal Policies
#### 3.1 Preliminaries
Let $F_D(n)$ denote the probability that a message generated at time 0 is received before $n \cdot \Delta$ under control policy $\mu$ (by the intended destination in the address-centric case and by one randomly chosen potential destination in the data-centric case). The objective is to derive policies that maximize $F_D(K)$, while satisfying the following constraint on the expected number of message copies: $\mathbb{E}[X_K] \leq \Psi$. Let $X_0$ be the number of nodes with a copy of the message at time 0.
Before proceeding with the main statements of this section, closed forms for the dynamics of $\mathbb{E}[X_n]$ and $F_D(n)$ are derived.
Let $\zeta_r(j)$ be the indicator that the $j$-th relay among the $N - X_0$ relays that do not have the message at time 0, receives the message in slot $[r \Delta, (r + 1) \Delta)$, for $0 \leq r \leq n - 1$.
Random variables $\zeta_r(j)$ are stochastically increasing in the control actions $u_k$, and so are their sums (see [26] for the definition and properties of the usual stochastic order).
| Symbol | Meaning |
|--------|---------|
| $P$ | number of destinations |
| $N$ | number of relay nodes (including the source) |
| $\lambda$ | intermeeting intensity |
| $\Delta$ | time slot |
| $\tau = K \cdot \Delta$ | timeout value |
| $X_n$ | number of nodes having a copy of the message at time $n\Delta$ |
| $\Psi$ | maximum expected number of message copies |
| $F_D(n)$ | probability that the message is delivered by time $n\Delta$ |
| $\mu(\cdot)$ | control policy |
| $u_n$ | value taken by the control variable (i.e., forwarding probability) at time $n\Delta$ |
| $p$ | value taken by the control variable under static control |
| $h$ | time threshold |
| $\theta$ | $= \sum_{k=0}^{K-1} u_k$ |
| $\beta$ | $\theta$ value for the optimal policy |
| $\zeta_r(j)$ | indicator that the $j$-th mobile relay, among the $N - X_0$ ones that do not have the message at time 0, receives it during the $r$-th slot |
| $\overline{X}_m$ | estimate of $\mathbb{E}[X_K]$ at the $m$-th round of the stochastic approximation algorithm |
| $\Pi_H(u)$ | projection over $H$ of the value $u$ |
| $\{\cdot\}^{(i)}$ | superscript indicates that the quantity refers to the $i$-th class of mobile nodes |
| $Y_n^{(i)}$ | number of class $i$ infected nodes that can transmit to the destination during the $n$-th time slot |
| $S_n$ | total number of infected nodes that can transmit to the destination during the $n$-th time slot |
| $S_n^{(-i)}$ | total number of infected nodes, excluding class-$i$ ones, that can transmit to the destination during the $n$-th time slot |
Table 1: Notation used throughout the paper.
order). Also, variables $\zeta_r(j)$ are independent and identically distributed Bernoulli random variables. The expected value $E[\zeta_r(j)] = (1 - \exp(-\lambda u_r \Delta)) \exp(-\lambda \Delta \sum_{h=0}^{r-1} u_h)$ is obtained by combining the independent increments of the intermeeting process with a thinning argument, because the source node implements independent random decisions in each slot.
Hence, $\sum_{r=0}^{n-1} \zeta_r(j)$ is the indicator that the $j$-th relay among the $N-X_0$ relays that do not have the message at time 0 receives the message by the end of slot $n-1$. It holds
$$\Pr\left\{\sum_{r=0}^{n-1} \zeta_r(j) = 1\right\} = 1 - \exp\left(-\lambda \Delta \sum_{r=0}^{n-1} u_r\right)$$
The dynamics of the number of message copies obeys the equation
$$X_n = X_0 + \sum_{j=1}^{N-X_0} \sum_{r=0}^{n-1} \zeta_r(j). \quad (1)$$
As observed above, $X_n$ is stochastically increasing in the control actions as well: if policies $\mu'$ and $\mu$ differ only at index $k$, $0 \leq k \leq K - 1$, where $u'_k > u_k$, it follows that
$$X'_n >_{st} X_n, \forall n > k \quad (2)$$
This formalizes the intuition that, under the monotonicity conditions of our theoretical framework, the higher the forwarding probability, the higher the number of infected nodes. A stronger characterization for the monotonicity in the control action is provided later in this section.
For the expected number of message copies, it holds
$$E[X_n] = X_0 + E\left[\sum_{j=1}^{N-X_0} \sum_{r=0}^{n-1} \zeta_r(j)\right]$$
$$= X_0 + (N - X_0) \left(1 - \exp\left(-\lambda \Delta \sum_{k=0}^{n-1} u_k\right)\right) \quad (3)$$
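As a sanity check, the dynamics (1) can be simulated directly and compared against the closed form (3); the following sketch uses purely illustrative parameter values, not the paper's experimental setting:

```python
import math
import random

# Illustrative parameters only.
N, X0, lam, Delta = 50, 1, 1e-3, 10.0
u = [0.8] * 20 + [0.2] * 20                # arbitrary controls u_0..u_{K-1}, K = 40

def simulate_E_XK(runs=2000):
    """Monte Carlo estimate of E[X_K] following (1): each of the N - X0 relays
    independently receives the message in slot r w.p. 1 - exp(-lam*u_r*Delta),
    provided it has not received it yet."""
    total = 0
    for _ in range(runs):
        x = X0
        for _j in range(N - X0):           # relays evolve independently
            for ur in u:
                if random.random() < 1.0 - math.exp(-lam * ur * Delta):
                    x += 1
                    break                  # at most one reception per relay
        total += x
    return total / runs

closed_form = X0 + (N - X0) * (1.0 - math.exp(-lam * Delta * sum(u)))  # Eq. (3)
print(simulate_E_XK(), "vs", closed_form)  # the two values should be close
```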
To characterize $F_D(n)$, some further notation is introduced. Hereafter, let $X$ denote the stochastic process corresponding to the number of nodes with a copy of the message up to time $n$, i.e., $X = (X_0, X_1, \ldots, X_{n-1})$, and let $x = (x_0, x_1, \ldots, x_{n-1})$ represent a sample path of $X$. Let $\chi = \{0, 1, \ldots, N\}$, so that $X \in \chi^n$ and $X_h \in \chi$ for all $h = 0, 1, \ldots, n-1$. Also, let $P_X(x)$, $x \in \chi^n$, be the probability distribution of $X$. Moreover, $G(n) = 1 - F_D(n)$ denotes the complementary cumulative distribution function of the delay, which reads
$$G(n) = 1 - F_D(n) = \Pr\{\text{no delivery by } n\Delta\}$$
$$= \sum_{x \in \chi^n} P_X(x) \Pr\{\{\text{no delivery by } n\Delta\}|X = x\}$$
$$= \sum_{x \in \chi^n} P_X(x) \Pr\left\{\bigcap_{r=0}^{n-1} \{\text{no delivery in slot } r\}\Big|X = x\right\}$$
$$= \sum_{x \in \chi^n} P_X(x) \prod_{r=0}^{n-1} e^{-\lambda x_r \Delta} = \mathbb{E}\left[e^{-\lambda \Delta \sum_{r=0}^{n-1} X_r}\right]$$
Now, it is possible to observe that
$$\sum_{r=0}^{n-1} X_r = nX_0 + \sum_{j=1}^{N-X_0} \sum_{r=0}^{n-2} (n-r-1)\zeta_r(j)$$
$$= nX_0 + \sum_{j=1}^{N-X_0} V(j) \hspace{1cm} (4)$$
where the auxiliary random variables are defined as
$$V(j) := \sum_{r=0}^{n-2} (n-r-1)\zeta_r(j)$$ \hspace{1cm} (5)
In (5), index $j$ identifies the $j$-th relay among those which do not have the message at time 0; also, slot $n-2$ is the last slot when $j$ can receive the message in order to have a message copy at time $(n-1)\Delta$, i.e., at the beginning of slot $n-1$.
Hence, $V(j)$ is the number of slots during which relay $j$ has been holding the message before time $(n-1)\Delta$: if $j$ received the message in slot $r$, then $(n-1)-r$ slots elapse until the beginning of slot $n-1$. Note that $V(j)$ and $V(i)$ are i.i.d. for $j \neq i$ because the intermeeting processes of the source with relays $i$ and $j$ are i.i.d. as well.\footnote{Observe that the independence assumption is needed only for intermeetings where one of the nodes is the source or a destination.} Also, control $u_{n-1}$ does not appear in (5): in fact, by model assumption, nodes newly infected in slot $n-1$ are able to deliver the message to the destination only starting from the following time slot, i.e., beyond time $n\Delta$.
In particular, letting \( p_V(v) = \Pr\{V = v\} \) (index \( j \) is omitted for simplicity's sake), it holds
\[
p_V(v) = \begin{cases}
e^{-\lambda \Delta \sum_{r=0}^{n-2} u_r}, & v = 0 \\
\left(1 - e^{-\lambda \Delta u_{n-v-1}}\right) e^{-\lambda \Delta \sum_{r=0}^{n-v-2} u_r}, & v = 1, \ldots, n - 1
\end{cases}
\]
Hence, the following expression is obtained
\[
G(n) = E \left[ e^{-\lambda \Delta \left( nX_0 + \sum_{j=1}^{N-X_0} V(j) \right)} \right]
\]
\[
= e^{-\lambda \Delta nX_0}\, E \left[ e^{-\lambda \Delta V(1)} \right]^{N-X_0}
\]
(6)
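Since $p_V$ has the explicit form above, Eq. (6) can be evaluated directly, without simulation; a short sketch with illustrative parameters (none taken from the paper):

```python
import math

# Illustrative parameters only.
lam, Delta, N, X0 = 1e-3, 10.0, 50, 1
u = [0.8] * 20 + [0.2] * 20                # controls u_0, ..., u_{K-1}

def G(n):
    """Complementary CDF of the delay at time n*Delta, via p_V and Eq. (6)."""
    # p_V(0): the tagged relay receives nothing in slots 0..n-2
    pV = {0: math.exp(-lam * Delta * sum(u[:n - 1]))}
    # p_V(v), v = 1..n-1: reception in slot r = n-v-1, nothing before it
    for v in range(1, n):
        r = n - v - 1
        pV[v] = ((1.0 - math.exp(-lam * Delta * u[r]))
                 * math.exp(-lam * Delta * sum(u[:r])))
    ee = sum(math.exp(-lam * Delta * v) * p for v, p in pV.items())
    return math.exp(-lam * Delta * n * X0) * ee ** (N - X0)

print([round(1.0 - G(n), 4) for n in (10, 20, 40)])  # delivery CDF F_D(n)
```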
From (6), monotonicity properties stronger than those expressed by (2) can be derived.
**Corollary 3.1.** Let \( k \in \{0, 1, \ldots, K - 2\} \) and let \( \mu = (u_0, \ldots, u_{K-1}) \) be a given policy. A new policy \( \mu' = (u'_0, \ldots, u'_{K-1}) \) is defined as: \( u'_n = u_n \) for \( n \neq k \) and \( u'_k > u_k \). Let \( X'_K \) and \( F'_D(\cdot) \) be the number of message copies and the CDF of message delay when \( \mu' \) is employed, respectively. It then holds: \( F_D(K) < F'_D(K) \), \( E[X_K] < E[X'_K] \).
**Proof.** The statement concerning \( E[X_K] \) follows by inspection of (3).
The statement concerning \( F_D(K) \) requires some further insight. First observe that, from the expression of \( p_V(v) \) (here with \( n = K \)), and considering \( 0 \leq r \leq K - 2 \), it follows:
\[
\frac{d}{du_r} p_V(v) = \begin{cases}
-\lambda \Delta \, p_V(v), & 0 \leq v < K - r - 1 \\
\lambda \Delta \, e^{-\lambda \Delta u_r} \, e^{-\lambda \Delta \sum_{h=0}^{r-1} u_h}, & v = K - r - 1 \\
0, & K - r - 1 < v \leq K - 1
\end{cases}
\]
(7)
Given policy \( \mu = (u_0, \ldots, u_{K-1}) \), consider the new policy \( \mu' = (u'_0, \ldots, u'_{K-1}) \), defined as: \( u'_n = u_n \) for \( n \neq k \) and \( u'_k > u_k \). Denote the corresponding r.v. defined in (5) as \( V' \) and \( V \), respectively and let \( p_{V'}(v) = p_V(v) - \delta p(v) \). From (7), it follows \( \delta p(v) > 0 \) for \( 0 \leq v < K - k - 1 \), \( \delta p(v) = 0 \) for \( K - k - 1 < v \leq K - 1 \), and, finally:
\[
\delta p(K - k - 1) = - \sum_{h=0}^{K-k-2} \delta p(h)
\]
Now, it is possible to write
\[
E \left[ e^{-\lambda \Delta V'} \right] - E \left[ e^{-\lambda \Delta V} \right] = \sum_{h=0}^{K-k-1} \frac{1}{z^h} \left( p_{V'}(h) - p_V(h) \right)
\]
\[
= -\sum_{h=0}^{K-k-1} \frac{1}{z^h} \delta p(h) = \frac{1}{z^{K-k-1}} \sum_{h=0}^{K-k-2} \delta p(h) - \sum_{h=0}^{K-k-2} \frac{1}{z^h} \delta p(h)
\]
\[
= \sum_{h=0}^{K-k-2} \delta p(h) \left( \frac{1}{z^{K-k-1}} - \frac{1}{z^h} \right) < 0
\]
(8)
where, for ease of notation, \( z := e^{\lambda \Delta} > 1 \). The statement follows from the strict monotonicity of \( G(n) \) in \( E \left[ e^{-\lambda \Delta V} \right] \), as in (6).
Also, as a direct consequence of the previous corollary, the following holds:
**Corollary 3.2.** If an optimal policy exists, either it is the static policy \( \mu_{max} \) with \( \mu_{max}(n) = u_{max}, \forall n \), or it saturates the constraint, i.e., \( E[X_K] = \Psi \).
**Proof.** Consider a policy \( \mu \) that is different from \( \mu_{max} \) (i.e., there exists \( k \) s.t. \( \mu(k) < u_{max} \)) and does not saturate the constraint \( (E[X_K] < \Psi) \). As the expected number of infected nodes is an increasing continuous function of the forwarding probabilities, from \( \mu \) a new policy \( \mu' \) can be obtained by increasing the forwarding probability at \( k \) while still satisfying the constraint \( E[X'_K] \leq \Psi \). The new policy has better performance, since \( F'_D(K) > F_D(K) \), which contradicts the optimality of \( \mu \).
It is immediate to observe that the set of admissible policies is empty if and only if the policy \( \mu_{min}(n) = u_{min} \) for all \( n \), does not satisfy the constraint.
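In terms of the closed form (3), this feasibility test reduces to a one-line check; a sketch using the notation of Table 1:

```python
import math

def admissible_set_nonempty(lam, Delta, N, X0, Psi, K, u_min):
    """The set of admissible policies is nonempty iff mu_min, i.e., u_n = u_min
    for all n, meets the constraint E[X_K] <= Psi (cf. Eq. (3))."""
    e_xk_min = X0 + (N - X0) * (1.0 - math.exp(-lam * Delta * K * u_min))
    return e_xk_min <= Psi
```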
#### 3.2 Threshold structure of the optimal policy
Many Markov Decision Processes (MDPs) have optimal policies with a threshold structure. There has been much work on characterizing properties of the cost and/or transition probabilities of an MDP that imply a threshold structure in the optimal policies; some examples are [27, 28]. In the absence of constraints, MDPs are known to possess pure optimal policies, so it is not surprising that unconstrained MDPs often have pure threshold optimal policies.
The control problem proposed in this work falls into the category of constrained MDPs, where one criterion is maximized while keeping the other one below a threshold. For such problems (where there is a single constraint) it is known from [29] that pure optimal policies need not exist, but that Markov (and in some cases stationary) policies do exist which require randomization in at most one state.
In the rest of this section, it is shown that the optimal policy in the constrained case is a threshold policy, where randomization is needed at most once, i.e., at the time slot that coincides with the threshold. Thus the optimal policy uses one pure action below the threshold, another one above it, and a randomization at the threshold. Interestingly, there are many MDPs that have optimal pure threshold policies but for which, when adding constraints, the optimal policy is quite different from the one obtained here. Examples and conditions to obtain policies that have different structures are given in [30].
The first main result is described by the following theorem:
**Theorem 3.1.** There exists an optimal threshold policy. A non-threshold policy is not optimal.
**Proof.** The existence of an optimal policy follows from elementary properties of Markov decision processes [31]. By contradiction, consider a non-threshold policy $\mu$ that satisfies the constraint ($E[X_K] \leq \Psi$) and construct an alternate policy $\mu'$ that performs strictly better. Since $\mu$ is non-threshold, there exist some time $k \leq K - 1$ and some $\epsilon > 0$ such that $u_{k-1} < u_{\text{max}} - \epsilon$ and $u_k > u_{\text{min}} + \epsilon$.
Let $\mu'$ be the policy obtained from $\mu$ by setting $u'_{k-1} = u_{k-1} + \epsilon$ and $u'_k = u_k - \epsilon$ (the other components are the same as those of $\mu$). Let $X'_n$ be the state process under $\mu'$. Also, $F'_D(\cdot)$ is defined correspondingly.
First, observe that by construction the new policy is admissible: since $\sum_{h=0}^{K-1} u'_h = \sum_{h=0}^{K-1} u_h$, from (3) it follows that $E[X'_K] = E[X_K] \leq \Psi$. Then, from (6), it is sufficient to show that $E[e^{-\lambda \Delta V'}] < E[e^{-\lambda \Delta V}]$.
In order to do so, the following observation is useful: from the expression of $p_V(v)$, it holds $p_{V'}(v) = p_V(v)$ for $v \notin \{K - k - 1, K - k\}$. Also, it holds
$$p_{V'}(K - k) = \left(1 - e^{-\lambda \Delta (u_{k-1} + \epsilon)}\right)e^{-\lambda \Delta \sum_{h=0}^{k-2} u_h}$$
$$> \left(1 - e^{-\lambda \Delta u_{k-1}}\right)e^{-\lambda \Delta \sum_{h=0}^{k-2} u_h} = p_V(K - k)$$
$$p_{V'}(K - k - 1) = \left(1 - e^{-\lambda \Delta (u_k - \epsilon)}\right)e^{-\lambda \Delta (u_{k-1} + \epsilon)}e^{-\lambda \Delta \sum_{h=0}^{k-2} u_h}$$
$$< \left(1 - e^{-\lambda \Delta u_k}\right)e^{-\lambda \Delta \sum_{h=0}^{k-1} u_h} = p_V(K - k - 1)$$ \hspace{1cm} (9)
From the normalization condition it follows
$$p_{V'}(K - k - 1) = p_V(K - k - 1) - \delta, \quad p_{V'}(K - k) = p_V(K - k) + \delta, \quad \text{for some } \delta > 0$$
Finally, it is possible to write
\[
E \left[ e^{-\lambda \Delta V'} \right] - E \left[ e^{-\lambda \Delta V} \right] = \sum_{h=0}^{K-1} \frac{1}{z^h} \left( p_{V'}(h) - p_V(h) \right)
\]
\[
= -\delta \cdot \left( \frac{1}{z^{K-k-1}} - \frac{1}{z^{K-k}} \right) < 0
\]
It is hence proved that \( \mu' \) performs strictly better than \( \mu \), which concludes the proof.
It is now possible to determine the optimal threshold policy. Due to Corollary 3.2, the optimal policy is either the static policy \( \mu_{max} \) or the constraint has to be saturated. In the second case, by using the expression derived above for the average number of copies, the optimal policy satisfies
\[
N - (N - X_0) \exp\left(-\lambda \Delta \sum_{k=0}^{K-1} u_k\right) = \Psi.
\]
Hence the following optimality condition is obtained:
\[
\sum_{k=0}^{K-1} u_k = -\frac{1}{\lambda \Delta} \log \left( \frac{N - \Psi}{N - X_0} \right) =: \beta
\]
(10)
This directly yields the threshold \( h^* \) of the optimal policy, by considering that \( u_n = u_{max} \) for \( n < h^* \) and \( u_n = u_{min} \) for \( n > h^* \) while satisfying Eq. (10). Then
\[
h^* = \max \left\{ l \in \mathbb{N} : 0 \leq l \leq K-1, \ \text{s.t. } v(l) = l \cdot u_{\max} + (K-1-l) \cdot u_{\min} \leq \beta \right\},
\]
(11)
and \( u_{h^*} = \beta - v(h^*) \). In the particular case \( u_{min} = 0 \) and \( u_{\max} = 1 \), this reduces to \( h^* = \lfloor \beta \rfloor \) and \( u_{h^*} = \beta - \lfloor \beta \rfloor \): the optimal policy chooses \( u_k = 1 \) for all \( k < \lfloor \beta \rfloor \), \( u_k = 0 \) for all \( k > \lfloor \beta \rfloor \), and, at the remaining time \( k = \lfloor \beta \rfloor \), it uses \( u_k = \beta - \lfloor \beta \rfloor \).
The same reasoning can be applied to determine the best static policy. In particular, it is \( \mu_{max} \) if \( \mu_{max} \) satisfies the constraint (and in such a case the best static policy is also the optimal one); otherwise Eq. (10) holds and, by imposing \( u_n = p^* \) for all \( n \), it follows that \( p^* = \beta/K \).
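The construction above translates directly into code. The sketch below (with illustrative parameter values, and assuming the constraint-saturating case, since otherwise $\mu_{max}$ is optimal) computes $\beta$ from (10), the threshold from (11), and the best static control $p^*$:

```python
import math

def beta(lam, Delta, N, X0, Psi):
    """Eq. (10): total control budget of a constraint-saturating policy."""
    return -math.log((N - Psi) / (N - X0)) / (lam * Delta)

def optimal_threshold(b, K, u_min, u_max):
    """Eq. (11): largest h with v(h) = h*u_max + (K-1-h)*u_min <= b,
    plus the (possibly randomized) control used at the threshold slot."""
    h = max(l for l in range(K)
            if l * u_max + (K - 1 - l) * u_min <= b)
    u_h = b - (h * u_max + (K - 1 - h) * u_min)
    return h, u_h

# Illustrative values, not the paper's experimental setting.
lam, Delta, N, X0, Psi, K = 1e-3, 10.0, 50, 1, 20, 200
b = beta(lam, Delta, N, X0, Psi)
h, u_h = optimal_threshold(b, K, u_min=0.0, u_max=1.0)
print(f"beta = {b:.2f}, h* = {h}, u_h* = {u_h:.2f}, static p* = {b / K:.3f}")
# beta ~ 49.06, h* = 49, u_h* ~ 0.06, p* ~ 0.245
```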
It is worth remarking that the results on existence, structure and form of the optimal forwarding policy are the same for the address-centric and for the data-centric cases.
#### 3.3 Comments
The result on the optimality of a threshold-type policy deserves some comments. Intuitively, one would expect an optimal policy for this problem to depend on the (Markovian) state $X_k$. The above result shows that the knowledge of such a variable is not necessary to derive an optimal policy. Indeed, the optimal policy falls in the class of open-loop policies, i.e., policies that do not depend on the underlying state. The key point here is that, under the assumptions of the proposed model, a strict monotonicity property holds for the dynamics of the number of copies.
Indeed, the results described above state that (i) the delivery probability is strictly monotone in the number of copies in the network and (ii) the evolution of the number of copies in the network (the state $X_k$) is strictly monotone in the control. These properties do not depend on whether the policy is open or closed loop. Also, (3) shows that by using open-loop policies (given the knowledge of some system-level parameters such as $\lambda$ and $N$) it is possible to fully control the evolution of $E[X_k]$, i.e., of the average number of copies in the network. As the constraint is on $E[X_K]$, the optimal policy turns out to be of the open-loop type: hence, for any optimal closed-loop policy it is possible to derive an equivalent open-loop one.
It is possible to compare the threshold-type policy with the well-known spray-and-wait scheme. In one version of that scheme, the source node distributes a copy of the message to the first $L$ nodes encountered. This can be seen as controlled two-hop routing, where the control is a closed-loop policy, i.e., one that depends on the state of the system (the number of copies $X_k$). By assuming $u_{min} = 0$ it is possible to compare spray-and-wait with a threshold-type policy: the main difference is that the latter enforces a bound not on the number of copies made on each single realization, but rather on the expected number of copies.
### 4 Stochastic Approximations for Adaptive Optimization
From the results in the previous section and, in particular, from (10), it can be seen that the design of optimal policies requires the knowledge of global parameters such as $N$ and $\lambda$. In a real setting, such parameters may be unknown or may change over time. In this section, methods are introduced to achieve optimal control policies in the absence of information on such parameters.
The approach is based on stochastic approximation theory [32]. This framework generalizes Newton's method to determine the root of a real-valued function when only noisy observations of such a function are available.
Recall the two frameworks of optimization considered throughout the paper:
• Static control: find $p^* \in [u_{\text{min}}, u_{\text{max}}]$ such that the policy $u_n = p^*$ has the best performance among all static policies.
• Dynamic control: find $h^* \in \{0, 1, \cdots, K - 1\}$ and $\mu(h^*)$ characterizing the optimal policy.
It is possible to approach the on-line estimation of optimal static and dynamic controls in a unified way. For a generic policy, let $\theta = \sum_{k=0}^{K-1} u_k$ denote the sum of the controls used over the $K$ time slots. $\theta$ is uniquely determined by the policy $\mu$, but it also uniquely identifies a static or a threshold policy. For the static policy it is indeed $u_n = p = \theta/K$, while for the threshold policy the dependence of $h$ on $\theta$ is given by (11).
Note that if $\theta = \beta$, then the two policies are the optimal static and threshold policies determined in the previous section. The problem reduces therefore to the distributed estimation of $\beta$.
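In code, the mapping from the scalar $\theta$ back to a full control sequence might look as follows; this is a sketch that reuses the hypothetical `optimal_threshold` helper from the earlier sketch:

```python
def policy_from_theta(theta, K, u_min, u_max, kind):
    """Map theta = sum of controls to a concrete policy (cf. Sec. 4):
    'static' uses u_n = theta/K; 'threshold' uses the construction (11)."""
    if kind == "static":
        return [theta / K] * K
    h, u_h = optimal_threshold(theta, K, u_min, u_max)
    return [u_max if n < h else (u_h if n == h else u_min) for n in range(K)]
```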
The stochastic approximation algorithm estimates $\beta$ by looking for the unique root in $\theta$ of a certain function over the interval $[\theta_{\text{min}}, \theta_{\text{max}}] = [K \cdot u_{\text{min}}, K \cdot u_{\text{max}}]$.
The algorithm works over rounds. Each round corresponds to the delivery of a set of messages, during which a given policy is used. $\mu(m)$ indicates the policy adopted at round $m$ and $\theta(m) = \sum_{k=0}^{K-1} u_k(m)$ the corresponding $\theta$ value. At the end of each round, an estimate of $E[X_K]$ can be evaluated by averaging the total number of copies made during the round for each different message. Let $\overline{X}(m)$ denote such an average. $\overline{X}(m)$ is used to update $\theta$ according to the following formula:
$$\theta(m + 1) = \Pi_H \left( \theta(m) + a_m (\Psi - \overline{X}(m)) \right), \quad (12)$$
where $\Pi_H(\cdot)$ is the projection onto $[\theta_{\text{min}}, \theta_{\text{max}}]$:
$$\Pi_H(\theta) = \max \left[ \theta_{\text{min}}, \min \left( \theta, \theta_{\text{max}} \right) \right].$$
As discussed above, the new policy $\mu(m + 1)$ is uniquely determined by $\theta(m + 1)$. The length of a round should be chosen so as to enable a stable estimate of the mean number of copies made under the policy currently in use.
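The round-based update can be sketched as follows. Here `simulate_round` is a stand-in for the measurements $\overline{X}(m)$ that a real source would collect, and all numeric values (including the gain $a_m$) are illustrative rather than the paper's experimental setting:

```python
import math
import random

# Illustrative ground truth, unknown to the controller.
N, X0, lam, Delta, K, Psi = 50, 1, 1e-3, 10.0, 200, 20
theta_min, theta_max = 0.0, float(K)       # [K*u_min, K*u_max] with u_min=0, u_max=1

def simulate_round(theta, samples=30):
    """Stand-in measurement of E[X_K]: by (3), each relay ends up infected
    w.p. 1 - exp(-lam*Delta*theta), since only the control sum matters."""
    q = 1.0 - math.exp(-lam * Delta * theta)
    return sum(X0 + sum(random.random() < q for _ in range(N - X0))
               for _ in range(samples)) / samples

theta = 100.0                              # arbitrary initial guess
for m in range(1, 201):                    # rounds
    a_m = 5.0 / m                          # an illustrative C/m gain (cf. Sec. 4.1)
    theta += a_m * (Psi - simulate_round(theta))          # update (12) ...
    theta = min(max(theta, theta_min), theta_max)         # ... projected by Pi_H

target = -math.log((N - Psi) / (N - X0)) / (lam * Delta)  # beta, Eq. (10)
print(f"estimated theta = {theta:.2f}, target beta = {target:.2f}")
```

The printed estimate should settle close to $\beta$, mirroring the convergence result stated next.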
The following theorem shows the convergence property of the algorithm.
**Theorem 4.1.** If the sequence $\{a_m\}$ is chosen such that $a_m \geq 0 \ \forall m$, $\sum_{m=0}^{+\infty} a_m = +\infty$ and $\sum_{m=0}^{+\infty} a_m^2 < +\infty$, the sequence of policies $\mu_m$ converges to the optimal policy with probability one.
**Proof.** On the basis of the considerations at the beginning of this section, it is sufficient to prove that $\theta(m)$ converges with probability one to $\beta$. The proof is divided into two parts. First, the sequence $\theta(m)$ is shown to converge to some limit set of the following Ordinary Differential Equation (ODE)
$$\dot{\theta} = \Psi - E[X_K | \theta]. \tag{13}$$
The convergence is a consequence of Theorem 2.1 in [32] (page 127). In order to use such a result, it is sufficient to show that:
i) $\sup_m E[Z^2(m)] < +\infty$, where $Z(i) = \Psi - \overline{X}(i)$: this is satisfied since $|Z(m)| \leq N$ for all $m$;
ii) $\sum_{m=0}^{+\infty} a_m^2 < +\infty$: this follows from the assumptions on the sequence $\{a_m\}$;
iii) There exist a measurable function $\overline{g}(\cdot)$ and r.v. $\eta(m)$ such that $E_m Z(m) = E[Z(m) | \theta(0), Z(i), i < m] = \overline{g}(\theta) + \eta(m)$;
iv) $\overline{g}(\cdot)$ is continuous;
v) $\sum_{m=0}^{+\infty} |\eta(m)| a_m < +\infty$ w.p. 1.
For the case at hand, it is possible to write in closed form
$$E_m Z(m) = E[Z(m) | \theta(0), Z(i), i < m] = E[Z(m) | \theta(m)]$$
$$= (\Psi - N) + (N - X_0) e^{-\lambda \theta(m) \Delta} \tag{14}$$
Hence, it follows that $E_m Z(m) = \overline{g}(\theta(m))$, where the function $\overline{g}(\theta) = (\Psi - N) + (N - X_0) e^{-\lambda \theta \Delta}$ is clearly continuous in $\theta$ and thus also measurable. Note that $\eta(m) \equiv 0$ for all $m$, i.e., an unbiased estimator of the number of copies is employed. This fact, together with the fact that $\theta$ is bounded with probability one, concludes the first part of the proof.
The second part of the proof shows that the solution of such ODE converges to $\beta$ as time diverges. From Eq. (3):
$$E[\overline{X}(m) | \theta(m)] = E[X_K | \theta(m)] = N - (N - X_0) e^{-\lambda \theta(m) \Delta} \tag{15}$$
so that Eq. (13) writes
$$\dot{\theta} = \Psi - N + (N - X_0) e^{-\lambda \theta \Delta}. \tag{16}$$
In order to show that the solution of (16) converges to $\beta$, it is sufficient to observe two facts. First, by inspection, $\theta^* = \beta$ is an equilibrium point of (16). Second, as $E[X_K | \theta]$ is strictly monotone in $\theta$, the equilibrium point is unique. In order to demonstrate the
stability of the estimator, the standard Lyapunov function \( V(\theta) = (\theta - \theta^*)^2 \) can be employed.
Then, it follows:
\[
\dot{V}(\theta) = 2(\theta - \theta^*) \cdot \dot{\theta} = 2 \left[ \theta + \frac{1}{\lambda \Delta} \log \left( \frac{N - \Psi}{N - X_0} \right) \right] \cdot \left[ \Psi - N + (N - X_0) e^{-\lambda \theta \Delta} \right] < 0 \quad \text{for } \theta \neq \theta^* \tag{17}
\]
Thus, the asymptotic global stability of \( \theta^* \) follows from the Lyapunov’s theorem. \( \square \)
**Remark 4.1.** The proof is based on the fact that sequence \( \theta(m) \) converges to some limit set of (16). Theorem 2.1 in [32] shows that \( \theta(m) \) converges to \( \theta(t_m) \), where \( \theta(t) \) is the solution of Eq. (13) and \( \{t_m\}_{m \geq 0} \) is the sequence defined as follows:
\[
t_0 = 0, \quad t_m = t_{m-1} + a_m, \quad \text{for } m > 0 \tag{18}
\]
**Remark 4.2.** Given the expressions obtained in the previous section, Eq. (16) can be solved, leading to:
\[
\theta(t) = \frac{1}{\lambda \Delta} \ln \left\{ e^{\lambda \Delta[(\Psi - N)t + \theta(0)]} + \frac{N - X_0}{N - \Psi} \left[ 1 - e^{\lambda \Delta(\Psi - N)t} \right] \right\} \tag{19}
\]
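A quick numerical check of this expression (with purely illustrative parameters) confirms that the trajectory starts at $\theta(0)$ and tends to $\beta$ as $t$ grows:

```python
import math

# Illustrative parameters only.
lam, Delta, N, X0, Psi, theta0 = 1e-3, 10.0, 50, 1, 20, 5.0
a = lam * Delta

def theta_t(t):
    """Closed-form ODE solution of Remark 4.2."""
    return math.log(math.exp(a * ((Psi - N) * t + theta0))
                    + (N - X0) / (N - Psi)
                    * (1.0 - math.exp(a * (Psi - N) * t))) / a

beta = -math.log((N - Psi) / (N - X0)) / a
print([round(theta_t(t), 2) for t in (0.0, 0.5, 1.0, 10.0)],
      "-> beta =", round(beta, 2))
# theta_t(0) equals theta0; the trajectory approaches beta ~ 49.06
```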
**Remark 4.3.** The description of the algorithm above assumes that the on-line estimation of the optimal control is obtained by using in Eq. (12) the estimate \( \overline{X}(m) \) obtained from real message transmissions. This constraint can be overcome by using virtual messages. Indeed, the stochastic approximation technique only requires the source to estimate the number of copies it would transmit during a time window of duration \( \tau \) if it had a message to transmit. The source can thus simply register the contacts and "virtually" apply the policy. Multiple instances of the virtual relaying protocol (with different policies) can be run in parallel, speeding up convergence to the optimal policy. If a real message has to be transmitted, the current policy estimate can be used.
#### 4.1 Optimal Choice of the Sequence \( \{a_m\} \)
The performance of the stochastic approximation algorithm (12) is known to depend heavily on the choice of the sequence \( \{a_m\} \) [33]. By comparing Eq. (12) and Eq. (18), a trade-off can be observed. In fact, sequences \( \{a_m\} \) that vanish more slowly guarantee faster convergence to the ODE trajectory, because the series \( \sum a_m \) diverges faster and then \( t_m \) in Eq. (18) is larger. At the same time, the corresponding estimate experiences increased noise effects, due to the larger step size in the iterate equation (12).
A standard choice is \(a_m = \frac{C}{m}\); the value of \(C\) that guarantees the smallest asymptotic variance is \(C = \left.\frac{\partial E[X(\tau)|\theta]}{\partial \theta}\right|_{\theta = \theta^*}\) ([32]). In general, however, \(C\) is unknown (as it depends on the unknown function \(E[X(\tau)|\theta]\)) and cannot be set a priori.
Another possible approach to improve the performance is to use techniques such as Polyak's averages [32, 34]. The idea is to use larger "jumps" to let the iterates converge faster, while using averages to smooth the actual estimates. In Polyak's method, a sequence \(a_m\) decaying more slowly than \(O(m^{-1})\) can be adopted, in particular one that satisfies the condition \(a_m / a_{m+1} = 1 + o(a_m)\); the averaged outcome is then used as the estimate of the optimal policy (i.e., as the control to be used on real messages)
\[
\Theta(m) = \frac{1}{m} \sum_{k=1}^{m} \theta(k).
\] (20)
It is worth noting that such a procedure does not add computational complexity, as the iterate \(\Theta(m)\) can be conveniently computed as
\[
\Theta(m + 1) = \Theta(m) + \frac{\theta(m + 1) - \Theta(m)}{m + 1}.
\] (21)
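In code, (21) is the familiar incremental-mean update; a two-line sketch:

```python
def polyak_update(Theta_m, theta_next, m):
    """Eq. (21): update the running average Theta(m) of the iterates
    theta(1..m) with the new iterate theta(m+1)."""
    return Theta_m + (theta_next - Theta_m) / (m + 1)
```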
Observe that the use of virtual messages described before naturally decouples the iterations of the basic stochastic approximation equation (12) from the control used on the real messages, i.e., the averaged value of (21).\(^5\)
In Section 7, Polyak's averaging technique will be shown to be advantageous in terms of convergence time.
#### 4.2 Constant Step Approximations
In a real DTN implementation, one appealing feature of the proposed solution is the ability to track changing conditions. This is attained by a specific stochastic approximation technique adopting constant step sizes, i.e., by assuming \(a_m = \varepsilon\) for all \(m\).
In this way, the system keeps adapting its behaviour in an open-ended fashion. Even in this case, results on convergence can be derived, although in a weaker form:
\(^5\)If the averaged value were simply plugged into the primary control, the coupling would cancel the advantage of averaging [32].
**Theorem 4.2.** Consider iterates of the form:
\[
\theta^\varepsilon(m + 1) = \Pi_H \left( \theta^\varepsilon(m) + \varepsilon (\Psi - \overline{X}^\varepsilon(m)) \right).
\] (22)
For any $\delta > 0$, define $N_\delta(\theta^*) = \{ x \in \mathbb{R} : |x - \theta^*| < \delta \}$. As $\varepsilon \to 0$, the sample paths $\theta^\varepsilon(m)$ converge in distribution to elements of $N_\delta(\theta^*)$. The fraction of time spent by the process in $N_\delta(\theta^*)$ during $[0, T]$ goes to 1 as the time horizon $T$ diverges.
**Proof.** The convergence is a consequence of [32, Thm. 2.1, p. 248]: in order to apply that result, a set of sufficient conditions needs to be verified. The first such condition is verified by construction, since the constraint set $[\theta_{\min}, \theta_{\max}]$ is a rectangle (condition A4.3.1 in [32, Thm. 2.1, p. 248]).
In addition, having defined $Y^\varepsilon(i) = \Psi - \overline{X}^\varepsilon(i)$, the following conditions have to be verified:
i) $\{ Y^\varepsilon(i) \}$ are uniformly integrable;
ii) There exist measurable functions $g_m^\varepsilon(\cdot)$ and r.v. $\beta^\varepsilon(m)$ such that $\mathbb{E}_m Y^\varepsilon(m) = \mathbb{E}[Y^\varepsilon(m)|\theta(0), Y^\varepsilon(i), i < m] = g_m^\varepsilon(\theta) + \beta^\varepsilon(m)$;
iii) $\lim_{m,n,\varepsilon} \sum_{r=n}^{n+m-1} \mathbb{E}_m \beta^\varepsilon(r) = 0$;
iv) There exists a continuous function $g(\cdot)$ such that, for each $\theta \in [\theta_{\min}, \theta_{\max}]$,
\[
\lim_{m,n,\varepsilon} \sum_{r=n}^{n+m-1} \left( g_r^\varepsilon(\theta) - g(\theta) \right) = 0,
\]
where the meaning of the limit appearing in iii) and iv) is that $m, n \to \infty$ and $\varepsilon \to 0$, in any way possible.
Indeed, i) holds because $Y^\varepsilon(i)$ has finite support. As in Thm. 4.1, ii) derives from the closed form
\[
\mathbb{E}_m Y^\varepsilon(m) = (\Psi - N) + (N - X_0) e^{-\lambda \theta^\varepsilon(m)\Delta}
\] (23)
so that $g_m^\varepsilon(\theta) = (\Psi - N) + (N - X_0) e^{-\lambda \theta \Delta}$; also, as remarked in Thm. 4.1, $\beta^\varepsilon(m) \equiv 0$ for all $m$, i.e., an unbiased estimator of the number of copies is employed; thus iii) follows, and iv) is verified by defining $g(\theta)$ as in Thm. 4.1. \qed
### 5 The Multiclass Case
In this section the model is extended to the case of several competing DTNs. The extension draws on results from weakly coupled stochastic games [20].
#### 5.1 Game model
Consider a network that contains $M$ classes of mobiles, with $N_m$ mobile nodes in class $m$. In each class there is a source, and a mobile of class $i$ stores and forwards only messages originating from the source of that class. Nodes adopt two-hop routing. All sources generate messages for the same set of intended destinations. (This means that they generate messages to the same node in the address-centric case and messages with the same metadata in the data-centric case.) E.g., several competing applications may use different sets of relays to diffuse a specific content, such as advertisement messages. In particular, such a model is relevant in the case when each DTN is used to deliver a file to a remote host, and destinations represent a common set of access points acting as shared gateways.
Different classes interact in the dissemination process only when they attempt to transmit to the destination node.\footnote{This is the case, e.g., when sources are scattered over a large area and collisions of sources transmitting messages to relays are negligible.} Specifically, collisions occur when two or more nodes from different classes attempt to transmit to the destination at the same time, e.g., when multiple DTNs leverage the same gateway node.
Two cases are considered:
1. When at least two nodes from different classes attempt to transmit to the destination, the destination does not successfully decode any message; this is referred to as the collision model.
2. An arbitration procedure is coherently applied to all nodes, so that when many nodes have the possibility to transmit a message to the destination, one of them (drawn at random) succeeds; this is referred to as the arbitration model.
Concerning the traffic pattern, the model applies to the following two cases:
1. After a message is delivered or time $\tau$ has elapsed since its generation, the source can stay idle for a random amount of time after which a new message will be generated.
2. Sources synchronously generate messages with lifetime equal to $\tau$. A new message is generated every $\tau$ time units.
The problem falls into a category of stochastic games called cost-coupled stochastic games, first introduced in [20]. In such games, each player controls an independent Markov chain and knows only the state of that Markov chain. The interaction between the players is due to their utilities or costs, which
depend on the states and actions of all players. In the framework proposed hereafter, each source can infect relays of its own class only, independently from the other sources. In turn, coupling is due to possible collisions when transmitting to the destination. The potential occurrence of collisions affects the delivery probability and therefore the optimal strategy to be used.
Let $X_n^{(i)}$ be the number of relays of class $i$ that are infected at time $n\Delta$. The following discrete time stochastic game is defined:
- **Players.** The $M$ classes of mobiles.
- **Actions.** If at time $n\Delta$ the class-$i$ source encounters a mobile, it attempts transmission with probability $u_n^{(i)}$. $\mu^{(i)}$ is the time-dependent policy of the class-$i$ source. In this game-theoretical framework $\mu^{(i)}$ denotes also the strategy of class $i$, while $\mu^{(-i)}$ denotes the set of strategies adopted by the other classes.
- **Performance index.** The utility of each player/class is the probability of successful delivery, $F_D^{(i)}(K\Delta)$. Each class has also a constraint on the expected number of infected nodes, i.e., $E[X_K^{(i)}] \leq \Psi^{(i)}$.
- **Information.** Source $i$ is assumed to know only $X_n^{(i)}$, and not $X_n^{(j)}$ for $j \neq i$. The precise knowledge of $X_n^{(i)}$ is possible since source $i$ knows exactly how many mobiles it has transmitted the packet to. Each source is assumed not to know whether the packet was delivered to the destination by time $\tau$ or not.
Let $Y_n^{(i)}$ be the number of infected nodes of class $i$ that attempt to transmit to the destination during the $n$-th time slot ($0 \leq Y_n^{(i)} \leq X_n^{(i)}$). Also, $S_n^{(-i)} = \sum_{j \neq i} Y_n^{(j)}$ is the total number of infected nodes of classes different from class $i$ that attempt a transmission to the destination during the $n$-th time slot.
#### 5.2 Nash equilibrium
**Proposition 5.1.** For both arbitration procedures considered, the complementary cumulative distribution function of the delivery delay at time $n\Delta$, $G^{(i)}(n)$, $n = 1, 2, \ldots$, is decreasing in the control action $u_r^{(i)}$, $r = 0, 1, \ldots, n - 2$.
**Proof.** Let event $A = \{\text{no class } i \text{ node attempts delivery to the destination by } n\Delta\}$ and $B = A^C \cap \{\text{all class } i \text{ delivery attempts to the destination by } n\Delta \text{ fail}\}$. Clearly
$$G^{(i)}(n) = \Pr\{A \cup B\} = G_1^{(i)}(n) + G_2^{(i)}(n)$$
First, $G_1^{(i)}(n)$ is strictly decreasing in the control action because expression (6) applies to the event $\{Y_r^{(i)} = 0 \text{ for } r = 0, 1, \ldots, n - 1\}$. It remains to be proved that $G_2^{(i)}(n)$ is non-increasing.
At each slot $n$, all the $Y_n^{(i)}$ class-$i$ nodes attempting to deliver to the destination will fail with probability
$$f(Y_n^{(i)}, S_n^{(-i)}) = \begin{cases}
\Pr \left\{ S_n^{(-i)} > 0 \right\} & \text{collision model} \\
\frac{S_n^{(-i)}}{Y_n^{(i)} + S_n^{(-i)}} & \text{arbitration model}
\end{cases} \quad (24)$$
It is immediate to observe that the r.v. $f(Y_n^{(i)}, S_n^{(-i)})$ is non-increasing in the variable $Y_n^{(i)}$.
Hence, the probability of failure over all the slots is obtained by taking the expectation over the sample paths:
$$G_2^{(i)}(n) = E\left[ \prod_{h=0}^{n-1} f\left(Y_h^{(i)}, S_h^{(-i)}\right) \right]$$
Using the same notation adopted in Thm. 3.1, policy $\mu'$ is such that $u'_k > u_k$ for a certain index $k \leq n$: clearly, $Y_n^{\prime(i)} >_{st} Y_n^{(i)}$ (in fact, $Y_n^{(i)}$ is a Poisson random variable with intensity $\lambda \Delta X_n^{(i)}$, and $X_n^{\prime(i)} >_{st} X_n^{(i)}$).
Because $f(Y_n^{(i)}, S_n^{(-i)})$ is non-increasing in the variable $Y_n^{(i)}$, it is immediate that $f(Y_n^{\prime(i)}, S_n^{(-i)}) <_{st} f(Y_n^{(i)}, S_n^{(-i)})$ for both the collision and the arbitration model, so that $G_2^{\prime(i)}(n) \leq G_2^{(i)}(n)$, concluding the proof. □
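As an illustration of (24), the following sketch computes the conditional per-slot failure probability under the two interference models; treating slots with $Y_n^{(i)} = 0$ as trivially "failed" (no success is possible) is a convention of this sketch, not of the paper:

```python
def failure_prob(y_i, s_other, model):
    """Conditional probability that all y_i class-i delivery attempts in a
    slot fail, given s_other attempts from other classes (cf. Eq. (24))."""
    if y_i == 0:
        return 1.0                     # no class-i attempt: no success possible
    if model == "collision":
        return 1.0 if s_other > 0 else 0.0   # any foreign attempt destroys the slot
    if model == "arbitration":
        return s_other / (y_i + s_other)     # a uniformly drawn attempt succeeds
    raise ValueError(f"unknown model: {model}")

print(failure_prob(2, 0, "collision"))    # 0.0: no interference, delivery succeeds
print(failure_prob(2, 3, "arbitration"))  # 0.6: 3 of the 5 attempts are foreign
```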
**Theorem 5.1.** If for all $M$ classes the complementary cumulative distribution function at time $(n+k)\Delta$, $G^{(i)}(n+k)$, is decreasing in the control action $u_{k-1}^{(i)}$ for $k = 1, 2, \ldots$, then the optimal threshold policy for the single-class case is also the best response to all the possible $\mu^{(-i)}$.
**Proof.** The proof follows the same steps as that of Theorem 3.1: given a non-threshold policy $\mu^{(i)}$, it is possible to build in the same way a new policy $\hat{\mu}^{(i)}$, which can be shown, by means of Prop. 5.1, to achieve better performance than $\mu^{(i)}$. □
From the theorem above the following result follows immediately:
**Corollary 5.1.** The considered game has a unique Nash equilibrium. This Nash equilibrium is obtained when each class adopts its optimal single-class threshold policy.
**Proof.** The optimal threshold policies are mutual best responses, so they form a Nash equilibrium. Moreover, any different set of strategies cannot be a Nash equilibrium, because at least one class could improve its performance by adopting the optimal single-class threshold policy. □
**Remark.** In general, in order to derive the objective function, node $i$ should know the statistics of $X_n^{(j)}$, $j \neq i$ (but not its actual value). In the present case, however, as the best response does not depend on the statistics of the number of infected nodes in other classes, such knowledge is not required to derive the best policy to be employed.
### 6 The Impact of Mobility Patterns
The results presented in the previous sections have been derived under the assumption that intermeeting times among pairs of nodes can be suitably described as sequences of independent and identically distributed exponential random variables. While it is known that the exponential distribution fits rather well some synthetic mobility patterns such as random waypoint [7], it represents an idealization for most real-world situations. The objective hereafter is hence to discuss the applicability of the results derived so far in a more general setting.
The exact expression of the delay CDF has been obtained in Sec. 3 under the exponential assumption. In principle, however, the optimality of threshold policies does not require such an assumption. At the heart of the proof of Theorem 3.1, in fact, lies the monotonicity property of the number of message copies over time, as derived in Corollary 3.1. The framework can therefore account for non-exponential meeting times, provided a monotonicity assumption holds, as done in [35]. Such an approach can be used to extend the results to situations in which the intermeeting time distribution is not exponential (but intermeeting times between any pair of nodes are still sequences of i.i.d. random variables). It is interesting to remark that the same monotonicity assumption is sufficient to apply the results derived in terms of stochastic approximation (Sec. 4) and for the multi-class case (Sec. 5). By relaxing the strict monotonicity, as done in [35], a threshold policy is still optimal under certain assumptions. Intuitively, in such cases a larger number of copies does not necessarily lead to a strictly higher probability of reaching the destination, as copies may be disseminated to nodes which never get in contact with the destination node. Such a case covers situations in which not all pairs of nodes meet and/or nodes meet with different intensities. However, in the absence of information on the global meeting pattern, our conjecture is that threshold policies can still be proved to be a solution of the optimization problem considered.
Concerning the i.i.d. assumption on intermeeting times, this is indeed rather idealistic with respect to real-world DTN deployments. In the real world, (i) pairs of nodes may mutually meet at different rates and (ii) the sequence of intermeeting times between a given pair of nodes may present correlation (e.g., if node $A$ meets node $B$ at a given time, it is highly likely that they will meet again within a short time frame). The kind of policies considered in this work are meant to model situations in which it is not possible (or impractical) to keep track of the IDs of the nodes encountered and of the last time two nodes met. It is possible to conjecture that, thanks to the monotonicity argument, the proposed framework provides an optimal policy (though it may not be the only one) also in the case of non-i.i.d. intermeeting times, provided that no state information on either the IDs of meeting nodes or the time of the last encounter is maintained, and no knowledge of the meeting process is available \textit{a priori}.
Throughout the paper, it has been assumed that contacts are long enough that all messages backlogged at a node can be transferred to the other node involved in the contact. In real-world settings, where the duration of contacts is finite, this may not hold if the system gets congested (this depends on a number of application-dependent factors related to mobility, density of nodes, traffic pattern, etc.). In general, it is possible to optimize the order in which messages are exchanged during a contact so as to maximize the delivery probability. As an example, one could send first the messages that are closest to timeout expiration, or use similar age-based scheduling and buffer management mechanisms [16]. Even in the case when node pairs meet with different intensities, this could be leveraged to introduce efficient scheduling policies. The presented results can be extended to the case of "blind" scheduling mechanisms (i.e., not based on age and not accounting for meeting intensity): again, in such cases, we conjecture that the optimality of threshold policies for forwarding is preserved.
Many delay-tolerant network deployments do not present a time-stationary meeting pattern: mobility in many cases tends to follow a periodic pattern. This applies, e.g., to situations in which nodes are mobile phones and other personal computing devices, which are carried around by users in their daily routine. This raises issues related to (i) the optimality of threshold policies and (ii) the convergence of the stochastic approximation method. As concerns the first point, intuitively, compliance with a monotonicity assumption, as discussed above for the non-exponential meeting time case, is required; this is expected to hold for most deployments. Concerning point (ii), this clearly depends on the timescale over which the stochastic properties of the meeting pattern change. Some experimental results using real-world traces are presented in [35]; the results therein suggest that convergence of the stochastic approximation should hold for a large share of DTN deployments.
### 7 Numerical Results
This section describes the numerical validation of the results derived on optimal control policies and on the convergence of the proposed stochastic approximation algorithm. The simulation results have been obtained over a set of pre-recorded contact traces by emulating the message forwarding policies described before.
#### 7.1 Stochastic Approximations
When considering stochastic approximation methods, important performance indicators are the time needed for the algorithm to converge and the accuracy of the identified optimal operating point.
The first set of tests is aimed therefore at evaluating the dynamics of the stochastic approximation process. A simplified setting was employed, in which any pair of nodes meets according to i.i.d. Poisson processes with intensity $1.0453 \cdot 10^{-5} \text{ s}^{-1}$; also, $N = 200$, $u_{\min} = 0$, $u_{\max} = 1$, $\Psi = 20$, $\Delta = 10 \text{ s}$ and $\tau = 20000 \text{ s}$.
The source node performs at each round a sample measurement of $\overline{X}_m$, based on 30 different estimates of the number of infected nodes at time $\tau$. At the end of each round, the control policy in use is updated according to (12). Unless otherwise specified, results in this section have been obtained considering the scaling sequence $a_m = 1/(10 \cdot m)$.
Fig. 1 illustrates a specific run for the case when the source estimates the parameter $p^*$ for the best static policy. The figure shows that the estimates $\overline{X}_m$ evaluated by the source are noisy, due to the limited number of samples per estimate.
Nevertheless, the convergence of the algorithm is apparent from the dynamics of the control $p$, i.e. the static forwarding probability, which stabilizes after about 20 rounds around the optimal value $p^* \approx 0.45$ (the horizontal line). For the sake of completeness, the time evolution of the message delivery probability is also reported (Fig. 1b).
The same experiment was repeated for the dynamic control case, in which the source node tries to estimate the optimal threshold $h^* \approx 911$ slots. The dynamics of the estimated parameter is depicted in Fig. 2c. It is possible to observe that the convergence time in this case is similar to that measured in the case of static policies. This is due to the fact that in both cases the stochastic approximation algorithm estimates the same parameter $\beta$: even if the distribution of $\overline{X}_m$ (but not its expected value) is different for static and threshold policies, the sequence of estimates $\theta_m$ converges to the solution of the same ODE. Indeed, as mentioned in Sec. 4, the ODE trajectory provides more information than the "simple" asymptotic stability of the control variable: the sample trajectories of the control estimates follow a shifted ODE dynamics with probability one. In particular, in Fig. 3 the dynamics of the parameter $p$ is depicted for the static case against a properly rescaled version of the ODE solution. Sample trajectories have been averaged over 10 runs of the algorithm. Also, in order to make the phenomenon visible, as described in [32], the dynamics are conveniently rescaled according to $t_m := \sum_{i=1}^{m} a_i$. It can be observed that, after an initial transient phase, the trajectory of the control mimics the original ODE; the maximum and minimum values of the trajectories have been superimposed on the graph for the sake of completeness. This pictorial representation confirms that the convergence speed of the algorithm is basically dictated by the dynamics of the related ODE solutions.
##### 7.1.1 Polyak's averages
As mentioned in Sec. 4, a slowly decaying scaling sequence $a_m$ yields fast convergence to the ODE dynamics and hence to the optimal control policy. The price to pay is weaker noise rejection, with potentially larger oscillations. Here, the benefit of the Polyak-like averaging technique is shown: a larger sequence is used, $a_m = 1/(10 \cdot m^{2/3})$, from which faster convergence but weaker noise rejection capabilities are expected.
In Fig. 4 the result of a run of the stochastic approximation procedure is reported. The plain stochastic estimate of $\theta(m)$, based on the chosen $a_m$ coefficients, is superimposed on the output obtained using the averaging procedure as in (20). It is possible to observe the smoothing performed by the Polyak averaging over the estimated optimal control values, both in the case of static control and in the case of threshold policies. Even though this is a particular case, this result shows, as anticipated in Sec. 4, that an interesting tradeoff exists: it is possible to increase the speed of convergence of the algorithm by means of faster sequences, i.e., by approaching faster the tail of the ODE dynamics, while reducing at the same time the estimation noise by averaging.
##### 7.1.2 Real-World Mobility Patterns
A core aspect, as described in Sec. 6, is the impact of real-world mobility patterns on the convergence properties of the proposed stochastic approximation scheme. It is possible to perform numerical experiments using real-world sample traces obtained
Figure 4: Algorithm employing Polyak’s averages applied to a) static and b) threshold forwarding policies.
Figure 5: RollerNet trace: performance of the stochastic approximation algorithm a) per-round success probability (running average) b) average number of message copies c) dynamics of the control output of the algorithm.
by recording the encounter patterns of wireless devices. Those described hereafter are available online for experimentation.
*Rollernet trace*: the trace is described in [36] and is available in CRAWDAD. The trace was recorded on August 20, 2006 during a rollerblading tour which lasted three hours, composed of two sessions of 80 minutes; about 2500 people participated in the tour. The trace was collected using iMote contact loggers provided to 62 volunteers; in practice, Bluetooth interfaces were used to sample the encounters that iMotes had with other devices, i.e., possibly different iMotes or cell phones in radio range. In the described experiments, a subtrace involving contacts among iMotes only has been employed, restricting the DTN network to 62 devices.
*Office trace*: the second dataset is the one used in [37]; this dataset has been obtained by monitoring the contacts of 21 employees who volunteered to participate in the measurement campaign. The employees occupy different roles within the organization, and their offices are located on different floors of a research center, scattered over 4 floors of a single building. The experiment lasted 4 weeks; during this period the employees carried a mobile telephone running a Java application. The application used Bluetooth connectivity to periodically trigger (every 60 seconds) Bluetooth node discovery. Upon detecting another device, its Bluetooth address, together with the current time-stamp, was saved in the permanent storage of the device for later processing.
*Campus trace*: the last trace considered comes from the Student-Net project at the University of Toronto [38]. As in the case of the office trace, students were equipped with Bluetooth-enabled hand-held devices, capable of generating inquiries and tracing any pairwise contact between users. The inquiry period was set so as to preserve an 8–10 hour battery lifetime, resulting in a 16-second scan period. In particular, the trace used here involved 21 graduate students over a duration of two and a half weeks.
*Results*: for real-world trace files, no optimality can be claimed with respect to the threshold policies proposed before. In fact, the traces are not stationary-ergodic, as assumed in Thm. 4.1.
Nevertheless, by design, the algorithm drives the system to match the constraint on the average number of released copies. As shown in Fig. 5a and b, under non-stationary-ergodic contact traces, the intermeeting contact process is subject to significant fluctuations in the intermeeting intensities, as described in [36]. In this setting, the chosen horizon is $\tau = 300$ s and $\Psi = 20$ copies, whereas the time slot considered is $\Delta = 10$ s.
We also tested Polyak averages, but they do not seem to offer a substantial advantage here. This is due to the fact that a very fine-grained adaptation step was adopted in order to reduce the bias in the estimation of the control; in turn, this makes it possible to comply with the constraint on the number of released copies.
The same analysis is repeated in the case of the office trace, as depicted in Fig. 6. In this case, $\Psi = 7$ and $\tau = 5000$ s. The dynamics of the control variable $\theta_n$ is described in Fig. 6c. It is possible to observe again relatively large oscillations due to the non-stationary contact pattern of this trace. However, even in this case, the algorithm tends to stabilize at a working point that is quite far from the maximum possible threshold, while still tracking the constraint on the number of message copies. As before, the effect of Polyak averages is to produce a smoothed version of the control, as seen in Fig. 6c. Overall, the effect of averaging is to slow down the convergence of the algorithm and to introduce a bias in the estimate of the control.
Finally, in the last run, in Fig. 7, the case of the *campus* trace is reported. The run is obtained for $\Psi = 7$, $\tau = 50000$ s. Standard stochastic approximations and Polyak's averages are much closer to each other than with the previous traces. This can be ascribed to the structure of the trace: in the case of the *office* trace, contacts are distributed uniformly among users, hence the dynamics of the number of copies generated is fast. In the *campus* trace, most contacts are concentrated over a small subset of user ID pairs, while other nodes meet rarely. The resulting working point for the control threshold is very large, as expected, since a large fraction of the considered interval of length $\tau$ is required in order to meet 7 nodes out of 20. The very slow dynamics of this trace makes the convergence slowdown introduced by the averaged version of the algorithm much less apparent compared to the previous cases.
The indication drawn from the simulation experiments is that, even in the case of non-stationary traces, the algorithm converges to an effective solution. In the case of fast time-varying traces, the need to track the working point at a fast pace discourages the usage of Polyak averages.
#### 7.2 Multiclass Case
In the game-theoretical framework presented in Sec. 5, the result on the existence of a Nash equilibrium poses the question of the Pareto optimality of such an equilibrium point. The answer is not straightforward, since the success probability depends on the number of nodes involved, on the number of classes and on the underlying encounter process.
The multiclass case has been simulated considering two classes, with $N_1 = N_2 = 5, 6, \ldots, 12$ nodes. Each pair of nodes meets with intensity $\lambda = 2.61 \cdot 10^{-2}$; meetings among nodes follow i.i.d. Poisson processes. The one-stage game has been iterated over 800 rounds in order to measure the impact of the different strategies in the collision model. First, the performance attained, namely the probability of successful delivery by $\tau$, has been measured in the case of the first interference model introduced in Sec. 5, whereby a collision between two nodes attempting transmission to the destination results in a failure for both nodes. The Nash equilibrium policy and the corresponding best static policy have been evaluated: results are reported in Fig. 8 using 95% confidence intervals. It can be noted that the social outcome can be improved if each class adopts the best static policy. It should be observed that this is not an equilibrium, because a class would find it more convenient to switch to its optimal threshold policy. But it provides numerical evidence that the Nash equilibrium is not Pareto optimal.

In order to better understand the dependency on the threshold value in the multi-class case, some further simulations have been performed. For this case, the number of nodes per class has been fixed at $N_1 = N_2 = 10$ and the expected number of message copies at $\Psi = 9$. Mobility of nodes is identical to the previous experiments, but with varying communication range in order to change the intensity of meetings. For each setting, the threshold at the Nash equilibrium, $\theta^*$, is computed, and the successful delivery probability is calculated as a function of the threshold value for $\theta \leq \theta^*$. Values larger than $\theta^*$ need not be considered, as they violate the constraint on the expected number of copies.
As a result, three regimes are identified. In the first one (sparse regime), the interference created by the other class is negligible, so that the success probability increases monotonically with the threshold value. As a direct consequence, the Nash equilibrium actually represents the optimal choice in terms of threshold value. This case is illustrated in Fig. 9a, where $R = 1$ m was used. In the second one
(dense regime), interference is the driving factor. In this case the optimal choice is actually not to make copies at all (i.e., to set the threshold to zero), and to rely only on the ability of the source to directly meet the destination. This is shown in Fig. 9b for $R = 15$ m, where the success probability is monotonically decreasing in the threshold value used. In this case, clearly, the Nash equilibrium does not represent an efficient operating point for the system.
In the third regime (medium density), the two effects are present at the same time, so that there exists an optimal non-zero value of the threshold which is smaller than the value at the Nash equilibrium. This case is shown in Fig. 9c for $R = 8$ m.
### 8 Conclusions
In this paper a discrete-time model for the control of mobile ad hoc DTNs was introduced, and closed-form expressions for optimal static and threshold forwarding policies under two-hop routing have been derived. Such policies depend on network parameters, such as the number of nodes or the nodes' meeting rates, which may be unknown a priori.
Using the theory of stochastic approximations, an algorithm has been designed that enables all nodes in the DTN to tune their forwarding policies independently and optimally, adapting them to the current operating conditions of the system. The algorithm requires neither message exchanges nor an explicit estimation of network parameters.
These features are appealing from the network design standpoint: indeed similar techniques can be applied to a wide set of problems in DTNs, a type of network where the estimation of global parameters is extremely challenging due to the absence of persistent connectivity.
The proposed model has been validated via numerical simulations, using both synthetic mobility traces and real-world ones. In particular, the scheme is able to adjust the forwarding control online so as to attain a target number of generated message copies. Overall, this is an effective scheme that can drive the system to a desired operating point, and it applies when the forwarding control is confined to run on board the source nodes.
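As a rough illustration of this idea (a generic Robbins-Monro recursion, not the paper's exact algorithm), a source can steer its forwarding threshold toward a target copy budget $\Psi$ using only a quantity it observes locally, namely the number of copies it generated for each message; `tune_threshold` and its parameters are names invented for this sketch:

```python
import math
import random

def tune_threshold(psi=9, n=10, lam=2.61e-2, tau=800, messages=400,
                   gain=1.0, seed=2):
    """Generic stochastic-approximation sketch: after each message, nudge
    the threshold so that the number of generated copies drifts toward psi."""
    rng = random.Random(seed)
    p = 1.0 - math.exp(-lam)
    theta = tau / 2.0                  # arbitrary initial threshold
    for k in range(1, messages + 1):
        copies = 1                     # the source's own copy
        for t in range(tau):
            if t < theta:              # forward to relays met before theta
                copies += sum(rng.random() < p for _ in range(n - copies))
        # Robbins-Monro step with diminishing gain; clip theta to [0, tau].
        theta = min(max(theta + (gain / k) * (psi - copies), 0.0), float(tau))
    return theta
```

Note that neither $\lambda$ nor the number of nodes appears in the update step itself; in the sketch they only drive the simulated environment.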
Finally, the model has been applied to the case of competing DTNs. To this end, a class of weakly coupled Markov games has been introduced in which the players are DTNs competing for resources. The coupling may occur because of interference when DTNs attempt to deliver messages to a common gateway node. In such games, a unique Nash equilibrium exists in which each node applies the optimal policy determined for the single-DTN case.
Somatoform Dissociation: Major Symptoms of Dissociative Disorders
Ellert R. S. Nijenhuis, PhD
ABSTRACT. In most of the recent scientific and clinical literature, dissociation has been equated with dissociative amnesia, depersonalization, derealization, and fragmentation of identity. However, according to Pierre Janet and several World War I psychiatrists, dissociation also pertains to a lack of integration of somatoform components of experience, reactions, and functions. Some clinical observations and contemporary studies have supported this view. Somatoform dissociation, which can be measured with the Somatoform Dissociation Questionnaire (SDQ-20), is highly characteristic of dissociative disorder patients, and a core feature in many patients with somatoform disorders and in a subgroup of patients with eating disorders. It is strongly associated with reported trauma among psychiatric patients and patients with chronic pelvic pain presenting in medical healthcare settings. Motor inhibitions and anesthesia/analgesia are somatoform dissociative symptoms that are similar to animal defensive reactions to major threat and injury. Among a wider range of somatoform dissociative symptoms, these particular symptoms are highly characteristic of patients with dissociative disorders. The empirical findings reviewed in this article should have implications for the contemporary conceptualization and definition of dissociation, as well as the categorization of somatoform disorders in a future version of the DSM.
KEYWORDS. Dissociation, somatoform, trauma
Ellert R. S. Nijenhuis is affiliated with Cats-Polm Institute, The Netherlands. Address correspondence to: Ellert R. S. Nijenhuis, Cats-Polm Institute, Dobbenwal 90, 9407 AH Assen, The Netherlands.
The author wishes to thank Kathy Steele for her assistance in preparing this article.
Journal of Trauma & Dissociation, Vol. 1(4) 2000
© 2000 by The Haworth Press, Inc. All rights reserved.
What are the major symptoms of the dissociative disorders? According to the *Diagnostic and Statistical Manual for Mental Disorders, Fourth Edition* (DSM-IV; American Psychiatric Association, 1994), the essential feature of dissociation is a disruption of the normal integrative functions of consciousness, memory, identity, and perception of the environment. Thus, the current standard for the assessment of dissociative disorders, the *Structured Clinical Interview for DSM-IV Dissociative Disorders* (SCID-D; Steinberg, 1994), includes four symptom clusters: dissociative amnesia, depersonalization, derealization, and identity confusion/identity fragmentation. Well-known self-report questionnaires that evaluate the severity of dissociation, such as the *Dissociative Experiences Scale* (DES; Bernstein & Putnam, 1986) and the *Dissociation Questionnaire* (DIS-Q; Vanderlinden, 1993), predominantly encompass largely similar, empirically derived factors. As these clusters and factors involve manifestations of dissociation of psychological variables (dissociative amnesia, depersonalization, derealization, identity confusion, identity fragmentation), we have proposed to name these phenomena *psychological dissociation* (Nijenhuis, Spinhoven, Van Dyck, Van der Hart & Vanderlinden, 1996).
Do these symptom clusters encompass all major symptoms of dissociative disorders? Does dissociation indeed only manifest in psychological variables, leaving the body unaffected? In the aforementioned descriptive definitions and instruments that evaluate dissociation and dissociative disorders, that would seem to be the case. This impression is amplified when one studies the DSM-IV criteria for the dissociative disorders. The only diagnostic criteria that refer to the body can be found under depersonalization disorder, where it is stated that the person can feel detached from, and as if one is an outside observer of, one’s body or parts of the body. It is also stated that dissociative disorders may involve a disruption of the usually integrated function of perception of the environment, and the diagnostic features of depersonalization disorder include various types of sensory anesthesia. Yet patients with dissociative disorders report many somatoform symptoms, and many meet the DSM-IV criteria of somatization disorder or conversion disorder (Pribor, Yutzy, Dean & Wetzel, 1993; Ross, Heber, Norton & Anderson, 1989; Saxe et al., 1994). On the other hand, patients with somatization disorder often have amnesia (Othmer & De Souza, 1985). Although somatoform disorders are not conceptualized as dissociative disorders in the DSM-IV, the strong correlation between dissociative and somatoform disorders (see also Darves-Bornoz, 1997) indicates that dissociation and so-called conversion symptoms, and certain somatization symptoms, may be manifestations of a single underlying principle.
Another indication of the existence of *somatoform* dissociation, a concept with origins in 19th century French psychiatry, is found in the major symptoms of hysteria, a cluster of disorders involving both mind and body that prominently included the current dissociative disorders. During that time many authors focused, exclusively or primarily, on the somatoform manifestations of hysteria (e.g., Briquet, 1859). As Van der Hart and colleagues (Van der Hart, Van Dijke, Van Son, & Steele, 2000, this issue) have clearly demonstrated, somatoform dissociation characterized many traumatized World War I soldiers as well. Recent clinical observations also indicate that dissociation can manifest in somatoform ways (Cardeña, 1994; Kihlstrom, 1994; Nemiah, 1991; Van der Hart & Op den Velde, 1995). Furthermore, the *International Classification of Diseases, Tenth Edition* (ICD-10; World Health Organization, 1992) includes somatoform dissociation within dissociative disorders of movement and sensation: a category listed as “conversion disorder” in the DSM-IV. Confusion exists within both classificatory systems as well. For example, whereas the ICD-10 includes the diagnostic category of dissociative anesthesia, the ICD-10 and the DSM-IV both include symptoms of anesthesia, among many other symptoms, under somatization disorder. Pain symptoms and sexual dysfunctions are not described as conversion symptoms or dissociative symptoms, yet according to clinical observation they can represent definitive dissociative phenomena. For instance, localized pain may depend on the reactivation of a traumatic memory that was previously dissociated and manifests as physical pain in a particular body part. In fact, traumatic memories primarily include a range of sensorimotor reactions (Nijenhuis, Van Engen, Kusters & Van der Hart, in press; Van der Hart et al., 2000, this issue; Van der Kolk & Fisler, 1995).
In order to avoid confusion, it is important to stress that the labels “psychological dissociation” and “somatoform dissociation” should not be taken to mean that only psychological dissociation is a mental phenomenon. Both descriptors refer to the ways in which dissociative symptoms may manifest, not to their presumed cause. Somatoform dissociation designates dissociative symptoms that phenomenologically involve the body, and psychological dissociative symptoms are those that phenomenologically involve psychological variables. The descriptor “somatoform” indicates that the physical symptoms resemble, but cannot be explained by, a medical condition or the direct effects of a substance. In the term “somatoform dissociation,” “dissociation” describes the existence of a disruption of the normal integrative mental functions. Thus “somatoform dissociation” denotes phenomena that are manifestations of a lack of integration of somatoform experiences, reactions, and functions.
This article will review recent empirical studies of somatoform dissociation. These studies investigated the extent to which somatoform dissociation: (1) can be measured, (2) correlates with psychological dissociation, (3) belongs to the major symptoms of dissociative disorders, (4) discriminates among various diagnostic categories, (5) depends on culture, (6) reflects general psychopathology, (7) depends on suggestion, (8) is characteristic of dissociative disorders, and can be used in the screening for these disorders, (9) is associated with (reported) trauma among psychiatric patients and patients presenting in medical health care settings, and (10) relates to animal defense-like reactions. The review of these studies is preceded by brief descriptions of Janet’s view on hysteria and Myers’ (1940) view on “shell shock,” or war-related traumatization.
**JANET’S CLASSIFICATION OF DISSOCIATIVE SYMPTOMS**
Janet’s clinical observations suggested that hysteria involves psychological and somatoform functions and reactions (Janet, 1889, 1893, 1901/1977). In his view, mind and body were inseparable; thus his classification of the symptoms of hysteria does not follow a mind-body distinction. He maintained that apart from the permanent symptoms, termed “mental stigmata,” that mark all cases of hysteria, there are incidental symptoms, that is, symptoms that depend on each case. Janet referred to these intermittent and variable symptoms as “mental accidents” (Van der Hart & Friedman, 1989).
Janet observed that *mental stigmata* include functional losses: partial or complete loss of knowledge (amnesia); loss of sensation, such as loss of tactile sensation, kinesthesia, smell, taste, hearing, vision, and pain sensitivity (analgesia); and loss of motor control (inability to move or speak). We have referred to mental stigmata as negative symptoms (Nijenhuis & Van der Hart, 1999).
Janet defined *mental accidents* as incidental symptoms, i.e., symptoms that vary by case and are often more transitory in nature. In our view, mental accidents represent positive symptoms because they involve additions, i.e., mental phenomena that should have been integrated in the personality, but because of integrative failure become dissociated material that intrudes into consciousness at times. Examples include reexperiencing more or less complete traumatic memories and manifestations of dissociative personalities.
According to Janet, the simplest form of mental accidents are “idées fixes” (fixed ideas), which are related to intrusions of some dissociated emotion, thought, sensory perception, or movement. This intrusion into or interruption of the personality may also pertain to “hysterical attacks,” to the extent that they are reactivations of traumatic memories. Janet observed that some dissociative patients are subject to “somnambulisms,” which today may be recognized as the activities of dissociative identities (APA, 1994). (Since these mental structures involve far more than merely a different sense of self, we feel they are better referred to as dissociative personalities.)
When patients lose all touch with reality during dissociative episodes, they experience a “delirium,” i.e., a reactive dissociative psychosis (Van der Hart, Witztum, & Friedman, 1993).
Janet (1889, 1893, 1901/1977, 1907) gave many clinical examples showing that dissociative mental structures can involve dissociated sensory, motor, and other bodily reactions and functions in addition to dissociated emotions and knowledge. The symptoms can vary within each dissociative mental structure. For example, in one dissociative personality the patient may be insensitive to pain (analgesic) or touch (tactile anesthesia), but in another these mental stigmata can be absent, or exchanged for mental accidents such as localized pain. Whatever has not been integrated into one dissociative personality (not-knowing, not-sensing, not-perceiving) is often prominent in another: a memory, a thought, a bodily feeling, or a complex of sensations, motor reactions, and other experiential components that could manifest in “hysterical attacks.”
Janet’s dissociation theory postulates that both somatoform and psychological components of experience, reactions, and functions can be encoded into mental systems that can escape integration into the personality (Janet, 1889, 1893, 1901/1977, 1911). He used the construct “personality” to denote the extremely complex, but largely integrated, mental system that encompasses consciousness, memory, and identity. Janet observed that dissociative mental systems are also characterized by a retracted field of consciousness, that is, a reduced number of psychological phenomena that can be simultaneously integrated into one and the same mental system.
In Janet’s conceptualization, mental accidents represent reactivations of what has been encoded and stored in dissociative “systems of ideas and functions.” Due to recurrent dissociation and imagery, these systems can become emancipated. That is, dissociative systems may synthesize and assimilate more sensations, feelings, emotions, thoughts, and behaviors in the context of recurrent traumatization or reactivation by trauma-related conditioned stimuli. As a result, these systems may become associated with a range of experiences, a name, age, and other personality-like characteristics. Today, these emancipated systems are described as more or less complex dissociative personalities whose personality-like features may result from secondary elaborations (Nijenhuis, Spinhoven, Vanderlinden, Van Dyck, & Van der Hart, 1998). These elaborations are probably promoted by hypnotic-like imagination, restricted fields of consciousness, and needs that are associated with these dissociative mental systems. To a yet unknown extent, secondary shaping of dissociative mental systems by sociocultural influences may also be involved (Gleaves, 1996; Janet, 1929; Laria & Lewis-Fernández, in press).
**THE “APPARENTLY NORMAL” PERSONALITY AND THE “EMOTIONAL” PERSONALITY**
Many cases of dissociative disorder predominantly remain in a condition that has been described as an “apparently normal” personality (Myers, 1940; Nijenhuis & Van der Hart, 1999; Van der Hart, Van der Kolk, & Boon, 1998; Van der Hart et al., 2000, this issue). As the “apparently normal” personality, the patient appears on the surface to be more or less mentally normal. However, on closer scrutiny he or she is characterized by a range of negative symptoms (Nijenhuis & Van der Hart, 1999), such as partial or complete amnesia and anesthesia. The “apparently normal” personality, which in dissociative identity disorder (DID) can be fragmented into two or more personalities, is structurally dissociated from one or more “emotional” personalities (Nijenhuis, Van der Hart et al., in press; Van der Hart, 2000; Van der Hart et al., 2000, this issue). In our view, dissociative mental systems that involve “emotional” personalities (ranging from Janetian idées fixes to somnambulisms) often encompass traumatic memories, or aspects thereof, and defensive reactions to major threat (Nijenhuis, Vanderlinden, & Spinhoven, 1998; Nijenhuis, Spinhoven, Vanderlinden et al., 1998). Thus the “emotional” personality, whatever its degree of complexity and emancipation, constitutes a positive symptom. However, as to content, “emotional” personalities can contain negative or positive symptoms, or both. Negative symptoms of “emotional” personalities include analgesia and motor inhibitions that are expressions of defensive freezing; examples of positive symptoms include particular trauma-related movements and pain. Because dissociative barriers are not absolute, “emotional” personalities may influence the “apparently normal” personality and, when applicable, vice versa. Alternation between the two types of personalities occurs in mental disorders ranging from posttraumatic stress disorder to DID (Nijenhuis & Van der Hart, 1999).
Table 1 summarizes the clinically observed dissociative symptoms along two dichotomous dimensions: mental stigmata/negative symptoms versus mental accidents/positive symptoms, and psychological versus somatoform manifestations of a common dissociative process.
**THE SOMATOFORM DISSOCIATION QUESTIONNAIRE**
The severity of somatoform dissociation can be measured with the *Somatoform Dissociation Questionnaire* (SDQ-20, see Appendix), a 20-item self-report instrument with excellent psychometric characteristics (Nijenhuis et al., 1996, 1998a; Nijenhuis, Van Dyck, Spinhoven et al., 1999). The items of the SDQ-20 include negative and positive symptoms, and converge with the major symptoms of hysteria formulated by Janet a century ago. Examples of negative symptoms are analgesia (“Sometimes my body, or a part of it, is insensitive to pain”), kinesthetic anesthesia (“Sometimes it is as if my body, or a part of it, has disappeared”), and motor inhibitions (“Sometimes I am paralysed for a while”; “Sometimes I cannot speak, or only whisper”). Anesthesia also pertains to visual (“Sometimes I cannot see for a while”) and auditory perception (“Sometimes I hear sounds from nearby as if they were coming from far away”). Positive symptoms include “Sometimes I have pain while urinating” and “Sometimes I feel pain in my genitals” (at times other than sexual intercourse).
In seven studies performed to date, age and gender did not have a significant effect on somatoform dissociation as measured by the SDQ-20. However, in a sample of psychiatric outpatients (N = 153), women had slightly higher scores than men (Nijenhuis, Van der Hart, & Kruger, submitted), and
in Turkey, a weak but statistically significant correlation with age was found (Sar, Kundakci, Kiziltan, Bakim, & Bozkurt, 2000, this issue).
**SOMATOFORM DISSOCIATION AND PSYCHOLOGICAL DISSOCIATION**
In all but one study performed to date, somatoform dissociation was strongly associated with psychological dissociation as measured by the DES and DIS-Q, ranging from $r = 0.62$ (Nijenhuis et al., submitted) to $r = 0.85$ (Nijenhuis, Van Dyck, Spinhoven et al., 1999). Waller et al. (2000, this issue) found a lower correlation among psychiatric outpatients in the United Kingdom ($r = 0.51$). These results suggest that while somatoform and psychological dissociation are manifestations of a common process, they are not completely overlapping. Somatoform and psychological dissociation during or immediately after the occurrence of a traumatic event, i.e., *peritraumatic* dissociation, were also significantly correlated (Nijenhuis, Van Engen et al., in press).
**SOMATOFORM DISSOCIATION IN VARIOUS DIAGNOSTIC GROUPS IN THE NETHERLANDS AND BELGIUM**
A range of contemporary studies have revealed that somatoform dissociation is a unique construct and a major feature of dissociative disorders (Nijenhuis et al., 1996, 1998a; Nijenhuis, Van Dyck, Spinhoven et al., 1999). Patients with DSM-IV dissociative disorders had significantly higher SDQ-20 scores than psychiatric outpatients with other DSM-IV diagnoses, and patients with dissociative identity disorder (DID) had higher scores than patients with dissociative disorder, not otherwise specified (DDNOS) or depersonalization disorder (Nijenhuis et al., 1996, 1998a).
In Dutch samples, the SDQ-20 discriminated among various diagnostic categories (Nijenhuis, Van Dyck, Spinhoven et al., 1999). Compared to patients with DDNOS or depersonalization disorder, patients with DID had significantly higher scores. Patients with DDNOS had significantly higher scores than patients with somatoform disorders or eating disorders, and these latter two categories in turn had significantly higher scores than patients with anxiety disorder, depression, adjustment disorders, and bipolar mood disorder (see Table 2). In particular, bipolar mood disorder was associated with extremely low somatoform dissociation (see also Nijenhuis, Spinhoven, Van Dyck, Van der Hart, De Graaf et al., 1997).
TABLE 2. Somatoform Dissociation as Measured by the SDQ-20 in Various Diagnostic Groups
| Diagnostic group | Dutch: N | Dutch: mean (SD) | Turkish: N | Turkish: mean (SD) | N. American: N | N. American: mean (SD) |
|---|---|---|---|---|---|---|
| DID (sample 1) | 27 | 51.8 (12.6) | 25 | 58.7 (17.9) | 11 | 50.7 (10.7) |
| DID (sample 2) | 15 | 57.3 (14.9) | | | | |
| DID (sample 3) | 23 | 55.1 (13.5) | | | | |
| DDNOS and depersonalization disorder (sample 1) | 23 | 43.8 (7.1) | 25 | 46.3 (16.2) | | |
| DDNOS and depersonalization disorder (sample 2) | 16 | 44.6 (11.9) | | | | |
| DDNOS and depersonalization disorder (sample 3) | 21 | 43.0 (12.0) | | | | |
| Somatoform disorders: conversion disorder (n = 32), pain disorder (n = 7), conversion and pain disorder (n = 5), somatization disorder (n = 4) | 47 | 31.9 (9.4) | | | | |
| Pseudo-epilepsy | 27 | 29.8 (7.5) | | | | |
| Epilepsy | 74 | 24.8 (6.9) | | | | |
| Temporal lobe epilepsy | 49 | 24.3 (6.8) | | | | |
| Eating disorders | 50 | 27.7 (8.8) | | | | |
| Anxiety disorder, major depressive episode, adjustment disorder | 45 | 22.9 (3.9) | | | | |
| Anxiety disorder | | | 26 | 26.8 (6.4) | | |
| Major depressive episode | | | 23 | 28.7 (8.3) | | |
| Bipolar mood disorder | 51 | 22.9 (3.7) | 22 | 22.7 (3.5) | | |
| Chronic pelvic pain | 52 | 25.6 (9.3) | | | | |
In contrast with the SDQ-20, the DES did not discriminate between bipolar mood disorder and somatoform disorders. In a sample that primarily included cases of DSM-IV conversion and pain disorder, and no cases of hypochondriasis, the results suggest that patients with these particular somatoform disorders have significant somatoform dissociation, but less psychological dissociation (Nijenhuis, Van Dyck, Spinhoven et al., 1999).
**IS SOMATOFORM DISSOCIATION A CULTURALLY-DEPENDENT PHENOMENON?**
Our consistent finding that somatoform dissociation is extremely characteristic of DSM-IV dissociative disorders, in particular DID, has been corroborated by findings in some other countries and cultures (see Table 2). In the USA, Chapperon (personal communication, September 1996) found high somatoform dissociation among DID patients, and Dell (1997a) reported that DID patients had significantly higher scores than patients with DDNOS, eating disorders, or pain disorder. Studying various diagnostic categories in Turkey, Sar and colleagues (Sar, Kundakci, Kiziltan, Bahadir, and Aydiner, 1998; Sar et al., 2000, this issue) obtained results that are remarkably similar to ours: somatoform dissociation was extreme in DSM-IV dissociative disorders, quite modest in anxiety disorders, major depression, and schizophrenia, and low in bipolar mood disorder. Also consistent with our data, both Dell (1997a) and Sar et al. (1998, 2000, this issue) found strong intercorrelations of SDQ-20 and DES scores. Van Duyl’s (personal communication, March 2000) data on somatoform dissociation among dissociative disorder patients in Uganda converge with our Dutch/Flemish results as well. Conjointly, these international findings suggest that somatoform dissociation is highly characteristic of dissociative disorders, that somatoform and psychological dissociation are closely related constructs, and that the severity of somatoform dissociation among dissociative disorder patients from these cultures is largely comparable. Moreover, somatoform dissociative symptoms and disorders also manifested among tortured Bhutanese refugees, in particular those with PTSD (Van Ommeren et al., in press).
**IS SOMATOFORM DISSOCIATION A UNIQUE CONSTRUCT?**
Considering the moderate to high correlation between general psychopathology and psychological dissociation (Nash, Hulsey, Sexton, Harralson, & Lambert, 1993; Norton, Ross, & Novotny, 1990), some have expressed concern that dissociation scales may assess the former concept rather than the latter (Tillman, Nash, & Lerner, 1994). These authors could be correct, but this correlation could also reflect the broad comorbidity that characterizes complex dissociative disorders.
To study whether somatoform dissociation could merely reflect general psychopathology, Nijenhuis, Van Dyck, Spinhoven et al. (1999) statistically adjusted the somatoform dissociation scores of different diagnostic categories for the influence of general psychopathology as assessed by the *Symptom Checklist* (SCL-90-R; Derogatis, 1977). The adjusted scores still discriminated among DID, DDNOS, somatoform disorders, bipolar mood disorder, eating disorders, and mixed psychiatric disorders (Nijenhuis, Van Dyck, Spinhoven et al., 1999). It was therefore concluded that somatoform dissociation is a unique construct, not reducible to general levels of psychopathology.
**DOES SOMATOFORM DISSOCIATION RESULT FROM SUGGESTION?**
Another concern is whether suggestion affects somatoform dissociation scores. For example, Merskey (1992, 1997) maintained that dissociative disorder patients are extremely suggestible, and therefore vulnerable to indoctrination by therapists who mistake the symptoms of bipolar mood disorder for “dissociative” symptoms.
In a single case study with positron emission tomography (PET) functional imaging, hypnotic paralysis activated brain areas similar to those in patients with conversion disorder, which could indicate that hypnosis and somatoform dissociation share common neurophysiological mechanisms (Halligan, Athwal, Oakley, & Frackowiak, 2000). This case study obviously requires replication among a group of patients with somatoform dissociative disorders, and the observed correlation does not document a causal relationship.
There are noteworthy reasons to believe that suggestion and indoctrination do not explain somatoform dissociation. Patients who completed the SDQ-20 in the assessment phase, prior to the SCID-D interview, had higher scores than dissociative patients who completed the instrument in the course of their therapy (Nijenhuis, Van Dyck, Van der Hart, & Spinhoven, 1998; Nijenhuis, Van Dyck, Spinhoven et al., 1999). Moreover, prior to our research, the symptoms described by the SDQ-20 were not known as major symptoms of dissociative disorders among diagnosticians and therapists, let alone patients. It was also found that dissociative patients who were in treatment with the present author did not exceed the SDQ-20 scores of dissociative patients who were treated by other therapists. Given this author’s theoretical orientation and expectations, he would have been the most likely person to suggest somatoform dissociative symptoms (Nijenhuis, Spinhoven, Vanderlinden et al., 1998). Hence, the available empirical data run contrary to the hypothesis that somatoform dissociation results from suggestion.
**SOMATOFORM DISSOCIATION IN THE SCREENING FOR DSM-IV DISSOCIATIVE DISORDERS**
The data discussed so far reveal that somatoform dissociation is very characteristic of patients with DDNOS and DID. The question remains whether somatoform dissociation is as characteristic of these disorders as psychological dissociation. This issue required examining the relative ability of somatoform and psychological dissociation screening instruments to discriminate between cases with DSM-IV dissociative disorders and those without.
The SDQ-5, comprising 5 items from the SDQ-20, was developed as a screening instrument for DSM-IV dissociative disorders (Nijenhuis, Spinhoven, Van Dyck, Van der Hart, & Vanderlinden, 1997; Nijenhuis et al., 1998a). The sensitivity (the proportion of true positives selected by the test) of the SDQ-5 among SCID-D assessed patients with dissociative disorders in various Dutch/Flemish samples (N = 50, N = 33, and N = 31, respectively) ranged from 82% to 94%. The specificity (the proportion of the comparison patients that is correctly identified by the test) of the SDQ-5 ranged from 93% to 98% (N = 50, N = 42, and N = 45, respectively). The positive predictive value (the proportion of cases with scores above the chosen cut-off value of the test that are true positives) among these samples ranged from 90% to 98%, and the negative predictive value (the proportion of cases with scores below this cut-off value that are true negatives) from 87% to 96%. The corresponding values of the SDQ-20 were slightly lower (Nijenhuis et al., 1997).
High sensitivity and specificity of a test do not imply a high predictive value when the prevalence of the disorder in the population of concern is low (Rey, Morris-Yates, & Stanislaw, 1992). The prevalence of dissociative disorders among psychiatric patients has been estimated at approximately 8%-15% (Friedl & Draijer, 2000; Horen, Leichner, & Lawson, 1995; Sar et al., 1999; Saxe et al., 1993). Corrected for a prevalence rate of 10%, the positive predictive values among the indicated samples ranged from 57% to 84%, and the negative predictive values from 98% to 99%. Averaged over three samples, the positive predictive value of the SDQ-5 was 66%. Hence, it can be predicted that among Dutch/Flemish samples, two of three patients with scores at or above the cut-off will have a DSM-IV dissociative disorder.
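For concreteness, the prevalence correction is an application of Bayes' rule. Plugging a prevalence of $\pi = 0.10$ and the most favorable sample values (sensitivity 0.94, specificity 0.98) into the standard formulas reproduces the upper end of the reported range:

$$\mathrm{PPV} = \frac{\mathrm{sens}\cdot\pi}{\mathrm{sens}\cdot\pi + (1-\mathrm{spec})(1-\pi)} = \frac{0.94\times 0.10}{0.94\times 0.10 + 0.02\times 0.90} \approx 0.84,$$

$$\mathrm{NPV} = \frac{\mathrm{spec}\,(1-\pi)}{\mathrm{spec}\,(1-\pi) + (1-\mathrm{sens})\,\pi} = \frac{0.98\times 0.90}{0.98\times 0.90 + 0.06\times 0.10} \approx 0.99.$$

The least favorable pair (sensitivity 0.82, specificity 0.93) gives $\mathrm{PPV} \approx 0.57$, matching the lower end of the 57% to 84% range reported above.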
Among Dutch dissociative disorder patients and psychiatric comparison patients, Boon and Draijer (1993) found that the sensitivity of the DES was 93%, the specificity 86%, the corrected positive predictive value 42%, and the corrected negative predictive value 99%. It thus seems that somatoform dissociation is at least as characteristic of complex dissociative disorders as is psychological dissociation in Dutch samples.
**IS SOMATOFORM DISSOCIATION CORRELATED WITH REPORTED TRAUMA?**
In our study comparing dissociative disorder patients (N = 45) with control patients (N = 43) (Nijenhuis, Spinhoven, Van Dyck, Van der Hart, & Vanderlinden, 1998b), the dissociative disorder patients reported severe and multifaceted traumatization on the *Traumatic Experiences Checklist* (TEC; Nijenhuis, Van der Hart, & Vanderlinden; see Nijenhuis, 1999). Among various types of trauma, physical abuse, with an independent contribution of sexual trauma, best predicted somatoform dissociation. Sexual trauma best
predicted psychological dissociation. According to the reports of the dissociative disorder patients, this abuse usually occurred in an emotionally neglectful and abusive social context. Both somatoform and psychological dissociation were best predicted by early onset of reported intense, chronic and multiple traumatization.
Reanalysis of the data from this study showed that the total TEC score explained 48% of the variance in somatoform dissociation, a value exceeding the variance explained by reported physical and sexual abuse alone (Nijenhuis, 1999). This additional finding suggests that somatoform dissociation is strongly associated with reported multiple types of trauma: a finding that converges with the results of research on the incidence of verified multiple and chronic traumatization in DID patients (Coons, 1994; Hornstein & Putnam, 1992; Kluft, 1995; Lewis, Yeager, Swica, Pincus, & Lewis, 1997).
Studying psychiatric outpatients, both Waller and his colleagues (2000, this issue) and Nijenhuis et al. (submitted) also found that among various types of trauma, somatoform dissociation was best predicted statistically by physical abuse and threat to life by another person. Preliminary North American findings (Dell, 1997b) have indicated moderate to strong statistically significant correlations among somatoform dissociation and reported sexual abuse ($r = .51$), sexual harassment ($r = .49$), physical abuse ($r = .49$), and lower correlations with reported emotional neglect ($r = .25$) and emotional abuse ($r = .31$). Reported early onset of traumatization was somewhat more strongly associated with somatoform dissociation than was trauma reported in later developmental periods, and among all variables tested the total trauma score was associated with somatoform dissociation most strongly ($r = .63$). These various results are highly consistent with our findings. It can be concluded that somatoform dissociation is particularly associated with physical abuse and sexual trauma, thus with threat to the integrity of the body. Consistent with this conclusion, Van Ommeren et al. (in press) found that tortured Bhutanese refugees ($N = 526$), compared with nontortured Bhutanese refugees, had significantly more lifetime ICD-10 (WHO, 1992) persistent somatoform pain disorder (56.2% vs. 28.8%), dissociative motor disorder (11.2% vs. 1.3%), and dissociative anesthesia and sensory loss (14.4% vs. 2.8%).
A link between somatoform dissociation and reported trauma is also suggested by studies that have found associations between somatization symptoms, somatoform disorders, and reported trauma. For example, undifferentiated somatoform disorder was among the three DSM-IV Axis I diagnoses that marked Gulf War veterans referred for medical and psychiatric syndromes (Labbate, Cardeña, Dimitreva, Roy, & Engel, 1998). More specifically, reports of traumatic events were correlated with both PTSD and somatoform diagnoses, and veterans who handled dead bodies had a three-fold risk
of receiving a somatoform diagnosis. In addition, a range of studies found associations among (reported) trauma, psychological dissociation, and somatization symptoms or somatoform disorders (e.g., Atlas, Wolfson, & Lipschitz, 1995; Darves-Bornoz, 1997; Van der Kolk et al., 1996).
**SOMATOFORM DISSOCIATION AND ANIMAL DEFENSIVE REACTIONS**
Patients with DID or related types of DDNOS alternate between dissociative personalities (of varying degrees of complexity) that are relatively discrete, discontinuous, and resistant to integration. In our view, these basically represent “apparently normal” and “emotional” personalities (Nijenhuis & Van der Hart, 1999), and are associated with particular somatoform dissociative symptoms. Exploring the roots of these dissociative mental systems and symptoms, Nijenhuis, Vanderlinden, and Spinhoven (1998) drew a parallel between animal defensive and recuperative states evoked in the face of variable predatory imminence and injury, and the characteristic somatoform dissociative responses of patients with dissociative disorders who report trauma. Their review of empirical data from research with animals and humans, as well as clinical observations, suggested several such similarities: normal eating patterns and other normal behavioral patterns are disturbed in the face of diffuse threat; freezing and stilling occur when serious threat materializes; analgesia and anesthesia appear when a strike is about to occur; and acute pain emerges when threat has subsided and actions that promote recuperation follow. According to our structural dissociation model (Nijenhuis, Van der Hart, & Steele, in press), “emotional” personalities would involve animal defense-like systems, and “apparently normal” personalities would exhibit a range of behavioral and mental reactions to avoid or escape from traumatic memories and the associated “emotional” personality. In our view, these mental avoidance and escape reactions, among others, find expression in negative psychological and somatoform dissociative symptoms, such as amnesia and emotional as well as sensory anesthesia.
Consistent with this model, several studies have suggested that threat to life, whether due to natural or human causes, may induce analgesia and numbness (Cardeña et al., 1998; Cardeña & Spiegel, 1993; Pitman, Van der Kolk, Orr, & Greenberg, 1990; Van der Kolk, Greenberg, Orr, & Pitman, 1989). Nijenhuis, Spinhoven and Vanderlinden et al. (1998) performed the first test of the hypothesized similarity between animal defensive reactions and certain somatoform dissociative symptoms of dissociative disorder patients who reported trauma. Twelve somatoform symptom clusters consisting of clinically observed somatoform dissociative phenomena were constructed. All clusters discriminated between patients with dissociative disorders and
patients with other psychiatric diagnoses. Those expressive of the hypothesized similarity (freezing, anesthesia-analgesia, and disturbed eating) belonged to the five most characteristic symptoms of dissociative disorder patients. Anesthesia-analgesia, urogenital pain, and freezing symptom clusters independently contributed to predicted caseness of dissociative disorder. In an independent sample, anesthesia-analgesia best predicted caseness after controlling for symptom severity. The indicated symptom clusters correctly classified 94% of cases in the original sample, and 96% of the independent second sample. These results were largely consistent with the hypothesized similarity.
The anesthesia symptoms characterize “emotional” personalities, but may also be part and parcel of “apparently normal” personalities. In our view, “apparently normal” personalities are phobic of traumatic memories and phobic of the associated “emotional” personalities (Nijenhuis & Van der Hart, 1999; Nijenhuis, Van der Hart et al., in press). This phobia manifests in two major negative dissociative symptoms: amnesia and sensory, as well as emotional anesthesia. Recent data from psychobiological experimental research with both types of dissociative personalities support this interpretation (Nijenhuis, Quak et al., 1999; Van Honk, Nijenhuis, Hermans, Jongen, & Van der Hart, 1999).
**IS SOMATOFORM DISSOCIATION ALSO ASSOCIATED WITH DISSOCIATIVE DISORDER AND TRAUMA IN A NONPSYCHIATRIC POPULATION?**
In order to test the generalizability of the powerful associations between somatoform dissociation, dissociative disorder, and reported trauma among psychiatric patients, we investigated whether these relationships would also hold among a nonpsychiatric population (Nijenhuis, Van Dyck, Ter Kuile et al., 1999). According to the literature, chronic pelvic pain (CPP) is one of the somatic symptoms that, at least among a subgroup of gynecology patients, relates to reported trauma (e.g., Walling et al., 1994; Walker et al., 1995) and dissociation (Walker et al., 1992). In this population (N = 52), psychological dissociation and somatoform dissociation were significantly associated with (features of) DSM-IV dissociative disorders, as measured by the SCID-D. Anxiety, depression, and psychological dissociation best predicted the SCID-D total score, whereas amnesia was best predicted by somatoform dissociation. Identity confusion was best predicted by anxiety/depression and somatoform dissociation. These findings ran partly contrary to our hypothesis that somatoform dissociation among CPP patients would be more predictive of dissociative disorder than psychological dissociation.
In this study, the sensitivity of somatoform and psychological dissociation
screening instruments for dissociative disorders was 100%. The specificity was 90.2% (SDQ-5) and 94.1% (DES) respectively. Somatoform dissociation was strongly associated with, and best predicted, reported trauma. Physical abuse, life threat posed by a person, sexual trauma, and intense pain best predicted somatoform dissociation among the various types of trauma. Physical abuse/life threat posed by a person remained the best predictor of somatoform dissociation after statistically controlling for the influence of anxiety, depression, and intense pain (Nijenhuis, Van Dyck, Ter Kuile et al., 1999).
This study demonstrated a strong association between somatoform dissociation and reported trauma in a nonpsychiatric population, as well as a considerable association between somatoform dissociation and features of dissociative disorders. The results are consistent with our findings among psychiatric patients, and, therefore, strengthen our thesis that somatoform dissociation, features of dissociative disorders, and reported trauma are strongly intercorrelated phenomena.
**DISCUSSION**
The items of the SDQ comprise many of the symptoms that mark hysteria as described by Janet (1893, 1907). The reviewed empirical data show that these 19th century symptoms of hysteria are very characteristic of the 20th century dissociative disorders. They confirm that these symptoms involve a combination of mental stigmata (the negative symptoms of anesthesia, analgesia, and motor inhibitions) and mental accidents (the positive symptoms of localized pain, and alteration of taste and smell preferences/aversions). Although I subscribe to the Janetian position that body and mind are inseparable, I maintain that a phenomenological distinction between psychological and somatoform manifestations of dissociation can be clarifying, in that it highlights the largely forgotten or ignored clinical observation, now empirically substantiated, that dissociation also pertains to the body.
No indications were found suggesting that these symptoms were manifestations of general psychopathology, or were a consequence of suggestion. Obviously, this is far from saying that dissociative disorder patients are immune to suggestion, or denying that there are factitious dissociative disorder cases (Draijer & Boon, 1999). However, it seems warranted to state that suggestion does not explain the findings of our studies on somatoform dissociation.
Somatoform dissociation belongs to the major symptoms of DSM-IV dissociative disorders, but it also characterizes many cases of DSM-IV somatoform disorders, as well as a subgroup of patients with eating disorders. Like dissociative disorders, somatization disorder (Briquet’s syndrome) has roots in hysteria: Briquet’s pioneering research revealed that many patients with
hysteria had amnesia, in addition to many somatoform symptoms. Contemporary research also shows that psychological dissociation and somatization are related phenomena. For example, Saxe et al. (1994) found that about two-thirds of the patients with dissociative disorders met the DSM-IV criteria of somatization disorder. Yet somatization probably is neither a distinct clinical entity, nor the result of a single pathological process (Kellner, 1995). It seems likely that somatoform dissociation pertains to a subgroup of somatoform symptoms that remain medically unexplained, or difficult to explain.
The findings of our studies are more consistent with the ICD-10 (WHO, 1992), which includes dissociative disorders of movement and sensation, than with the DSM-IV, which restricts dissociation to psychological manifestations and regards somatoform manifestations of dissociation as “conversion symptoms.” However, the SDQ-5 in the Netherlands, and the SDQ-20 in Turkey, were at least as effective as the DES in screening for DSM-IV dissociative disorders, and our finding that psychological and somatoform dissociation are strongly associated suggests that both phenomena are manifestations of a common (pathological) process. Moreover, somatoform dissociation has been demonstrated to be characteristic of DSM-IV conversion disorder (Kuyk, Spinhoven, Van Emde Boas, & Van Dyck, 1999; for a review, see Bowman & Kuyk, in press), and somatoform dissociation, rather than psychological dissociation, was characteristic of patients with pseudo-epileptic seizures (Kuyk et al., 1999). Psychological dissociation was also very common among patients with conversion disorders (Spitzer, Spelsberg, Grabe, Mundt, & Freyberger, 1999).
In conclusion, relabeling conversion (a concept tied to controversial Freudian theory) as somatoform dissociation, and categorizing the DSM-IV conversion disorders as dissociative disorders, are indicated. The same applies to somatization disorder when it is predominantly characterized by somatoform dissociation. Such a recategorization would reinstitute the 19th century category of hysteria under the general label of dissociative disorders, encompassing the current dissociative disorders, DSM-IV conversion disorder/ICD-10 dissociative disorders of movement and sensation, and somatization disorder. On the other hand, analysis of somatoform dissociation in DSM-IV somatization disorder may also reveal the existence of various subgroups: one subgroup of patients with somatization disorder may have severe somatoform dissociation, whereas another obtains low or modest somatoform dissociation scores. It also seems doubtful that, for example, conversion disorder and hypochondriasis relate to similar pathology. Hence, further study of somatoform dissociation in the various DSM-IV somatoform disorders is needed.
The hypothesized dissociative personality-dependent nature of somatoform dissociation cannot be studied with the regular use of the SDQ-20 and
SDQ-5, but must be analysed using other methods. These include repeated administration of these instruments to DID patients while they remain in “apparently normal” and “emotional” personalities, and to controls while they maintain simulated “apparently normal” and “emotional” personalities. More important approaches, however, include the study of somatoform dissociative symptoms and concurrent psychophysiological and endocrinological reactions while DID patients and controls remain in these respectively authentic and enacted personalities as they are experimentally exposed to memories of trauma (Nijenhuis, Quak et al., 1999) or masked threat cues (Van Honk et al., 1999).
RECEIVED: 7/10/00
REVISED: 8/01/00 and 8/11/00
ACCEPTED: 8/26/00
**REFERENCES**
American Psychiatric Association (1994). *Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition* (DSM-IV). Washington, DC: American Psychiatric Association.
Atlas, J.A., Wolfson, M.A., & Lipschitz, D.S. (1995). Dissociation and somatization in adolescent inpatients. *Psychological Reports*, 76, 1101-1102.
Bernstein, E., & Putnam, F.W. (1986). Development, reliability, and validity of a dissociation scale. *Journal of Nervous and Mental Disease*, 174, 727-735.
Boon, S., & Draijer, N. (1993). *Multiple personality disorder in the Netherlands: A study on reliability and validity of the diagnosis*. Amsterdam: Swets & Zeitlinger.
Bowman, E.S., & Kuyk, J. (in press). The relationship of conversion and dissociation: Implications for DSM-IV and ICD-10. *Journal of Trauma & Dissociation*.
Briquet, P. (1859). *Traité Clinique et Thérapeutique de L'hystérie* (2 vols.) [Clinical and Therapeutic Treatise of Hysteria]. Paris: J.-P. Baillière & Fils.
Cardeña, E. (1994). The domain of dissociation. In S.J. Lynn & J.W. Rhue (Eds.), *Dissociation: Clinical and theoretical perspectives* (pp. 15-31). New York: Guilford.
Cardeña, E., Holen, A., McFarlane, A., Solomon, Z., Wilkinson, C., & Spiegel, D. (1998). A multisite study of acute stress reaction to a disaster. In *Sourcebook for the DSM-IV, Vol. IV*. Washington, DC: American Psychiatric Association.
Cardeña, E., & Spiegel, D. (1993). Dissociative reactions to the San Francisco Bay area earthquake of 1989. *American Journal of Psychiatry*, 150, 474-478.
Coons, P.M. (1994). Confirmation of childhood abuse in child and adolescent cases of multiple personality disorder and dissociative disorder not otherwise specified. *Journal of Nervous and Mental Disease*, 182, 461-464.
Darves-Bornoz, J.-M. (1997). Rape-related psychotraumatic syndromes. *European Journal of Obstetrics & Gynecology*, 71, 59-65.
Dell, P.F. (1997a). *Somatoform dissociation in DID, DDNOS, chronic pain, and eating disorders in a North American sample*. Proceedings of the 14th International Conference of the International Society for the Study of Dissociation, November 8-11, p. 130.
Dell, P.F. (1997b). *Somatoform dissociation and reported trauma in DID and DDNOS*. Proceedings of the 14th International Conference of the International Society for the Study of Dissociation, November 8-11, p. 130.
Derogatis, L.R. (1977). *SCL-90: Administration, Scoring, and Procedures Manual-I for the R(evised) Version and Other Instruments of the Psychopathology Rating Scale Series*. Baltimore: Clinical Psychometric Research Unit, Johns Hopkins University School of Medicine.
Draijer, N., & Boon, S. (1999). The imitation of dissociative identity disorder: Patients at risk; therapists at risk. *Journal of Psychiatry & Law*, 27, 423-458.
Friedl, M.C., & Draijer, N. (2000). Dissociative disorders in Dutch psychiatric inpatients. *American Journal of Psychiatry*, 157, 1012-1013.
Gleaves, D.H. (1996). The sociocognitive model of dissociative identity disorder: A reexamination of the evidence. *Psychological Bulletin*, 120, 42-59.
Halligan, P.W., Athwal, B.S., Oakley, D.A., & Frackowiak, R.S. (2000). Imaging hypnotic paralysis: Implications for conversion hysteria [letter]. *Lancet*, 355, 986-987.
Horen, S.A., Leichner, P.P., & Lawson, J.S. (1995). Prevalence of dissociative symptoms and disorders in an adult psychiatric inpatient population in Canada. *Canadian Journal of Psychiatry*, 40, 185-191.
Hornstein, N.L., & Putnam, F.W. (1992). Clinical phenomenology of child and adolescent dissociative disorders. *Journal of the American Academy of Child and Adolescent Psychiatry*, 31, 1077-1085.
Janet, P. (1889). *L'Automatisme Psychologique*. Paris: Félix Alcan. Reprint: Société Pierre Janet, Paris, 1973.
Janet, P. (1893). *L'Etat Mental des Hystériques: Les Stigmates Mentaux* [The Mental State of Hystericals: The Mental Stigmata]. Paris: Rueff & Cie.
Janet, P. (1901). *The Mental State of Hystericals*. New York: Putnam & Sons. Reprint: University Publications of America, Washington, DC, 1977.
Janet, P. (1907). *Major symptoms of hysteria*. London: Macmillan. Reprint: Hafner, New York, 1965.
Janet, P. (1911). *L'Etat Mental des Hystériques*. Paris: Félix Alcan. Second extended edition. Reprint: Lafitte Reprints, Marseille, 1983.
Janet, P. (1929). *L'Evolution Psychologique de la Personnalité*. Paris: Chahine. Reprint: Société Pierre Janet, Paris, 1984.
Kellner, R. (1995). Psychosomatic syndromes, somatization, and somatoform disorders. *Psychotherapy and Psychosomatics*, 61, 4-24.
Kihlstrom, J.F. (1994). One hundred years of hysteria. In S.J. Lynn & J.W. Rhue (Eds.), *Dissociation: Clinical and Theoretical Perspectives* (pp. 365-395). New York: Guilford.
Kluft, R.P. (1995). The confirmation and disconfirmation of memories of abuse in DID patients: A naturalistic clinical study. *Dissociation*, 8, 251-258.
Kuyk, J., Spinhoven, P., Van Emde Boas, M.D., & Van Dyck, R. (1999). Dissociation
in temporal lobe epilepsy and pseudo-epileptic seizure patients. *The Journal of Nervous and Mental Disease, 187*, 713-720.
Labbate, L.A., Cardeña, E., Dimitreva, J., Roy, M.J., & Engel, C. (1998). Psychiatric syndromes in Persian Gulf War veterans: An association of handling dead bodies with somatoform disorders. *Psychotherapy and Psychosomatics, 67*, 275-279.
Laria, A.J. & Lewis-Fernández, R. (in press). The professional fragmentation of experience in the study of dissociation, somatization, and culture. *Journal of Trauma & Dissociation*.
Lewis, D.O., Yeager, C.A., Swica, Y., Pincus, J.H. & Lewis, M. (1997). Objective documentation of child abuse and dissociation in 12 murderers with dissociative identity disorder. *American Journal of Psychiatry, 154*, 1703-1710.
Merskey, H. (1992). The manufacture of personalities: The production of multiple personality disorder. *British Journal of Psychiatry, 160*, 327-340.
Merskey, H. (1997). Tests of “dissociation” and mood disorder (letter), *British Journal of Psychiatry, 171*, 487.
Myers, C.S. (1940). *Shell shock in France 1914-18*. Cambridge: Cambridge University Press.
Nash, M.R., Hulsey, T.L., Sexton, M.C., Harralson, T.L., & Lambert, W. (1993). Long-term sequelae of childhood sexual abuse: Perceived family environment, psychopathology, and dissociation. *Journal of Consulting and Clinical Psychology, 61*, 276-283.
Nemiah, J.C. (1991). Dissociation, conversion, and somatization. In A. Tasman & S.M. Goldfinger (Eds.), *American Psychiatric Press Annual Review of Psychiatry* (Vol. 10) pp. 248-260. Washington, DC: American Psychiatric Press.
Nijenhuis, E.R.S. (1999). *Somatoform Dissociation: Phenomena, Measurement, and Theoretical Issues*. Assen, the Netherlands: Van Gorcum.
Nijenhuis, E.R.S., Quak, J., Reinders, S., Korf, J., Vos, H., & Marinkelle, A.B. (1999). *Identity-Dependent Processing of Traumatic Memories in Dissociative Identity Disorder: Converging Regional Blood Flow, Physiological and Psychological Evidence*. Proceedings of the 6th European Conference on Traumatic Stress: Psychotraumatology, Clinical Practice, and Human Rights. Istanbul, Turkey, June 5-8, p. 23.
Nijenhuis, E.R.S., Spinhoven, P., Van Dyck, R., Van der Hart, O., De Graaf, A.M.J., & Knoppert, E.A.M. (1997). Dissociative pathology discriminates between bipolar mood disorder and dissociative disorder. *British Journal of Psychiatry, 170*, 581.
Nijenhuis, E.R.S., Spinhoven, P., Van Dyck, R., Van der Hart, O., & Vanderlinden, J. (1996). The development and the psychometric characteristics of the Somatoform Dissociation Questionnaire (SDQ-20). *Journal of Nervous and Mental Disease, 184*, 688-694.
Nijenhuis, E.R.S., Spinhoven, P., Van Dyck, R., Van der Hart, O., & Vanderlinden, J. (1997). The development of the Somatoform Dissociation Questionnaire (SDQ-5) as a screening instrument for dissociative disorders. *Acta Psychiatrica Scandinavica, 96*, 311-318.
Nijenhuis, E.R.S., Spinhoven, P., Vanderlinden, J., Van Dyck, R., & Van der Hart, O. (1998). Somatoform dissociative symptoms as related to animal defensive reactions to predatory threat and injury. *Journal of Abnormal Psychology, 107*, 63-73.
Nijenhuis, E.R.S., Spinhoven, P., Van Dyck, R., Van der Hart, O., & Vanderlinden, J. (1998a). Psychometric characteristics of the Somatoform Dissociation Questionnaire: A replication study. *Psychotherapy and Psychosomatics, 67*, 17-23.
Nijenhuis, E.R.S., Spinhoven, P., Van Dyck, R., Van der Hart, O., & Vanderlinden, J. (1998b). Degree of somatoform and psychological dissociation in dissociative disorders is correlated with reported trauma. *Journal of Traumatic Stress, 11*, 711-730.
Nijenhuis, E.R.S., & Van der Hart, O. (1999). Somatoform dissociative phenomena: A Janetian perspective. In J.M. Goodwin & R. Attias (Eds.), *Splintered Reflections: Images of the Body in Trauma* (pp. 89-127). New York: Basic Books.
Nijenhuis, E.R.S., Van der Hart, O., & Kruger, K. (submitted). The psychometric characteristics of the Traumatic Experiences Checklist (TEC).
Nijenhuis, E.R.S., Van der Hart, O., & Steele, K. (in press). Strukturale Dissoziation der Persoenlichkeit: Ueber ihre traumatischen Wurzeln und die phobischen Mechanismen, die sie in Gang halten. [Structural dissociation of the personality: Traumatic origins, phobic maintenance.] In: Hofmann, A., Reddemann, L., & Gast, U. (Eds.), *Behandlung dissoziativer Störungen*. [The treatment of dissociative disorders.] Stuttgart: Thieme Verlag.
Nijenhuis, E.R.S., Vanderlinden, J., & Spinhoven, P. (1998). Animal defensive reactions as a model for trauma-induced dissociative reactions. *Journal of Traumatic Stress, 11*, 243-260.
Nijenhuis, E.R.S., Van Dyck, R., Spinhoven, P., Van der Hart, O., Chatrou, M., Vanderlinden, J., & Moene, F. (1999). Somatoform dissociation discriminates among diagnostic categories over and above general psychopathology. *Australian and New Zealand Journal of Psychiatry, 33*, 512-520.
Nijenhuis, E.R.S., Van Dyck, R., Ter Kuile, M., Mourits, M., Spinhoven, P., & Van der Hart, O. (1999). Evidence for associations between somatoform dissociation, psychological dissociation, and reported trauma in chronic pelvic pain patients. In Nijenhuis, E.R.S., *Somatoform Dissociation: Phenomena, Measurement, and Theoretical Issues* (pp. 146-160). Assen, the Netherlands: Van Gorcum.
Nijenhuis, E.R.S., Van Dyck, R., Van der Hart, O., & Spinhoven, P. (1998). Somatoform dissociation is unlikely to be a result of indoctrination by therapists (letter). *British Journal of Psychiatry, 172*, 452.
Nijenhuis, E.R.S., Van Engen, A., Kusters, I., & Van der Hart, O. (in press). Peritraumatic somatoform and psychological dissociation in relation to recall of childhood sexual abuse. *Journal of Trauma & Dissociation*.
Norton, G.R., Ross, C.A., & Novotny, M.F. (1990). Factors that predict scores on the Dissociative Experiences Scale. *Journal of Clinical Psychology, 46*, 273-277.
Othmer, E., & DeSouza, C. (1985). A screening test for somatization disorder (hysteria). *American Journal of Psychiatry, 142*, 1146-1149.
Pitman, R.K., Van der Kolk, B.A., Orr, S.P., & Greenberg, M.S. (1990). Naloxone reversible stress induced analgesia in post traumatic stress disorder. *Archives of General Psychiatry, 47*, 541-547.
Pribor, E.F., Yutzy, S.H., Dean, J.T., & Wetzel, R.D. (1993). Briquet’s syndrome, dissociation and abuse. *American Journal of Psychiatry, 150*, 1507-1511.
Rey, J.M., Morris-Yates, A., & Stanislaw, H. (1992). Measuring the accuracy of
diagnostic tests using Receiver Operating Characteristics (ROC) analysis. *International Journal of Methods in Psychiatric Research, 2*, 39-50.
Ross, C.A., Heber, S., Norton, G.R., & Anderson, G. (1989). Somatic symptoms in multiple personality disorder. *Psychosomatics, 30*, 154-160.
Sar, V., Kundakci, T., Kiziltan, E., Bahadir, B., & Aydiner, O. (1998). *Reliability and validity of the Turkish version of the Somatoform Dissociation Questionnaire (SDQ-20)*. Proceeding of the International Society of Dissociation 15th International Fall Conference. Seattle, November 14-17.
Sar, V., Kundakci, T., Kiziltan, E., Yargic, I.L., Tutkun, H., Bakim, B., Aydiner, O., Özpulat, T., Keser, V., & Özdemir, Ö. (1999). *Frequency of dissociative disorders among psychiatric outpatients with borderline personality disorder*. Proceedings of the 6th European Conference on Traumatic Stress: Psychotraumatology, Clinical Practice, and Human Rights. Istanbul, Turkey, June 5-8, p. 115.
Sar, V., Kundakci, T., Kiziltan, E., Bakim, B., & Bozkurt, O. (2000). Differentiating dissociative disorders from other diagnostic groups through somatoform dissociation in Turkey. *Journal of Trauma & Dissociation, 1*(4), 67-80.
Saxe, G.N., Chinman, G., Berkowitz, M.D., Hall, K., Lieberg, G., Schwartz, J., & Van der Kolk, B.A. (1994). Somatization in patients with dissociative disorders. *American Journal of Psychiatry, 151*, 1329-1334.
Saxe, G.N., Van der Kolk, B.A., Berkowitz, R., Chinman, G., Hall, K., & Lieberg, G. (1993). Dissociative disorders in psychiatric inpatients. *American Journal of Psychiatry, 150*, 1037-1042.
Spitzer, C., Spelsberg, B., Grabe, H.-J., Mundt, B., & Freyberger, H.J. (1999). Dissociative experiences and psychopathology in conversion disorders. *Journal of Psychosomatic Research, 46*, 291-294.
Steinberg, M. (1994). *Interviewer’s guide to the Structured Clinical Interview for DSM-IV Dissociative Disorders (Revised ed.)*. Washington, DC: American Psychiatric Press.
Tillman, J.G., Nash, M.R., & Lerner, P.M. (1994). Does trauma cause dissociative pathology? In S.J. Lynn & J.W. Rhue (Eds.), *Dissociation: Clinical and Theoretical Perspectives* (pp. 395-415). New York: Guilford.
Van der Hart, O. (2000). Dissociation: Toward a resolution of 150 years of confusion. Keynote address International Society for the Study of Dissociation 17th International Fall Conference. San Antonio, Texas, November 12-14.
Van der Hart, O., & Friedman, B. (1989). A reader’s guide to Pierre Janet on dissociation: A neglected intellectual heritage. *Dissociation, 2*, 3-16.
Van der Hart, O., & Op den Velde, W. (1995). Traumatische herinneringen. [Traumatic memories.] In O. van der Hart (Ed.), *Trauma, dissociatie en hypnose* [Trauma, dissociation and hypnosis], 3rd edition (pp. 79-101). Lisse, the Netherlands: Swets & Zeitlinger.
Van der Hart, O., Van der Kolk, B.A., & Boon, S. (1998). Treatment of dissociative disorders. In J.D. Bremner & C.R. Marmar (Eds.), *Trauma, memory, and dissociation* (pp. 253-283). Washington, DC: American Psychiatric Press.
Van der Hart, O., Van Dijke, A., Van Son, M.J.M., & Steele, K. (2000). Somatoform dissociation in traumatized World War I combat soldiers: A neglected clinical heritage. *Journal of Trauma & Dissociation, 1*(4), 33-66.
Van der Hart, O., Witztum, E., & Friedman, B. (1993). From hysterical psychosis to reactive dissociative psychosis. *Journal of Traumatic Stress, 6*, 43-64.
Van der Kolk, B. A., & Fisler, R. (1995). Dissociation and the fragmentary nature of traumatic memories: Overview and exploratory study. *Journal of Traumatic Stress, 8*, 505-525.
Van der Kolk, B.A., Greenberg, M.S., Orr, S.P., & Pitman, R.K. (1989). Endogenous opioids, stress induced analgesia, and posttraumatic stress disorder. *Psychopharmacology Bulletin, 25*, 417-422.
Van der Kolk, B.A., Pelcovitz, D., Roth, S., Mandel, F.C., McFarlane, A.C., & Herman, J.L. (1996). Dissociation, somatization, and affect dysregulation: The complexity of adaptation to trauma. *American Journal of Psychiatry, 153*, (Festschrift Supplement), 83-93.
Vanderlinden, J. (1993). *Dissociative experiences, trauma, and hypnosis: Research findings and clinical applications in eating disorders*. Delft: Eburon.
Van Honk, J., Nijenhuis, E.R.S., Hermans, E., Jongen, A., & Van der Hart, O. (1999). State-dependent emotional responses to masked threatening stimuli in dissociative identity disorder. Proceedings of the 16th International Fall Conference of the International Society for the Study of Dissociation, Miami, November 11-13.
Van Ommeren, M., Sharma, B., Sharma, G. K., de Jong, J. T. V. M., Komproe, I., & Cardeña, E. (in press). The relationship between somatic and PTSD symptoms among Bhutanese refugee torture survivors. In T. M. McIntyre, & S. Krippner (Eds.), *The impact of war trauma on civilian populations: An international perspective*. Greenwood Press/Praeger.
Walker, E.A., Katon, W.J., Neraas, K., Jemelka, R.P., & Massoth, D. (1992). Dissociation in women with chronic pelvic pain. *American Journal of Psychiatry, 149*, 534-537.
Walker, E.A., Katon, W.J., Hansom, J., Harppo-Griffith, J., Holm, L., Jones, M.L., Hickok, L.R., & Russo, J. (1995). Psychiatric diagnoses and sexual victimization in women with chronic pelvic pain. *Psychosomatics, 36*, 531-540.
Waller, G., Hamilton, K., Elliott, P., Lewendon, J., Stopa, L., Waters, A., Kennedy, F., Chalkley, J., Lee, G., Pearson, D., Kennerley, H., Hargreaves, I., & Bashford, V. (2000). Somatoform dissociation, psychological dissociation and specific forms of trauma. *Journal of Trauma & Dissociation, 1*(4), 81-98.
Walling, E.A., Reiter, R.C., O’Hara, M.W., Milburn, A.K., Lilly, G., & Vincent, S.D. (1994). Abuse history and chronic pain in women. I. Prevalences of sexual and physical abuse. *Obstetrics & Gynecology, 84*, 193-199.
World Health Organization (1992). *The ICD-10 Classification of Mental and Behavioral Disorders. Clinical description and diagnostic guidelines*. Geneva: Author.
APPENDIX
SDQ-20
This questionnaire asks about different physical symptoms or body experiences, which you may have had either briefly or for a longer time.
Please indicate to what extent these experiences apply to you in the past year.
For each statement, please circle the number in the first column that best applies to YOU. The possibilities are:
1 = this applies to me NOT AT ALL
2 = this applies to me A LITTLE
3 = this applies to me MODERATELY
4 = this applies to me QUITE A BIT
5 = this applies to me EXTREMELY
If a symptom or experience applies to you, please indicate whether a physician has connected it with a physical disease. Indicate this by circling the word YES or NO in the column “Is the physical cause known?” If you circled YES, please write the physical cause (if you know it) on the line.
Example:
| Sometimes: | Extent to which the symptom or experience applies to you | Is the physical cause known? |
|------------|----------------------------------------------------------|-----------------------------|
| My teeth chatter | 1 2 3 4 5 | NO YES, namely ________ |
| I have cramps in my calves | 1 2 3 4 5 | NO YES, namely ________ |
If you have circled a 1 in the first column (i.e., This applies to me NOT AT ALL), you do NOT have to respond to the question about whether the physical cause is known.
On the other hand, if you circle 2, 3, 4, or 5, you MUST circle NO or YES in the “Is the physical cause known?” column.
Please do not skip any of the 20 questions.
Thank you for your cooperation.
Here are the questions:
1 = this applies to me NOT AT ALL
2 = this applies to me A LITTLE
3 = this applies to me MODERATELY
4 = this applies to me QUITE A BIT
5 = this applies to me EXTREMELY
| Sometimes: | Extent to which the symptom or experience applies to you | Is the physical cause known? |
|------------|----------------------------------------------------------|-----------------------------|
| 1. I have trouble urinating | 1 2 3 4 5 | NO YES, namely ________ |
| 2. I dislike tastes that I usually like (women: at times OTHER THAN pregnancy or monthly periods) | 1 2 3 4 5 | NO YES, namely ________ |
| 3. I hear sounds from nearby as if they were coming from far away | 1 2 3 4 5 | NO YES, namely ________ |
| 4. I have pain while urinating | 1 2 3 4 5 | NO YES, namely ________ |
| 5. My body, or a part of it, feels numb | 1 2 3 4 5 | NO YES, namely ________ |
| 6. People and things look bigger than usual | 1 2 3 4 5 | NO YES, namely ________ |
| 7. I have an attack that resembles an epileptic seizure | 1 2 3 4 5 | NO YES, namely ________ |
| 8. My body, or a part of it, is insensitive to pain | 1 2 3 4 5 | NO YES, namely ________ |
| 9. I dislike smells that I usually like | 1 2 3 4 5 | NO YES, namely ________ |
| 10. I feel pain in my genitals (at times OTHER THAN sexual intercourse) | 1 2 3 4 5 | NO YES, namely ________ |
| 11. I cannot hear for a while (as if I am deaf) | 1 2 3 4 5 | NO YES, namely ________ |
| 12. I cannot see for a while (as if I am blind) | 1 2 3 4 5 | NO YES, namely ________ |
| 13. I see things around me differently than usual (for example, as if looking through a tunnel, or seeing merely a part of an object) | 1 2 3 4 5 | NO YES, namely ________ |
| 14. I am able to smell much BETTER or WORSE than I usually do (even though I do not have a cold) | 1 2 3 4 5 | NO YES, namely ________ |
| 15. It is as if my body, or a part of it, has disappeared | 1 2 3 4 5 | NO YES, namely ________ |
| 16. I cannot swallow, or can swallow only with great effort | 1 2 3 4 5 | NO YES, namely ________ |
| 17. I cannot sleep for nights on end, but remain very active during daytime | 1 2 3 4 5 | NO YES, namely ________ |
| 18. I cannot speak (or only with great effort) or I can only whisper | 1 2 3 4 5 | NO YES, namely ________ |
| 19. I am paralysed for a while | 1 2 3 4 5 | NO YES, namely ________ |
| 20. I grow stiff for a while | 1 2 3 4 5 | NO YES, namely ________ |
Before continuing, will you please check whether you have responded to all 20 statements?
Finally, please fill in the items below, placing an X beside what applies to you.
21. Age: _______ years
22. Sex:
- [ ] female
- [ ] male
23. Marital status:
- [ ] single
- [ ] married
- [ ] living together
- [ ] divorced
- [ ] widower/widow
24. Education: _______ number of years
25. Date: ________________
26. Name: ________________________________________
MOLDING TECHNICAL CERAMICS WITH POLYSACCHARIDES
Abstract
Technical ceramics represent a large, international market dominated by electronic applications, such as insulators, substrates, integrated circuit packages, capacitors, and magnets. Typical manufacturing operations involve blending ceramic powder with organic liquids (e.g., polyethylene wax, organic solvents) to form a slurry that is molded into a three-dimensional shape before it is dried and kiln-fired. There are serious problems with the pyrolysis of these organics prior to kiln firing: (i) slow and costly heating (e.g., 200 °C for one week) is required to avoid the formation of cracks and gas bubbles, (ii) toxic fumes are emitted, and (iii) residual carbon contaminates the final microstructure. Aqueous suspending media are needed to eliminate these organic carrier liquids and evaporate safely without causing cracks, shape distortion, and microstructure contamination in sintered parts. Our experiments indicate that various dextrins and maltodextrins are useful to achieve this goal because of their natural tendency to sorb to oxide powders in aqueous suspensions. Small concentrations of these starch hydrolysis products (< 5 wt%) significantly improve molding paste rheology and enable clean pyrolysis with minimal carbon contamination of microstructures. In addition, these polysaccharides form strong, interparticle bonds after water evaporation, which enables processing of strong, crack-free ceramics before they are kiln-fired. In this paper, we begin by discussing background information on surface chemical aspects of controlling the rheology of ceramic molding slurries. Experiments involving sedimentation, filtration, extrusion, rheology, and surface chemical analysis are then presented which illustrate the practical potential of maltodextrins and dextrins as rheological modifiers in ceramic manufacturing.
* Ames Laboratory and Department of Materials Science and Engineering, Iowa State University, Ames, USA
** Department of Chemistry and Physics, University of Agriculture, Krakow, Poland
*** Department of Carbohydrate Technology, University of Agriculture, Krakow, Poland
+ Hanyang University, Ceramic Materials Research Institute, 17 Haengdang-Dong, Seongdong-Gu, Seoul 133-791, South Korea
++ Materials Research and Testing Center, Hunan Light Industry College, Changsha, Hunan 410007, People's Republic of China
# Ames Laboratory is operated by Iowa State University under the contract number W-7405-eng-82 with the U.S. Department of Energy.
Introduction
Technical ceramics: market overview
In the past two decades, significant advances have been made in the synthesis of technical ceramic powders, affording an unprecedented degree of control over particle size, shape, and chemistry (Segal 1989 and 1996). In 1994, the market value of technical ceramic powder for electronic applications was estimated at 613 million U.S. dollars. By the year 2000, the market value of ceramic powders used in electronic applications is expected to reach 977 million U.S. dollars as a result of 8.1 percent average annual growth (Abraham 1996).
Oxide ceramics account for as much as 97% of the market for electronic ceramics. Aluminum oxide, used for the substrate materials in insulators and integrated circuits, constitutes the largest segment. Ceramic ferrite powders, used for making permanent and soft magnetic materials, constitute the second largest market. Ceramic titanates, the basis for electronic capacitors as well as piezoelectric components, constitute the third largest market (Abraham 1996).
Technical ceramic powders are also utilized in structural applications such as wear-resistant parts, mechanical seals, sliding bearings, cutting tool inserts, and surgical implants. Oxides, silicon carbide, and silicon nitride are three of the most common technical ceramics for structural applications. The market for technical ceramics powder for structural applications is smaller than that of electronic ceramics and was estimated at 45 million U.S. dollars in 1994 (Abraham 1996).
Plastic molding of ceramics: why polysaccharides?
Technical ceramics are typically produced by a sequential process of \((i)\) mixing ceramic powder with an organic liquid carrier (e.g. alcohols, ketones, polyethylene wax, vinyl additives) to form a moldable slurry, \((ii)\) forming the slurry into a three-dimensional shape (e.g., by injection molding or plastic shaping), \((iii)\) thermal treatment to evaporate or pyrolyze the liquid carrier, and \((iv)\) kiln firing. Large concentrations (up to 60 to 70 vol%) of the organic carrier liquid are typically needed to maintain plasticity during shape-forming. A transition to brittle, dilatant behavior occurs at lower concentrations of organic liquid (Pujari 1989; Franks and Lange 1996). Aside from the associated environmental hazards, another concern is that the diffusion of hydrocarbon liquids during heating and the gases produced during pyrolysis cause unwanted cracks and shape distortion in pre-sintered parts. Relatively long heat treatments (e.g., up to one week at \( \approx 200^\circ C \)) are typically needed to complete pyrolysis; more rapid heating produces internal stresses that can cause cracks and shape distortion (Stangle and Aksay 1990). In addition, evaporation and gaseous diffusion typically
fail to remove all of the decomposition products of pyrolysis; carbonaceous residues are frequently left behind and contaminate microstructures.
Aqueous suspending media are needed to eliminate these organic carrier liquids and evaporate safely without causing cracks, shape distortion, and microstructure contamination in sintered parts. Several industrial firms and academic researchers are currently working towards this goal. Much of this research entails fundamental studies of the surface chemical origins of the rheology of ceramic powder suspensions. In the paragraphs below, we shall review important aspects of these studies to help build an understanding of the advantages of dextrins and maltodextrins in formulating molding pastes for the ceramic industry.
Experience indicates that the simple addition of deionized water to ceramic powder results in essentially the same problem mentioned above: a dramatic reduction of slurry plasticity if the water concentration is beneath a critical value. As a result, molding becomes impossible because insufficient water tends to produce a stiff, brittle consistency. Experience indicates that this critical water concentration is usually much smaller for aqueous, clay suspensions as opposed to aqueous suspensions of oxide ceramics. Therefore, shapes that are molded from oxides are often too porous, which results in cracks, warpage, and excessive shrinkage during drying and kiln firing. In contrast, clay-based ceramics are much easier to manufacture at smaller concentrations of water, which in turn makes them less susceptible to warpage and cracks during drying. Unfortunately, clay-based ceramics are unsuitable for most consumer products that are made of technical ceramics, because they do not possess the high performance electrical, magnetic, and mechanical properties that are required.
The remarkable difference in rheology between oxides and clay minerals can be attributed to a large extent to inherent differences in particle morphology, the nature of the surface charge, and the adsorption of structured water molecules on clay surfaces (van Olphen 1977; Pashley and Israelachvili 1984; Lawrence 1978). The stacked-platelet morphology of clays, along with a superposition of long range, interparticle van der Waals (attractive) and short range, interparticle hydration (repulsive) forces, are thought to contribute to the high degree of plasticity of clays. In contrast, nonclay, technical ceramic powders do not generally have this same shape and surface-repulsion characteristic that aid particle rearrangement in clay-based systems.
Several researchers have recently shown that the key to improving the plasticity of concentrated suspensions of nonclay ceramic powders is to develop a weakly-flocculated state by coating powders with a substance producing a "clay-like" superposition of interparticle, long-range attraction and short-range repulsive forces. Two general methods are reported: (i) the hydration-layer approach (Velamakanni et al. 1990 and 1994; Chang et al. 1991 and 1994 a and b; Luther et al. 1994; Franks et al. 1995; Franks and Lange 1996) and (ii) adsorbate-mediated steric-hindrance (i.e., the creation
of a steric adlayer that inhibits complete mutual approach of individual particles). Before presenting our research, we shall briefly mention previous studies on rheological effects of adsorbate-mediated steric hindrance, studies that have led to the current interest in the use of polysaccharides for ceramic molding.
Yin et al. (1988) first introduced the method of weak-flocculation by adsorbate-mediated, steric-hindrance and reported the formation of high-density, low-viscosity suspensions with polymethacrylates adsorbed on alumina powders suspended in heptane or paraffin oil. Schilling (1992) and Bergström et al. (1992) subsequently reported significant improvements in the packing densities of centrifuged suspensions in which short-range, steric-repulsion forces were established by the adsorption of fatty acids to alumina powders suspended in decalin. Kramer and Lange (1994) performed similar experiments with alcohols adsorbed on silicon nitride powders suspended in an organic solvent.
More recent studies focused on developing aqueous analogs to the method of weak-flocculation by adsorbate-mediated steric-hindrance, analogs producing highly-concentrated and highly-plastic suspensions of oxide powders without the practical concerns of the organic solvents used in the studies mentioned above. For example, Leong et al. (1993) reported significant reductions in the viscosities of aqueous suspensions of concentrated zirconia by establishing short-range, steric repulsion through the adsorption of anionic molecules (sulfate, various phosphates) and simple, organic-acid anions (lactate, malate, and citrate). Luther et al. (1995) reported similar results by the use of ammonium citrate additives in aqueous alumina suspensions. Although focused on interparticle dispersion and not weak flocculation per se, there are also excellent reports on the use of sorbed polyelectrolytes in the preparation of highly concentrated and dispersed, aqueous alumina suspensions (Shanefield 1995; Hidber et al. 1996). In addition, Chan and Lin (1995) reported significant reductions in viscosity by the adsorption of stearic acid onto alumina powder surfaces in paraffin/polypropylene suspensions.
More recently, Schilling and co-workers (Schilling et al. 1995; Goel et al. 1996) reported significant improvements in the consolidation and rheology of aqueous, filter-pressed suspensions of submicron alumina powder, weakly-flocculated suspensions that were prepared with maltodextrin. It was reported that these suspensions exhibited a high degree of "clay-like" plasticity based on measurements of equibiaxial extensional rheometry. Schilling et al. (1998a) then performed surface chemical experiments to analyze the mechanism of enhanced fluidity caused by one model polysaccharide: a commercially available maltodextrin having an average molecular weight of 3,600 Daltons. It was concluded that this mechanism primarily entails reduction of interparticle attraction by adsorbate-mediated steric hindrance rather than double-layer, interparticle repulsion. Another benefit of this maltodextrin is that, during drying, it acts as
a binder to strengthen ceramic bodies (Schilling et al. 1998 b). We will highlight some of the results of these earlier studies in the paragraphs below. In addition, we will present new experimental data on the effects of polysaccharide molecular weight and concentration on suspension rheology and the strength of dried ceramics.
Finally, we should mention that there are a few publications on the use of different polysaccharides for ceramic processing (Sarkar and Greminger 1983, Fanelli et al. 1989; Shanefield 1995; Dmitriev et al. 1990; Bonomi et al. 1989; Mach et al. 1988; Panda et al. 1988). In fact, a broad range of polysaccharides are commonly utilized as additives for aqueous suspensions of colloidal, oxide powders in other applications including papermaking, mineral separation, and treatment of chemical waste (Tomasik and Schilling 1998 a and b).
**Experimental procedure**
Suspensions were prepared with deionized water and $\alpha$-Al$_2$O$_3$ powder having an equiaxed particle shape, an average particle size of 0.4 $\mu$m, and a specific surface area of 8.5 m$^2$ per gram (A-16 SG, Alcoa Corporation, Bauxite, Arkansas, U.S.A.). Kaolin suspensions were prepared with deionized water and Pioneer Airfloated Clay (Dry Branch Kaolin Company, Dry Branch, Georgia, U.S.A.), having an average particle size of 1.0–1.2 microns according to the supplier. The majority of the experiments below were conducted using a single, commercially-available maltodextrin, which we shall refer to as maltodextrin 040 (Maltrin 040, Grain Processing Corp., Muscatine, Iowa, U.S.A., average molecular weight 3,600 g/mole; average degree of polymerization 22.1). Chromatography measurements by the manufacturer revealed the following composition of maltodextrin 040: (i) 85% maltodextrins having a degree-of-polymerization greater than 10 and (ii) a 15% concentration of shorter chain maltodextrins, oligo- and monosaccharides. The manufacturer also reported that the dextrose equivalent value of maltodextrin 040 was 5.
Commercially-available maltodextrins and dextrins having a range of average molecular weights from 900 to 63,000 Daltons were used in the rheology experiments below (Table 1). Maltodextrins 100 and 200 were provided by the Grain Processing Corporation as described above. Four different dextrins, D1, D2, D3, and D4, were provided by Sigma Chemical Company of St. Louis, Missouri, U.S.A. The average molecular weights of these polysaccharides are listed in Table 1 and are based on chromatography measurements reported by each manufacturer. Dextrans, pullulan (Sigma Chemical Company, St. Louis, Missouri, U.S.A.), and soluble starch (Difco Corporation, Detroit, Michigan, U.S.A.) were used in the mechanical properties experiments below.
Suspensions were prepared by simply adding alumina powder to an aqueous solution of a given polysaccharide. In some instances, the solutions contained 0.01 M
NaCl. Each suspension was then sonicated for approximately two minutes using a sonifier with a 0.25 inch horn (CV17 Vibracell, Sonics and Materials Inc., Danbury, Connecticut, U.S.A.). The sonicator was operated at 60 to 80% of its maximum power output of 600 W. Suspensions were then poured into sealed, plastic bottles and placed on a shaker for 24 hours.
**Table 1**
Average molecular weights of polysaccharides
| Specimen | Average Molecular Weight (Daltons) |
|-------------------|-----------------------------------|
| Maltodextrin 200 | 900 |
| Maltodextrin 100 | 1800 |
| Maltodextrin 040 | 3600 |
| Dextrin D1 (potato) | 6650 |
| Dextrin D2 (corn) | 6450 |
| Dextrin D3 (corn) | 15,000 |
| Dextrin D4 (corn) | 63,000 |
**Sedimentation, filtration, and extrusion**
Gravity sedimentation experiments were performed to analyze the effects of the maltodextrin 040 concentration on the degree of consolidation (Schilling et al. 1998a). Each suspension was prepared with an initial volume fraction of alumina of $\phi_0 = 0.15$ and a zero total concentration of NaCl. One hundred milliliters of a given suspension were added to a 100 ml graduated cylinder (Pyrex glass), and the sediment height was recorded as a function of time. Each graduated cylinder was sealed to minimize evaporation.
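For readers who wish to reproduce this bookkeeping, the short sketch below applies the solids mass balance for a cylinder of constant cross-section ($\phi_0 H_0 = \phi H_s$); the 83 ml sediment reading is a hypothetical value chosen so that the result matches the maltodextrin-free density reported in Table 2.

```python
# Mass balance for gravity sedimentation in a constant-cross-section
# cylinder: the solids initially occupying height H0 at volume fraction
# phi0 settle into a sediment of height Hs, so phi0 * H0 = phi * Hs.

def sediment_volume_fraction(phi0, h0, hs):
    """Average solids volume fraction of the sediment.

    phi0 -- initial solids volume fraction of the suspension
    h0   -- initial suspension height (graduated-cylinder reading, ml)
    hs   -- final sediment height (ml)
    """
    return phi0 * h0 / hs

# A phi0 = 0.15 suspension filling 100 ml that settles to a hypothetical
# 83 ml sediment gives phi ~ 0.18, the maltodextrin-free value in Table 2.
print(sediment_volume_fraction(0.15, 100.0, 83.0))  # ~0.181
```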
We also compared the filtration behavior of the following suspensions: (i) alumina with 0.03 grams of maltodextrin 040 per gram of Al$_2$O$_3$, (ii) flocculated alumina near the isoelectric point without maltodextrin and (iii) kaolin without maltodextrin. Each suspension was prepared with a solids volume fraction of $\phi_0 = 0.2$ and a zero total concentration of NaCl. In case (ii) above, NH$_4$OH was added to suspensions to raise the pH to 8.6. In case (iii) above, kaolin powder was simply added to deionized water and then stirred for 2 hours before filtration (Schilling et al. 1998a).
Each suspension was poured into a filter-press that was precision machined from an acrylic tube (5.08 cm inside diameter) fitted with filter paper membranes and a porous, polyethylene piston. The applied pressure remained constant, and the movement of the piston was monitored as a function of time. In each filtration experiment, a single, liquid-saturated cake was removed from the filter press after the piston stopped
moving, and its green density was immediately analyzed by an oil-immersion technique based on the Archimedes principle (Schilling et al. 1995; Goel et al. 1996). Three specimens of each composition were filter-pressed and evaluated by this procedure in order to confirm repeatability of the packing-density measurements.
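The sketch below illustrates the buoyancy arithmetic behind such a measurement. It is only a sketch under simplifying assumptions, namely that the solids mass in each cake is known from the batch formulation and that the saturated cake is weighed in air and again while submerged in the immersion oil; the oil density used here is an assumed value, not one reported in the papers cited above.

```python
# Archimedes bookkeeping for the green density of a liquid-saturated
# filter cake: buoyancy in oil gives the bulk volume, and the known
# solids mass gives the skeletal volume of the alumina powder.

RHO_ALUMINA = 3.98  # g/cm^3, alpha-Al2O3 (handbook value)
RHO_OIL = 0.85      # g/cm^3, assumed immersion-oil density

def green_volume_fraction(m_solids, m_air, m_oil):
    """Solids volume fraction of a saturated cake.

    m_solids -- mass of alumina in the cake (g), from the formulation
    m_air    -- mass of the saturated cake weighed in air (g)
    m_oil    -- apparent mass of the cake submerged in oil (g)
    """
    v_bulk = (m_air - m_oil) / RHO_OIL  # buoyancy yields bulk volume
    v_solid = m_solids / RHO_ALUMINA    # skeletal volume of the powder
    return v_solid / v_bulk

# Hypothetical weighings consistent with the ~57 vol% cakes reported below:
print(green_volume_fraction(m_solids=40.0, m_air=47.6, m_oil=32.6))  # ~0.57
```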
Liquid-saturated filter-cakes were examined by extrusion measurements and Benbow analysis (Schilling et al. 1998a; Benbow et al. 1987 and 1989). We performed these experiments with a stainless steel, piston extruder having a barrel diameter, $D_o$, of 12.7 mm. Square-entry dies of circular cross-section were used with die-land diameters, $D$, of 1 and 2 mm and die-land lengths, $L$, of 10 and 15.8 mm. $L/D$ ratios of 5:1, 7.9:1, 10:1, and 15.8:1 were obtained.
The extruder was manually filled with a given filter cake and placed in a mechanical testing machine, which operated in the compression mode and subjected the piston to a constant axial velocity. Extrusion data were fitted to the Benbow equation, which describes the relationship between the piston velocity and the pressure drop, both in the die-entry region and in the die land (Benbow et al. 1987 and 1989):
$$P_{tot} = \frac{4F}{\pi D_o^2} = P_{de} + P_{dl} = 2 \ln \frac{D_o}{D} \left[ \tau_b + k_b V^n \right] + 4 \frac{L}{D} \left[ \tau_f + k_f V^m \right]$$
In this expression, $P_{tot}$ is the total extrusion pressure, $P_{de}$ is the pressure drop in the die entry region, $P_{dl}$ is the pressure drop in the die land, $V$ is the velocity, $\tau$ is the yield strength, $k$ is a constant, the subscripts $b$ and $f$ refer to the body and the die-land slip film, respectively, and $n$ and $m$ are the shear-thinning exponents for the body and film, respectively. In all experiments, we used the Benbow assumption that $m = n = 1.0$ (Benbow et al. 1989).
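To make the contribution of each term concrete, the sketch below evaluates this equation for the kaolin parameters reported in Table 4 below, using the 12.7 mm barrel and one of the die geometries above; the extrudate velocity is an assumed value, and $m = n = 1$ as in the Benbow analysis.

```python
import math

def benbow_pressure(v, d0, d, length, tau_b, k_b, tau_f, k_f, n=1.0, m=1.0):
    """Total extrusion pressure (MPa) from the Benbow equation.

    v      -- extrudate velocity in the die land (m/s)
    d0, d  -- barrel and die-land diameters (mm)
    length -- die-land length (mm)
    tau_*  -- yield terms (MPa); k_* -- velocity factors (MPa.s/m)
    """
    p_die_entry = 2.0 * math.log(d0 / d) * (tau_b + k_b * v**n)
    p_die_land = 4.0 * (length / d) * (tau_f + k_f * v**m)
    return p_die_entry + p_die_land

# Kaolin parameters from Table 4 with the 12.7 mm barrel, a 2 mm die,
# L/D = 5, and an assumed velocity of 10 mm/s -> roughly 2.4 MPa total:
print(benbow_pressure(v=0.01, d0=12.7, d=2.0, length=10.0,
                      tau_b=0.42, k_b=3.7, tau_f=0.03, k_f=0.69))
```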
**Rheology studies**
Experiments were performed to determine the effects of the polysaccharide concentration and molecular weight on rheological properties (Sikora et al. 1998). Stock solutions of deionized water, 0.01 M NaCl, and varying concentrations of a given polysaccharide were initially prepared. A weighed amount of alumina powder ($\phi_o = 0.2$) was then added to each solution, followed by 24 hours of shaking in sealed, plastic containers.
Rheological measurements were performed at room temperature with a computer-controlled rheometer (RheoStress RS 75, Gebrüder Haake GmbH, Karlsruhe, Germany) having a double-gap cylinder (DG 41; DIN 54453). Each specimen was subjected to an increasing shear rate starting at 1 s$^{-1}$ and ending at 500 s$^{-1}$. The shear rate was subsequently swept back to 1 s$^{-1}$. This process of sweeping the shear rate up and down was repeated on each specimen two more times in order to verify repeatability. In addition, we verified repeatability by performing rheological measurements on two additional specimens of each composition. A total of 87 separate specimens were analyzed in the rheometer.
Rheological measurements were expressed in terms of the shear stress $\tau$ as a function of the shear rate $\dot{\gamma}$. These measurements were fitted to the Herschel-Bulkley model $\tau = \tau_o + K \dot{\gamma}^n$ using a computer (Steffe, 1996). In this expression, $\tau_o$ is the yield stress, $K$ is the consistency coefficient, and $n$ is the flow behavior index. This model is convenient, because it describes the rheological behavior of a broad range of fluids that are either Newtonian ($n = 1$), shear-thinning ($0 < n < 1$), or shear-thickening ($n > 1$). A computer was used to statistically analyze the parameters $\tau_o$, $K$, and $n$ as a function of the polysaccharide concentration and molecular weight (Sikora et al. 1998).
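A minimal sketch of such a fit is shown below using nonlinear least squares from scipy; the flow curve is synthetic, generated from illustrative parameters chosen within the pseudoplastic ranges reported in the results section, and is not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress (Pa) versus shear rate (1/s): tau = tau0 + K*gamma_dot**n."""
    return tau0 + K * gamma_dot**n

# Synthetic stand-in for one up-sweep from 1 to 500 1/s; the generating
# parameters lie within the pseudoplastic ranges quoted in the results.
gamma_dot = np.logspace(0.0, np.log10(500.0), 30)
tau = herschel_bulkley(gamma_dot, 1.5, 8.0, 0.22)
tau += np.random.default_rng(0).normal(0.0, 0.2, tau.size)  # measurement noise

(tau0, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau,
                            p0=[1.0, 5.0, 0.5], bounds=(0.0, np.inf))
print(f"tau0 = {tau0:.2f} Pa, K = {K:.2f} Pa.s^n, n = {n:.2f}")
```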
**Surface chemical analysis**
In an earlier study, we performed sorption and acoustophoresis measurements to study whether the enhanced rheological behavior of maltodextrin-alumina suspensions was attributable to adsorbate-mediated steric hindrance, electrostatic, interparticle repulsion, or both (Schilling et al. 1998a). Sorption isotherm measurements entailed centrifugation of aqueous suspensions prepared with varying concentrations of maltodextrin 040. Maltodextrin concentrations in centrifuged supernatants were measured spectrophotometrically by the addition of 1 ml of 5% phenol and 5 ml of concentrated sulfuric acid to 1 ml of the maltodextrin solution to form hydroxymethyl furfural, which absorbs strongly at 488 nm (Dubois et al. 1956). The Smoluchowski zeta potentials of several suspensions were calculated using measurements of the electrokinetic sonic amplitude as a function of frequency (Acoustosizer™, Matec Applied Sciences Corp., Hopkinton, Massachusetts, U.S.A.). Suspension preparation entailed procedures in which we varied the maltodextrin 040 concentration and the pH by the dropwise addition of reagent grade HCl or NH₄OH. All suspensions were prepared at 0.01 M NaCl and $\phi = 0.2$.
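Assays of this kind are typically quantified against a calibration curve built from standards of known concentration. The sketch below assumes a linear Beer-Lambert response; the standard concentrations and absorbances are hypothetical placeholders, not values from the study.

```python
import numpy as np

def fit_calibration(conc_mg_ml, a488):
    """Least-squares slope and intercept of a linear A488-versus-
    concentration calibration from standard maltodextrin solutions."""
    slope, intercept = np.polyfit(conc_mg_ml, a488, 1)
    return slope, intercept

def supernatant_concentration(a488, slope, intercept):
    """Invert the calibration to recover concentration (mg/ml)."""
    return (a488 - intercept) / slope

# Hypothetical standards and absorbances, used only to illustrate the fit:
slope, intercept = fit_calibration([0.00, 0.02, 0.04, 0.08],
                                   [0.00, 0.22, 0.45, 0.88])
print(supernatant_concentration(0.50, slope, intercept))  # ~0.045 mg/ml
```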
**Mechanical properties**
Experiments were performed to determine the effects of the polysaccharide molecular weight and concentration on the tensile strength of molded alumina specimens. These specimens were prepared using suspensions containing $\phi_o = 0.2$ alumina, 0.01 M NaCl, and varying concentrations of a single type of polysaccharide. Several types of polysaccharide were investigated, including soluble starch, pullulan, dextrans, and maltodextrins. Each slurry underwent 24 hours of shaking in a sealed, plastic bottle. Slip-casting was subsequently used to prepare disc-shaped specimens (diameter ~ 1.3 cm and thickness ~ 0.25 cm) for mechanical strength measurements. Slip casting is
a common ceramic molding process that entails pouring a suspension onto a gypsum mold (Aksay and Schilling 1984). Capillary suction of the gypsum serves to consolidate the suspensions by filtration (gypsum contains fine pores that are much smaller than the alumina powder). Each cast specimen was dried by storage at room temperature for one week prior to mechanical testing. The tensile strength of each specimen was measured by the diametric compression method (Bortzmeyer 1992). At least 5 measurements of tensile strength were performed on specimens of each type.
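The diametric (Brazilian) compression test relates the failure load $F$ of a disc of diameter $D$ and thickness $t$ to an indirect tensile strength through the standard relation $\sigma_t = 2F/(\pi D t)$. The sketch below applies this relation to the disc geometry quoted above, with a hypothetical failure load chosen to land near the baseline strengths reported in the results.

```python
import math

def diametric_tensile_strength(force_n, diameter_m, thickness_m):
    """Indirect tensile strength, sigma = 2F / (pi * D * t), in Pa."""
    return 2.0 * force_n / (math.pi * diameter_m * thickness_m)

# Disc geometry from the text (~1.3 cm diameter, ~0.25 cm thick) and a
# hypothetical 13 N failure load give roughly 0.25 MPa, near the
# baseline strengths reported for specimens without polysaccharide.
sigma = diametric_tensile_strength(13.0, 0.013, 0.0025)
print(sigma / 1e6, "MPa")  # ~0.25
```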
**Experimental results**
**Sedimentation, filtration, and extrusion**
Significant increases in sediment density resulted after adding small amounts of maltodextrin 040 to suspensions of alumina powder ($\phi_0 = 0.15$) and deionized water (Table 2). The simple addition of alumina powder to deionized water without maltodextrin resulted in a strongly flocculated condition and the formation of a low-density sediment ($\phi = 0.18$). The alumina volume fraction sharply increased from $\phi = 0.25$ to 0.47 as the maltodextrin concentration increased from 0.01 to 0.06 grams of maltodextrin 040 per gram of Al$_2$O$_3$.
Volume-averaged densities of liquid-saturated filter-cakes are shown as a function of the consolidation pressure in Table 3. Suspensions of strongly-flocculated alumina near the isoelectric point without maltodextrin (pH 8.6) exhibited the lowest, alumina volume-fraction of $\phi = 0.49$ when consolidated at a pressure of only 0.54 MPa. A much higher pressure of 3.5 MPa was needed to raise the alumina volume-fraction of this same slurry system to $\phi = 0.57$. In contrast, kaolin cakes exhibited a solid-volume-fraction of $\phi = 0.6$ when consolidated at a pressure of only 0.54 MPa. At the same, low consolidation pressure of 0.54 MPa, maltodextrin-alumina cakes exhibited a slightly lower, alumina volume-fraction of $\phi = 0.57$.
**Table 2**
Alumina sediment densities ($\phi_0 = 0.15$)
| Maltodextrin 040 Concentration (g/g Al$_2$O$_3$) | Volume Fraction Alumina |
|-----------------------------------------------|--------------------------|
| 0 | 0.18 |
| 0.01 | 0.25 |
| 0.03 | 0.43 |
| 0.06 | 0.47 |
| 0.09 | 0.40 |
**Table 3**

Filter-cake properties \((\phi_0 = 0.2)\)
| Suspension type | Solids volume fraction, \( \phi \) | Consolidation pressure, MPa |
|----------------------------------------|-----------------------------------|-----------------------------|
| Al\(_2\)O\(_3\), pH 8.6 | 0.49 | 0.54 |
| Al\(_2\)O\(_3\), pH 8.6 | 0.57 | 3.5 |
| 0.03 g maltodextrin / g Al\(_2\)O\(_3\) | 0.57 | 0.54 |
| Kaolin | 0.60 | 0.54 |
As shown in Table 4, strongly-flocculated alumina suspensions without maltodextrin at pH 8.6 displayed the most "clay-like" extrusion at an alumina volume-fraction of \( \phi = 0.49 \): they exhibited yield stresses and velocity factors that were similar to those of the kaolin suspensions. Raising the alumina concentration of the pH 8.6 suspensions to \( \phi = 0.57 \) produced specimens that were too stiff to be extruded. Alumina specimens that were prepared with 0.03 grams of maltodextrin 040 per gram of Al\(_2\)O\(_3\) (at the same alumina concentration of \( \phi = 0.57 \)) were easily extruded, although they exhibited higher yield stresses than all of the other systems in Table 4. In addition, the alumina specimens containing maltodextrin had yield stresses and velocity factors that were several times higher than the corresponding values for kaolin.
**Table 4**

Extrusion summary

| Specimen type | Solids volume fraction, \( \phi \) | \( \tau_b \) (MPa) | \( k_b \) (MPa·s·m\(^{-1}\)) | \( \tau_f \) (MPa) | \( k_f \) (MPa·s·m\(^{-1}\)) |
|---------------|-----------------------------------|--------------------|------------------------------|--------------------|------------------------------|
| Kaolin | 0.60 | 0.42 | 3.7 | 0.03 | 0.69 |
| Al\(_2\)O\(_3\), pH 8.6 | 0.49 | 0.34 | 2.8 | 0.05 | 1.0 |
| Al\(_2\)O\(_3\), pH 8.6 | 0.57 | * | * | * | * |
| Al\(_2\)O\(_3\) + 0.03 g maltodextrin / g Al\(_2\)O\(_3\) | 0.57 | 1.72 | 16.4 | 0.13 | 4.26 |

* These samples were too stiff to be extruded.
**Rheology studies**
In the absence of polysaccharide, alumina suspensions near the isoelectric point commonly exhibited pseudoplastic behaviour (Figure 1). Herschel-Bulkley parameters for these suspensions are: \( 0.2 < \tau_o < 3 \) Pa, \( 6.1 < K < 10.18 \) Pa.s\(^n\), and \( 0.18 < n < 0.25 \). The addition of nearly all of the polysaccharides in this study dramatically suppresses this pseudoplastic behaviour. Small amounts (a few weight per cent) of these polysaccharides typically produce a major reduction in the flow stress along with a transition from pseudoplastic to Newtonian-like behaviour. For example, Figure 1 illustrates this trend for a suspension containing 0.03 grams of maltodextrin 040 per gram of alumina. In this case, Herschel-Bulkley parameters are as follows: $0.01 < \tau_o < 0.03$ Pa, $0.004 < K < 0.005$ Pa.s$^n$, and $0.89 < n < 0.95$.
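To place these two parameter sets on a common scale, one can compare apparent viscosities, $\eta_{app} = \tau/\dot{\gamma} = (\tau_o + K\dot{\gamma}^n)/\dot{\gamma}$, at a representative shear rate. The sketch below uses mid-range values from the intervals quoted above and is illustrative only; it suggests a roughly 70-fold reduction in apparent viscosity at 100 s$^{-1}$.

```python
def apparent_viscosity(gamma_dot, tau0, K, n):
    """eta_app = tau / gamma_dot = (tau0 + K*gamma_dot**n) / gamma_dot, Pa.s."""
    return (tau0 + K * gamma_dot**n) / gamma_dot

# Mid-range Herschel-Bulkley parameters from the two cases above, at 100 1/s:
no_additive = apparent_viscosity(100.0, 1.5, 8.0, 0.22)    # ~0.24 Pa.s
with_040 = apparent_viscosity(100.0, 0.02, 0.0045, 0.92)   # ~0.003 Pa.s
print(no_additive, with_040, no_additive / with_040)       # ratio ~70
```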
As shown in Figure 2, the consistency coefficient $K$ rapidly decreased upon the addition of each of the polysaccharides in this study. For example, all of the polysaccharides except D4 exhibit a sharp reduction in $K$ as the solution concentration increased from 0 to 0.01 gram of polysaccharide per gram of alumina. For these specimens, $K \sim 0$ for all of the higher concentrations of polysaccharide. In contrast, the addition of the polysaccharide with the largest molecular weight (D4) produces more of a gradual decrease in $K$ as the polysaccharide concentration increases. In this case, $K$ approaches zero only when the concentration of D4 exceeds 0.05 g/g alumina.
**Surface chemical analysis**
Let us define $c_o$ as the maltodextrin 040 concentration of a given stock solution before adding $\phi_o = 0.2$ alumina powder, $c_s$ as the equilibrium concentration of maltodextrin 040 sorbed to alumina after adding the powder, and $c_f$ as the equilibrium concentration of free maltodextrin 040 remaining in solution after adding the powder. We showed in an earlier publication that maximum sorption is achieved at a minimum $c_o$ of 0.02 grams of maltodextrin 040 per gram of alumina (Schilling et al. 1998a). At $c_o = 0.02$ grams of maltodextrin 040 per gram of alumina, approximately 50% of $c_o$ sorbs to alumina, whereas the other 50% remains in solution.
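The sorbed amount follows from a simple mass balance over the supernatant, since any added maltodextrin not found free in solution is counted as sorbed. A minimal sketch, with the free concentration chosen to match the roughly 50% sorption noted above:

```python
def sorbed_concentration(c_o, c_f):
    """Sorption mass balance: c_s = c_o - c_f, where c_o is the added
    and c_f the free (supernatant) maltodextrin concentration, both in
    grams of maltodextrin 040 per gram of alumina."""
    return c_o - c_f

# At c_o = 0.02 g/g, roughly half of the added maltodextrin sorbs
# (Schilling et al. 1998a), i.e. c_f ~ 0.01 g/g and c_s ~ 0.01 g/g:
print(sorbed_concentration(0.02, 0.01))  # 0.01
```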
At pH 10 and $c_o = 0.01$ grams of maltodextrin 040 per gram of alumina, acoustophoresis indicated a Smoluchowski zeta potential of -7.6 mV (Table 5). At the same pH of 10, a larger $c_o$ of 0.03 grams per gram reduced the magnitude of the zeta potential to -3.6 mV. At a lower pH of 7, we observed a Smoluchowski zeta potential of +8.7 mV without maltodextrin; a $c_o$ of 0.01 grams per gram reduced this potential to +2.5 mV, and a larger $c_o$ of 0.03 grams per gram reduced it further to +1 mV.
Since we previously observed maltodextrin sorption to alumina, it is not surprising that acoustophoresis revealed a decreasing surface charge at pH 10 upon raising $c_o$ from 0.01 to 0.03 grams of maltodextrin 040 per gram of alumina. We should mention that a pH of 9.7 was routinely observed in alumina – maltodextrin 040 slurries that were used in all the sedimentation, filtration, and rheology experiments above. Acoustophoresis revealed a relatively small zeta potential of -3.6 mV under similar conditions (pH 10, $\phi = 0.2$, 0.01 M NaCl, $c_o = 0.03$ grams of maltodextrin 040 per gram of
alumina). This small potential suggests that electrostatic, interparticle repulsion is not a primary mechanism for the increased consolidation and fluidity upon adding maltodextrin to alumina. Instead, the sorption data suggest that sorbate-mediated steric hindrance plays a major role in this regard.
Fig. 1 Aqueous suspensions of 20 vol% alumina exhibit a transition from strongly-flocculated, pseudoplastic behavior to a Newtonian-like state upon the addition of 0.03 grams of maltodextrin 040 per gram of alumina (Sikora et al. 1998).
Fig. 2 Increasing the polysaccharide concentration significantly enhanced suspension fluidity, as apparent from the reductions in consistency coefficient. The largest molecular weight dextrin (D4) was least effective in this regard (Sikora et al. 1998).
**Table 5**

Acoustophoresis measurements
| Maltodextrin concentration (g/g Al₂O₃) | pH | Mean Smoluchowski zeta potential (mV) |
|--------------------------------------|-----|--------------------------------------|
| 0.01 | 10 | -7.6 |
| 0.03 | 10 | -3.6 |
| 0.01 | 7 | 2.5 |
| 0.03 | 7 | 1.0 |
| 0 | 7 | 8.7 |
Fig. 3. Tensile strength of slip cast and dried alumina as a function of the polysaccharide molecular weight. Baseline specimens were prepared without polysaccharide and had the lowest strength. All other specimens were prepared with 0.03 grams of a given polysaccharide per gram of alumina.
**Mechanical properties**
Diametric compression measurements indicated tensile strengths between 0.2 and 0.5 MPa for baseline alumina specimens that were prepared without polysaccharide (Figure 3). The strength increased to approximately 0.75 MPa upon the addition of 0.03 grams of maltodextrin per gram of alumina. Figure 3 illustrates the general trend of increasing tensile strength upon increasing the molecular weight of the polysaccharide above 10,000 Daltons. Below 10,000 Daltons, the strength does not appear to be influenced by the molecular weight. Specimens containing pullulan and the largest-molecular-weight dextran were approximately 10 times stronger than the baseline specimens made without polysaccharide. We should mention that we also observed systematic increases in tensile strength as the concentration of each polysaccharide increased. Finally, we should mention that we had previously reported ultrasonic velocity measurements illustrating that the simple addition of 0.03 grams of maltodextrin 040 per gram of alumina results in dried filter cakes with high elastic stiffness (13.8 GPa, as opposed to 8.2 GPa for specimens prepared without maltodextrin 040) (Schilling et al. 1998b).
**Conclusions**
Aqueous suspensions of submicron alumina powder exhibited striking improvements in consolidation and rheological properties after adding small amounts of various maltodextrins and dextrins. These results provide strong support for the use of these additives in technical ceramic manufacturing.
High-density sediments (47 vol%) were produced by gravity settling, and high-density filter-cakes (57 vol%) were produced at a low filtration pressure (0.54 MPa). In contrast, alumina filter-cakes that were flocculated at the isoelectric point without maltodextrin required a filtration pressure more than six times greater (3.5 MPa) to achieve the same 57 vol% density.
Maltodextrin-alumina filter-cakes were easily extrudable with Benbow parameters comparable to but higher than those of kaolin at approximately the same packing density of 57 vol%. Alumina filter-cakes without maltodextrin at the same 57 vol% density were too stiff to be extruded.
Rheometry experiments indicated a strongly-flocculated, Bingham-plastic response upon adding 20 vol% alumina powder to aqueous solutions of 0.01 M NaCl without maltodextrin. In contrast, the addition of 3 wt% of various dextrins and maltodextrins resulted in low viscosities, Newtonian-like behavior, and Bingham yield stresses of approximately zero.
Sorption measurements indicated that maltodextrin sorption to alumina underlies the enhanced consolidation and flow behaviour of these specimens. Acoustophoresis data support the hypothesis that sorbate-mediated steric-hindrance, rather than electrostatic, interparticle repulsion, plays a significant role in enhancing the consolidation and plastic flow behavior.
The addition of 0.03 grams of a given polysaccharide per gram of alumina significantly increased the tensile strength of slip cast and dried alumina. Specimens containing pullulan and dextran were approximately 10 times stronger than the baseline specimens made without polysaccharide.
**Acknowledgements**
The authors wish to thank the U.S. National Science Foundation, the Polish Research Council, and the Office of Basic Energy Sciences at the U.S. Department of Energy for supporting this research. We also wish to thank H. Goel, R. Smith, J. Jane, L. Ukrainczyk, and R. Bellman for valuable assistance.
REFERENCES
[1] Abraham T.: "Advanced Ceramic Powder and Nano-Sized Ceramic Powder: An Industry and Market Overview," in Ceramic Transactions Volume 62, Science, Technology, and Commercialization of Powder Synthesis and Shape Forming Processes, edited by J.J. Kingsley, C.H. Schilling, J.H. Adair (American Ceramic Society, Westerville, Ohio, U.S.A., 1996), 3-14.
[2] Aksay I.A., Schilling C.H.: "Colloidal Filtration Route to Uniform Microstructures," in Ultrastructure Processing of Ceramics, Glasses, and Composites, edited by L.L. Hench, D.R. Ulrich (John Wiley and Sons, New York, 1984), 439-447.
[3] Benbow J.J., Lawson T.A., Oxley E.W., Bridgwater J.: "Prediction of Paste Extrusion Pressure," Am. Ceram. Soc. Bull., 68, 10, 1989, 1821-24.
[4] Benbow J.J., Oxley E.W., Bridgwater J.: "The Extrusion Mechanics of Pastes - The Influence of Paste Formulation on Extrusion Parameters," Chem. Eng. Sci., 42, 9, 1987, 2151-62.
[5] Bergström L., Schilling C.H., Aksay I.: "Consolidation Behavior of Flocculated Alumina Suspensions," J. Am. Ceram. Soc., 75, 12, 1992, 3305-14.
[6] Bonomi Z., Persay G., Schleiffer E.: "Manufacture of Alumina-Based Refractory Materials," Hungarian Patent, Hung. Teljes HU 49, 103, 28 August 1989.
[7] Bortzmeyer D.: "Tensile Strength of Ceramic Powders," J. Mater. Sci., 27, 1992, 3305-8.
[8] Chan T.-Y., Lin S.-T.: "Effects of Stearic Acid on the Injection Molding of Alumina," J. Am. Ceram. Soc., 78, 10, 1995, 2746-52.
[9] Chang J.C., Velamakanni B., Lange F., Pearson D.: "Centrifugal Consolidation of Al₂O₃ and Al₂O₃/ZrO₂ Composite Slurries vs Interparticle Potentials: Particle Packing and Mass Segregation," J. Am. Ceram. Soc., 74, 9, 1991, 2201-04.
[10] Chang J.C., Lange F.F., Pearson D.S.: "Viscosity and Yield Stress of Alumina Slurries Containing Large Concentrations of Electrolyte," J. Am. Ceram. Soc., 77, 1, 1994a, 19-26.
[11] Chang J.C., Lange F.F., Pearson D.S., Pollinger J.P.: "Pressure Sensitivity for Particle Packing of Aqueous Alumina Slurries vs. Interparticle Potential," J. Am. Ceram. Soc., 77, 5, 1994b, 1357-60.
[12] Dmitriev A., Podkovyrkin M.I., Beloborodova L.G., Kleshcheva T.M., Gorskaya E.S.: "Molding of Nonplastic Corundum Bodies," Ogneupory, 9, 1990, 30-33.
[13] Dubois M., Gilles K.A., Hamilton J.K., Rebers P.A., Smith F.: "Colorimetric Method for Determination of Sugars and Related Substances," Analytical Chemistry, 28, 1956, 350-56.
[14] Fanelli J., Silvers R.D., Frei W.S., Burlew J.V., Marsh G.B.: "New Aqueous Injection Molding Process for Ceramic Powders," J. Am. Ceram. Soc., 72, 10, 1989, 1833-1836.
[15] Franks G.V., Velamakanni B.V., Lange F.F.: "Vibraforming and In-Situ Flocculation of Consolidated, Coagulated, Alumina Slurries," J. Am. Ceram. Soc., 78, 5, 1995, 1324-28.
[16] Franks G.V., Lange F.F.: "Plastic-to Brittle Transition of Saturated, Alumina Powder Compacts," J. Am. Ceram. Soc., 79, 12, 1996, 3161-68.
[17] Goel H., Schilling C.H., Biner S.B., Moore J.A., Lograsso B.K.: "Plastic Shaping of Aqueous Alumina Suspensions with Saccharides and Dicarboxylic Acids," in Science, Technology, and Commercialization of Powder Synthesis and Shape Forming Processes, edited by J. J. Kingsley, C. H. Schilling, and J. H. Adair (American Ceramic Society, Westerville, Ohio 1996), 241-54.
[18] Hidber P.C., Graule T.J., Gauckler L.J.: "Citric Acid - A Dispersant for Aqueous Alumina Suspensions," J. Am. Ceram. Soc., 79, 7, 1996, 1857-67.
[19] Kono T., Otsuka T.: "Concrete Compositions Suitable for Application by Compacting," Jpn. Kokai Tokyo Koho JP 02,239,144, 21 September 1990.
[20] Kramer T., Lange F.F.: "Rheology and Particle Packing of Chem- and Phys-Adsorbed Alkylated Silicon Nitride Powders," J. Am. Ceram. Soc., 77, 4, 1994, 922-928.
[21] Lawrence W.G.: "The Structure of Water and its Role in the Clay-Water System," in Ceramic Processing Before Firing, edited by G. Y. Onoda, Jr. and L. L. Hench (John Wiley and Sons, New York, 1978), 193-210.
[22] Leong Y.K., Scales P.J., Healy T.W., Boger D.V., Ruscall R.: "Rheological Evidence of Adsorbate-mediated Short-Range Steric Forces in Concentrated Dispersions," J. Chem. Soc., Faraday Trans., 89, 1993, 2473-78.
[23] Luther E.P., Kramer T.M., Lange F.F., Pearson D.S.: "Development of Short-Range Repulsive Potentials in Aqueous, Silicon Nitride Slurries," J. Am. Ceram. Soc., 77, 4, 1994, 1047-51.
[24] Luther E.P., Yanez J.A., Franks G.V., Lange F.F., Pearson D.S.: "Effect of Ammonium Citrate on the Rheology and Particle Packing of Alumina Slurries," J. Am. Ceram. Soc., 78, 6, 1995, 1495-5000.
[25] Mach Z., Wendler L., Plucar A., Kostkova A.: "Compositions for Cordieritic Kiln Furniture," Czech. Patent CS 248,912, 15 January 1988.
[26] Panda J., Sahoo N., Agarwal S.: "Manufacture of High-Alumina Gas-Permeable Refractory Shaped Articles," Indian Patent IN 162,517, 04 June 1988.
[27] Pashley R.M., Israelachvili J.N.: "DLVO and Hydration Forces Between Mica Surfaces in Mg^{+2}, Ca^{+2}, Sr^{+2}, and Ba^{+2} Chloride Solutions," J. Colloid Interface Sci., 97, 1984, 446-55.
[28] Pujari V.K.: "Effect of Powder Characteristics on Compounding and Green Microstructure in the Injection Molding Process," J. Am. Ceram. Soc., 72, 10, 1989, 1981-1984.
[29] Sarkar N., Greninger G.K. Jr.: "Methylcellulose Polymers as Multifunctional Processing Aids in Ceramics," Bull. Am. Ceram. Soc., 62, 11, 1988, 1280-1284.
[30] Schilling H.: "Plastic Shaping of Colloidal Ceramics," Ph.D. thesis, University of Washington, Seattle, Washington, 1992).
[31] Schilling H., Biner S.B., Goel H., Jane J.L.: "Plastic Shaping of Aqueous Alumina Suspensions with Sucrose and Maltodextrin Additives," J. Environmental Polymer Degradation, 3, 3, 1995, 153-60.
[32] Schilling H., Bellman R.A., Smith R.M., Goel H.: "Plasticizing Aqueous Suspensions of Concentrated Alumina with Maltodextrin Sugar," accepted in J. Am. Ceram.Soc., 1998a.
[33] Schilling C.H., Garcia V.J., Smith R.M., Roberts R.A.: "Ultrasonic- and Mechanical Behavior of Green- and Partially-Sintered Alumina: Effects of Slurry Consolidation Chemistry," accepted in J. Am. Ceram. Soc., 1998b.
[34] Segal: "Chemical Preparation of Powders," in Materials Science in Technology, Volume 17A, Processing of Ceramics, Part 1, edited by R. J. Brook, R. W. Cahn, P. Haasen, and I. J. Kramer (VCH Verlagsgesellschaft mbH, Weinheim, Federal Republic of Germany, 1996), 70-98.
[35] Segal D.: Chemical Synthesis of Advanced Ceramic Materials (Cambridge University Press, Cambridge 1989).
[36] Sikora M., Schilling C.H., Garcia V.J., Li C.P., Tomasik P.: "Effects of Polysaccharide Concentration and Molecular Weight on the Rheology of Alumina Suspensions," submitted to J. Am. Ceram. Soc., 1998.
[37] Shanefield D.J.: Organic Additives and Ceramic Processing (Kluwer Academic Publishers, Boston, 1995).
[38] Stangle C., Aksay I.A.: "Simultaneous Momentum, Heat and Mass Transfer with Chemical Reaction in a Disordered Porous Medium: Application to Binder Removal from a Ceramic Green Body," Chem. Engr. Sci., 45, 7, 1990, 1719-31.
[39] Steffe J.F.: Rheological Methods in Food Process Engineering (Freeman Press, East Lansing, Michigan, 1996), 19-23.
[40] Tomasik P., Schilling C.H.: "Starch Complexes, Part 1: Complexes with Inorganic Guests," Advances in Carbohydrate Chemistry and Biochemistry, 52, 1998a, in press.
[41] Tomasik P., Schilling C.H.: "Starch Complexes, Part 2: Complexes with Organic Guests," Advances in Carbohydrate Chemistry and Biochemistry, 52, 1998b, in press.
[42] van Olphen: An Introduction to Clay Colloid Chemistry, Second Edition (John Wiley and Sons, New York, 1977).
[43] Velamakanni B.V., Chang J.C., Lange F.F., Pearson D.S.: "New Method for Efficient Colloidal Particle Packing via Modulation of Repulsive Lubricating Hydrating Forces," Langmuir, 6, 1990, 1323-25.
[44] Velamakanni B.V., Lange F.F., Zok F.W., Pearson D.S.: "Influence of Intertparticle Forces on the Rheological Behavior of Pressure-Consolidated Alumina Particle Slurries," J. Am. Ceram. Soc., 77, 1, 1994, 216-20.
[45] Yin T.K., Aksay I.A., Eichinger B.E.: "Lubricating Polymers for Powder Compaction," in Ceramic Transactions Volume 1, Ceramic Powder Science II, edited by G. L. Messing, E. R. Fuller, Jr., and H. Hausner (American Ceramic Society, Westerville, Ohio, 1988), 654-662.
FORMING OF TECHNICAL CERAMICS USING POLYSACCHARIDES
Summary
Technical ceramics constitute an enormous international market dominated by electronic applications; typical products include insulators, integrated-circuit packages, capacitors and magnets. A typical operation in manufacturing such products is mixing the ceramic powder with a liquid organic substance (e.g. polyethylene lubricant, organic solvents) to obtain a mass with the consistency of modelling clay, which is formed into a three-dimensional shape and then dried and fired. Pyrolysis of the organic additives causes problems before the final firing: the product must first be heated slowly (for about a week) at 200°C to avoid cracking and the formation of gas bubbles; this heating is expensive and generates toxic fumes; and the product becomes contaminated with microcrystalline carbon particles. To avoid cracking, distortion and carbon contamination in sintered products while eliminating organic additives, the powder should be processed as an aqueous slurry to which water-soluble binding additives are added. Our research shows that various dextrins and maltodextrins are suitable for this purpose, since in aqueous solution they naturally sorb onto the surfaces of metal-oxide particles. A small addition (<5 wt%) considerably facilitates the forming process and ensures easy pyrolysis with minimal contamination by carbon particles. Moreover, once the water has evaporated, these polysaccharides strongly bind the oxide grains together, yielding strong, crack-free ceramic bodies prior to firing. This paper reviews the fundamental factors governing the rheology of aqueous suspensions of ceramic powders in terms of the chemical nature of interparticle interactions, and presents results of sedimentation, filtration, extrusion and surface chemical analysis studies that illustrate the practical potential of maltodextrins and dextrins as rheological modifiers in the manufacture of ceramic products.
From Dating to Mating and Relating: Predictors of Initial and Long-Term Outcomes of Speed-Dating in a Community Sample
JENS B. ASENDORPF\(^1*\), LARS PENKE\(^2\) and MITJA D. BACK\(^3\)
\(^1\)Department of Psychology, Humboldt University, Berlin, Germany
\(^2\)Department of Psychology and Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, UK
\(^3\)Department of Psychology, Johannes Gutenberg-University Mainz, Germany
Abstract: We studied initial and long-term outcomes of speed-dating over a period of 1 year in a community sample involving 382 participants aged 18–54 years. They were followed from their initial choices of dating partners up to later mating (sexual intercourse) and relating (romantic relationship). Using Social Relations Model analyses, we examined evolutionarily informed hypotheses on both individual and dyadic effects of participants’ physical characteristics, personality, education and income on their dating, mating and relating. Both men and women based their choices mainly on the dating partners’ physical attractiveness, and women additionally on men’s sociosexuality, openness to experience, shyness, education and income. Choosiness increased with age in men, decreased with age in women and was positively related to popularity among the other sex, but mainly for men. Partner similarity had only weak effects on dating success. The chance for mating with a speed-dating partner was 6%, and was increased by men’s short-term mating interest; the chance for relating was 4%, and was increased by women’s long-term mating interest. Copyright © 2010 John Wiley & Sons, Ltd.
Key words: evolutionary psychology; sexuality; social and personal relationships
**INTRODUCTION**
Hundreds of empirical studies have been devoted to sexual and romantic attraction, but most were methodologically limited in that they were based on self-report of preferences for attributes of hypothetical partners, dyadic interactions between undergraduates in the laboratory, indirect inferences on preferences from traits of existing couples or self-presentations in and responses to lonely hearts advertisements.
In recent years, researchers have begun to adopt a new dating research design: In speed-dating, multiple men meet multiple women of similar age for brief encounters one after the other. This design allows researchers to separate actor effects (how do I behave towards others in general?) from partner effects (which behaviour do I evoke in others in general?) and relationship effects (is my behaviour towards a specific partner different from what is expected from my actor effect and the specific partner’s effect?), the dyadic gist of the interaction (Kenny, 1994; Kenny, Kashy, & Cook, 2006: chap. 8). In traditional studies of dyadic interactions, where one participant is interacting with only one dating partner, these three different effects are inextricably confounded. In a speed-dating design they can be separated, and actor and partner effects can be estimated quite reliably because behaviour is averaged across many dyads. Also, speed-daters get access to a dating partner’s address only in the case of matching (reciprocated choices, i.e. both partners choose each other for further contact), and thus the frequency of matching is a clearly interpretable measure of immediate dating success that reflects the mutual interest of both dating partners.
Although a few studies using speed-dating data have recently been published (e.g. Eastwick & Finkel, 2008; Fishman, Iyengar, Kamenica, & Simonsen, 2006; Kurzban & Weeden, 2005, 2007; Luo & Zhang, 2009; Place, Todd, Penke, & Asendorpf, 2009, in press; Todd, Penke, Fasolo, & Lenton, 2007), only one study has followed speed-dating participants over some time after the event to study the outcomes of speed-dating (Eastwick & Finkel, 2008). However, this study included only young students (mean age 20 years), as in most dating studies, and the time range was limited (only 1 month). The present study is the first one that followed a large community sample of speed-daters over a full year after the event. We used these data in order to study the impact of age and personality in a broad sense (including physical traits, education and income) on the participants’ dating preferences and their short- and long-term dating success. Our analyses were based on evolutionarily informed hypotheses, guided particularly by the general assumption that men’s and women’s preferences were based on sex-typical mating strategies; therefore, we ran most analyses separately for men and women.
**STRUCTURE OF THE HYPOTHESES**
Our research design made it possible to distinguish popularity (the probability of being chosen by the opposite
sex) from *choosiness* (the tendency to choose few versus many dating partners for further interaction), and to study *dyadic effects* (the reciprocity of choices within dyads as well as effects of similarities and interactions of men’s and women’s attributes on the frequency of reciprocated choices). Accordingly, our first three sets of hypotheses concern (1) what makes a dating partner popular (*popularity hypotheses*); (2) what makes oneself more or less selective in one’s choices (*choosiness hypotheses*); and (3) to what extent the immediate choices are reciprocated by the dating partners, and whether reciprocated choices depend on similarities and interactions of men’s and women’s attributes (*dyadic hypotheses*). In addition, before the speed-dating events we assessed the participants’ interest in finding a partner for a short-term affair versus a long-term committed relationship in order to study the impact of short- versus long-term interest on the tendency to engage in mating (sexual intercourse) versus relating (establishing a serious romantic relationship) during the year following the speed-dating event (*short- versus long-term interest hypotheses*).
**POPULARITY HYPOTHESES**
From an evolutionary perspective, what makes an (opposite sex) dating partner popular can be generally desirable attributes such as health and good overall condition, but it also depends on (a) one’s sex, (b) whether one pursues short-term versus long-term mating tactics, and (c) environmental conditions related to survival and need for biparental investment in offspring (Buss & Schmitt, 1993; Gangestad & Simpson, 2000; Penke, Todd, Lenton, & Fasolo, 2007).
Concerning generally desirable attributes that can be judged in the brief encounters of speed-dating and that predict popularity (probability of being chosen as a dating partner by the opposite sex), facial averageness and symmetry are probably the most prominent cues to health and overall condition in both men and women (Rhodes, 2006). Because these cues strongly influence the judgment of facial attractiveness (Rhodes, 2006), facial attractiveness is expected to predict the popularity of both men and women. Indeed, observer-rated facial attractiveness emerged in virtually all dating studies based on real interactions as a powerful, and often the most powerful, predictor of popularity (Feingold, 1990; Kurzban & Weeden, 2005; Luo & Zhang, 2009). Less clear is the evidence for vocal attractiveness (attractiveness of one’s voice, independent of what one says) although a few studies suggest that the human voice also contains cues to health and is used as a cue for attraction (Feinberg, 2008; Hughes, Dispenza, & Gallup, 2004).
Concerning sex-typical attributes, women are expected to prefer men that are able to provide more resources for future children, implying that women in Western cultures prefer men of high education, high income and high openness to experience as a cue to socioeconomic status and intelligence, as well as high conscientiousness as an indicator of achievement motivation and occupational perseverance. Although women state such preferences in questionnaires, the evidence from dating studies involving real interactions is mixed, including speed-dating studies (Eastwick & Finkel, 2008; Kurzban & Weeden, 2005). Also, taller men have higher reproductive success than shorter men (Pawlowski, Dunbar, & Lipowicz, 2000; Mueller & Mazur, 2001) and are especially unlikely to remain childless (Nettle, 2002a), indicating that women prefer height in long-term partners. This might be because male height relates to health and resource provision ability (Magnusson, Rasmussen, & Gyllensten, 2006; Mascie-Taylor & Lasker, 2005; Silventoinen, Lahelma, & Rähkänen, 1999; Szklarska, Koziel, Bielecki, & Malina, 2007). Effects of height on mating success are much less clear in women (Nettle, 2002b; Pollet & Nettle, 2008), but a physical trait clearly preferred by men (particularly in the short-term mating context, Swami, Miller, Furnham, Penke, & Tovée, 2008) is lower body mass, which, unless extremely low, is an indicator of general health and thus ultimately fecundity (Swami & Furnham, 2007; Yilmaz, Kilic, Kanat-Pektas, Gulerman, & Mollamahmutolu, 2009).
Concerning environment-contingent attributes, it has been suggested that in addition to the health-fecundity effect, higher body mass is preferred in environments providing low resources, and lower body mass in resource-rich environments such as those usually found in current Western cultures, where it signals better health and fitness (Swami & Furnham, 2007; Swami & Tovée, 2005). Together, this suggests a preference of both men and women in current Western cultures for slimmer partners, with a more marked preference among men, which has been largely confirmed by the literature (e.g. Kurzban & Weeden, 2005; Thornhill & Grammer, 1999).
Concerning personality dimensions, we expected that the trait of shyness, which cuts across the dimensions extraversion and neuroticism, is negatively related to popularity judgments after brief interactions because shyness hinders social interaction with strangers (Asendorpf, 1989) and the establishment of new relationships with peers (Asendorpf & Wilpers, 1998).
Another domain of attributes that both men and women prefer particularly in long-term partners is warmth and trustworthiness (Penke et al., 2007), behavioural tendencies that are related to the personality dimension of agreeableness. However, when first meeting another person, agreeableness is relatively difficult to detect (Connolly, Kavanagh, & Viswesvaran, 2007; John & Robins, 1993; Kenny & West, 2008). This should be particularly true for romantic relationships: How warm and trustworthy someone is perceived by his or her romantic partner depends on the attachment system that develops between two persons within a relationship, a process that takes at least a year (Fraley & Shaver, 2000). If this logic is correct, agreeableness should not affect popularity judgments after brief interactions typical for speed-dating studies.
In sum:
**H1 Popularity hypotheses**
**H1a General attributes:** The participants are expected to prefer particularly facially (and perhaps also vocally) attractive dating partners, and also partners of low body mass and low in shyness. Agreeableness might not be generally preferred.
**H1b Sex-typical attributes:** In addition, women are expected to prefer men who are tall, open to experience, conscientious, well educated and have high income.
**CHOOSINESS HYPOTHESES**
Popularity is the prize people seek on the mating market, and therefore it is expected that individuals who possess attractive attributes are also choosier, given that they have more options. Or put differently, individuals with less attractive attributes should try to increase their number of matches while individuals with more attractive attributes should try to narrow down their number of matches by their active choice behaviour (Kenrick, Sadalla, Groth, & Trost, 1990; Lenton, Penke, Todd, & Fasolo, in press; Penke et al., 2007).
Interestingly, this pattern implies that the individual reciprocity of dating choices should be negative: Individuals who are frequently chosen (i.e. are popular) should not choose others very often (i.e. are choosy). In other words, because the same attributes that make people popular should also make them choosier, popularity should be positively related to choosiness. Indeed, self-rated popularity is often positively related to choosiness (Todd et al., 2007), and observer-rated popularity has also been found to be positively related to choosiness, although not always statistically significantly (Eastwick, Finkel, Mochon, & Ariely, 2007; Luo & Zhang, 2009).
The correlation between being popular and being selective is also important for sex differences in choosiness. In most evolutionary accounts, women are expected to be more selective than men (Darwin, 1871) because they invest more in their children (Trivers, 1972). However, more differentiated views have pointed out that this general tendency will be moderated by the facts that women generally prefer somewhat older men and that, the older men are, the more they prefer younger women (Kenrick & Keefe, 1992). Thus, women’s popularity is expected to decrease with age, and a popularity–choosiness correlation would then imply that their choosiness also decreases with age. Most studies of attraction miss this sex-by-age interaction because they focus exclusively on younger adults or only on older adults. Our age-heterogeneous sample made it possible to study the expected changes in men’s and women’s choosiness. In sum:
**H2 Choosiness hypotheses**
**H2a Correlation between attractive attributes and choosiness:** Participants who have more attractive attributes are expected to be more selective in their choice behaviour.
**H2b Correlation between popularity and choosiness:** The more often men and women are chosen, the more selective they should be in their choices of dating partners.
**H2c Age by sex interaction:** With increasing age, men’s choosiness is expected to increase, while women’s choosiness should decrease.
**DYADIC HYPOTHESES**
Whereas the hypotheses so far answer questions at the level of individuals (actor and partner effects), speed-dating offers the opportunity to study, in addition, effects at the level of dyads (relationship effects). A first question concerns the dyadic reciprocity of choices: To what extent are men’s specific relational choices reciprocated by women’s specific relational choices? Dyadic reciprocity requires interaction (Kenny, 1994), and because speed-dating encounters last only a short time (3 minutes in the present study), not much reciprocity is expected to emerge. Indeed, earlier speed-dating studies have found a positive but low dyadic reciprocity for the choices at the end of the event, whereas post-event dyadic reciprocities (when participants had received feedback about the choices of their dating partners) were somewhat higher (Eastwick et al., 2007; Luo & Zhang, 2009).
The folk wisdom that similarity attracts was confirmed mainly in studies of hypothetical partners and in studies of established relationships, particularly married couples (e.g. for facial attractiveness, height, education, IQ and openness to experience; see overviews in Klohnen & Luo, 2003; Watson, Klohnen, Casillas, Simms, Haig, & Berry, 2004), and was often based on similarity scores that were confounded with individual differences. In speed-dating studies that controlled similarity scores for actor and partner effects (Eastwick & Finkel, 2008; Kurzban & Weeden, 2005; Luo & Zhang, 2009), few effects of similarity on matching were found, and there was no evidence for dissimilarity effects. Therefore, we expected any similarity effects on matching to be positive, particularly for attributes where similarity is usually found in established relationships, such as physical attractiveness, height and education.
In addition to these tests of similarity effects, speed-dating data make it possible to predict relationship effects from statistical interactions between the individual characteristics of men and women, e.g. do sociosexual men match particularly often with facially attractive women (more than expected by the additive effects of men’s sociosexuality and women’s facial attractiveness)? Because of the huge number of possible interactions ($k^2$ interactions for $k$ individual characteristics), $\alpha$ inflation was a serious problem in this case, and therefore we did not explore such dyadic effects. In sum:
**H3 Dyadic hypotheses**
**H3a Dyadic reciprocity:** Choices are expected to show a low positive reciprocity at the dyadic level.
**H3b Similarity:** Matching of dating partners is expected to be more likely if they have similar individual attributes.
**SHORT- VERSUS LONG-TERM INTEREST HYPOTHESES**
From an evolutionary perspective, there are good reasons for both men and women to pursue either long-term or short-term tactics, depending on context (Buss & Schmitt, 1993; Gangestad & Simpson, 2000). Speed-dating is usually meant to find a long-term partner, although some participants may have different intentions. Therefore, we expected that speed-dating participants report relatively more interest in a long-term partner than in a short-term partner.
However, while long-term mating is usually the preferred tactic for single women (at least after a period of experimental exploration during adolescence, Furman & Shaffer, 2003), this is less true for men, who generally have a stronger desire to pursue short-term mating tactics (Buss & Schmitt, 1993) which they apparently try to pursue (i.e. finding sexual affairs instead of or in addition to long-term tactics) as long as they feel they can be successful with them (i.e. are not completely rejected all the time when trying to have a sexual affair) (Penke & Denissen, 2008). Thus, post-adolescent single men should show greater short-term mating interest than women, but since some men will have experienced success with their short-term mating attempts and some won’t, men should also be more variable in their short-term interests than women. Also, we expected a similarity effect at the dyadic level such that matching is more likely for men and women with similar short- or long-term mating interest.
Because we followed the speed-dating participants over the full year after the speed-dating event and most participants with matches had more than one match, we could use differences between the matches of a participant to predict with whom the participant ended up mating (having sex) or relating (developing a romantic relationship). These are strong tests because they test within-participant effects, where the influence of the participant on mating or relating is held constant. Because short-term interest predicts mating rather than relating and should vary more among men, we expected that women’s mating is predicted from the short-term strategy of their male matches. Conversely, because long-term interest predicts relating rather than mating and women are generally more selective with regard to long-term partners and thus more influential than men in establishing a romantic relationship (Todd et al., 2007), we expected that men’s relating is predicted from the long-term interest of their female matches. In sum:
**H4 Short- versus long-term interest hypotheses**
**H4a Overall tendency:** Higher long-term interest than short-term interest is expected for both men and women.
**H4b Sex difference:** Whereas no sex difference is expected for long-term interest, short-term interest is expected to show a higher mean and variance in men than in women.
**H4c Similarity:** Matching of dating partners is expected to be more likely if they have similar short- or long-term interest.
**H4d Mating versus relating:** For participants with matches, we expect that women’s mating is predicted by the short-term mating preferences of their male matches, whereas men’s relating is predicted by the long-term preferences of their female matches.
**METHOD**
**Participants**
German singles were invited through email lists, links on various German webpages and advertisements in various media to participate in free speed-dating sessions. They were informed that participation included videotaping of the interactions for exclusively scientific purposes and required answering personal questions before and on the day of testing. A total of 703 German heterosexual adult singles (292 men, 411 women) completed the initial online questionnaire about demographic information, personality and relationship/sexual history.
From this sample, participants were invited for a speed-dating session with similar numbers of men and women of about the same age. A total of 17 sessions were scheduled within 5 months, including 190 men and 192 women aged 18–54 years \((M = 32.8, SD = 7.4)\); 12 sessions included only women not using hormonal contraceptives, and five sessions only women using such contraceptives in order to avoid within-session effects of women’s contraceptive usage (Gangestad, Thornhill, & Garver-Apgar, 2005). On average, men were 1.6 years older than women, \(t(380) = 2.16, p < .05, d = .22\). The sessions included 17–27 participants \((M = 22.7, SD = 2.4)\); mean age within a session varied from 24.0 to 45.0 years, with a mean within-session age range of +/- 4.8 years, and the mean age difference between men and women within a session tended to increase with increasing mean age, \(r = .19, ns\). Thus, the age composition of the sessions reflected the expected age preferences. In terms of education, the sample was biased towards higher educational level with little variance in secondary education (92.2% had finished high school with *Abitur* or *Fachabitur*) but substantial variance in university degrees (41% reported one). All were currently single, but 14.9% had been married before, and 16.5% had at least one child; 6.3% were sexually inexperienced. Prior speed-dating experience was indicated by 12.3%. We would like to emphasize that these were all real singles whose sole motivation to participate in the study was the chance to find a real-life romantic or sexual partner. In this, the current study differs from other lab-based speed-dating studies, where participants were students that received course credit in addition to the opportunity to find a partner.
**Speed-dating procedure**
All sessions took place on a Saturday or Sunday from 3 pm to approximately 7 pm. Men and women entered the speed-dating location in a large building of Humboldt University from different streets and were guided to separate waiting rooms, minimizing the chance that they met before the speed-dating interactions. Upon arrival, participants received a tag with a unique identity number, a scorecard and a pre-event questionnaire that they answered while in the waiting room. Pre-event testing included brief video and audio samples and the measurement of height and weight; it took place in separate rooms for males and females and was conducted by a same-sex experimenter. The actual ‘dates’ took place in booths equipped with two opposing chairs. Women were asked to take a seat in their booths before the men entered the scene. They sat with their backs to the booth entrance such that they were hardly visible from outside. Women stayed in their booth until they had interacted with all male participants. This ensured that each man saw each woman for the first time when he entered her booth. As in conventional speed-dating events, men rotated through the booths until they had dated every female participant. Each interaction period lasted 3 minutes, as indicated by a bell rung at the end of the interaction. After the men had left the booths, but before they entered the next, both men and women recorded their choices of the current ‘date’ on a scorecard. When everybody was finished, the experimenter rang the bell again to ensure that all men entered their next booth simultaneously.
At the end of all interactions, the participants had a chance to revise their choices on the basis of their information on all potential mates. After all speed dating interactions were completed, the experimenters collected the scorecards, and males and females were separated again for a post-event assessment. Thereafter, they were informed about the follow-up studies, were asked for permission to analyse the video and audio samples for scientific purposes (all agreed), thanked, and released. Within the next 24 hours, the participants’ choices were processed, matching choices were calculated, and those who had indicated mutual interest instantly received each other’s contact details via email.
**Follow-ups**
Six weeks and 12 months after a speed-dating session, all participants were invited by email to answer a brief online questionnaire about their sexual and relationship history. For participation in the 12-month follow-up, they received a voucher for a cinema ticket worth 5 Euros. Of the 382 participants, 94.8% participated in the follow-up after 6 weeks and 85.9% in the follow-up after 12 months.
**Measures**
**Pre-event online questionnaire**
The online questionnaire assessed demographic details, health status, stable personality traits and relationship and sexual history, including questions about women’s contraception usage and menstrual cycle. The current analyses refer to the following variables: age (years), education (a scale from 1 = no school grade to 9 = PhD), monthly income, sociosexuality as measured by the nine-item revised Sociosexual Orientation Inventory (men $\alpha = .84$, women $\alpha = .83$) (SOI-R; Penke & Asendorpf, 2008; Penke, in press), the dimensions of the Five Factor Model (FFM) of personality: neuroticism (men $\alpha = .86$, women $\alpha = .83$), extraversion (men $\alpha = .79$, women $\alpha = .74$), openness to experience (men $\alpha = .71$, women $\alpha = .65$), agreeableness (men $\alpha = .75$, women $\alpha = .74$) and conscientiousness (men $\alpha = .83$, women $\alpha = .81$) (German NEO-FFI; Borkenau & Ostendorf, 1993; 12 items per dimension), shyness as measured by a five-item shyness scale (men $\alpha = .85$, women $\alpha = .81$) (Asendorpf & Wilpers, 1998), and one-item ratings on seven-point scales (1 = currently not searching, 7 = currently strongly searching) of the extent to which the participants were currently seeking a long-term mating partner (“To what extent are you currently looking for a stable partner for a long-term relationship?”) and a short-term mating partner (“To what extent are you currently looking for somebody for a short sexual affair or a one-night stand?”) (Buss & Schmitt, 1993).
**Pre-event assessment**
During the pre-event assessment, participants were recorded with a camcorder while standing upright in front of a neutral white background and under standardized lighting conditions in order to allow the extraction of various standardized facial and whole-body photographs from the videotapes. In addition, standardized vocal samples (counting aloud from 1 to 10) were recorded, and body height (m) and weight (kg, dressed but without shoes) were measured, from which the body mass index (BMI, kg/m$^2$) was calculated.
**Immediate dating outcome**
Directly after each interaction with a dating partner, each participant recorded on a scorecard whether they wanted to see this person again (yes/no). The scorecards contained the identity numbers of the dates in the exact order of encounter, to avoid assignment errors of the ratings. An additional column allowed participants to change their rating at the end of all dating interactions; this final choice served as the dating outcome variable at the time of the event.
**Follow-up 1**
During the first online follow-up 6 weeks after the speed-dating event, participants were asked about any contacts with speed-dating partners. This was guided by a list of all participants with whom they had matches. For each participant with whom contact was indicated, they were asked (1) how often it came to (a) written (email, SMS etc.), (b) phone or (c) face-to-face contact, (2) if they thought a romantic relationship was about to develop and (3) whether sexual intercourse had occurred. Because the reported frequencies were low, we reduced all outcome variables to dichotomous variables (contact yes/no).
**Follow-up 2**
The second online follow-up 1 year after the speed-dating event repeated the questions (2) and (3) of the follow-up 1 assessment. The current analyses refer to the two dichotomous variables that can be directly compared to the earlier follow-up, development of a relationship and occurrence of sexual intercourse.
**Facial attractiveness ratings**
Video capturing software was used to choose the one frame with the most frontal and neutral recording of each participant’s face and to convert it to a digital picture. Size was standardized to identical interpupillary distance. Because attractiveness impressions may vary with age of the perceiver, younger participants (those from the seven sessions with the lowest mean age, age $M = 25.8$, $SD = 2.7$) were judged by 15 heterosexual opposite-sex undergraduates who received course credit (age $M = 21.7$, $SD = 3.8$), and the remaining older participants (age $M = 37.6$, $SD = 5.5$) by 15 heterosexual opposite-sex older raters from the general population (age $M = 45.4$, $SD = 9.4$). Thus, a total of 60 raters were
involved. All raters judged the attractiveness of each picture on a scale from 1 (not attractive at all) to 7 (very attractive). Interrater reliabilities were good for both men rating female participants (younger: $\alpha = .89$, older: $\alpha = .91$) and women rating male participants (younger: $\alpha = .89$, older: $\alpha = .88$) such that the ratings could be aggregated across the raters.
**Vocal attractiveness ratings**
The standard vocal samples were judged for attractiveness on the same scale that was used for facial attractiveness. Male samples were rated by 28 heterosexual female undergraduates ($\alpha = .92$), female samples were rated by 22 heterosexual male undergraduates ($\alpha = .90$); all raters received course credit. Because interrater agreement was good, the ratings were aggregated across the raters.
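As an aside for readers who want to reproduce this kind of reliability check: Cronbach’s α with raters treated as items, followed by aggregation across raters, takes only a few lines. The sketch below is illustrative only; the data are simulated, not ours.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a targets x raters matrix, raters as 'items'."""
    k = ratings.shape[1]                           # number of raters
    rater_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of sum scores
    return k / (k - 1.0) * (1.0 - rater_vars / total_var)

# Hypothetical example: 15 raters judging 96 targets on a 1-7 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(96, 15)).astype(float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
aggregated = ratings.mean(axis=1)  # aggregate rating per target
```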
**RESULTS**
**Overview**
First, we explain our strategy for data analysis, which is complex because of (1) the mutual dependency of the data within the speed-dating sessions and because of the systematic age differences between the sessions, and (2) the mutual dependency of the long-term outcome data for participants with multiple matches. We solve problem (1) by applying the Social Relations Model (SRM; Kenny, 1994; Kenny & La Voie, 1984) to each session, and by analysing the resulting SRM parameters with multi-level analyses with individuals at level 1 and sessions at level 2. We solve problem (2) by multi-level analyses with individuals’ matches at level 1 and individuals with matches at level 2. After presenting the overall outcome of the SRM analyses in terms of variance partitioning and reciprocity correlations, and the overall immediate and long-term outcomes of speed-dating, we present the results in the order of the hypotheses.
**Analysis strategy**
Speed-dating offers the possibility to decompose each observed score $x_{ij}$ during speed-dating for a target individual $i$ and an interaction partner $j$ (short- and long-term interest in $j$, final choice of $j$, match of the choices of $i$ and $j$) into three components according to a half-block design of the Social Relations Model (Kenny et al., 2006, chap. 8): The *actor effect* of individual $i$ (mean of $x_{ij}$ across all $j$; e.g. average short-term interest of $i$ across all interactions), the *partner effect* of the interaction partner $j$ (mean of $x_{ij}$ across all $i$; e.g. average short-term interest evoked in interaction partners across all interactions of $j$) and the *relationship effect* of $i$ with $j$ ($x_{ij}$ minus the actor effect of $i$ minus the partner effect of $j$; e.g. the degree to which $i$ reported short-term interest in $j$ more or less than expected by the general short-term interest of $i$ and the general tendency of $j$ to evoke short-term interest). Our design of approximately 22 participants within each of 17 groups provides estimates of SRM effects with sufficient statistical power (see Kenny et al., 2006, Table 8.8).
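To make the decomposition concrete, here is a minimal Python sketch for one hypothetical session. It follows the simplified definitions given above (the exact formulas in Kenny et al., 2006, chap. 8, include further corrections), and all names and values are illustrative, not taken from our analysis scripts.

```python
import numpy as np

# Hypothetical half-block data: rows are men (actors), columns are women
# (partners); x[i, j] = 1 if man i chose woman j, else 0.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=(11, 11)).astype(float)

grand_mean = x.mean()
actor = x.mean(axis=1)    # mean of x[i, j] across partners j (actor effect)
partner = x.mean(axis=0)  # mean of x[i, j] across actors i (partner effect)

# Relationship effects: what remains of x[i, j] after the two individual
# tendencies are removed; adding the grand mean back avoids subtracting
# the overall level twice.
relationship = x - actor[:, None] - partner[None, :] + grand_mean
```

Dyadic reciprocity would then correlate these relationship effects with the corresponding effects computed from the women’s choices of the men.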
Actor and partner effects are scores at the *individual level*, whereas relationship effects are scores at the *dyadic level* and include measurement error unless it is controlled by repeated assessments. Based on this decomposition, two kinds of reciprocity can be computed: *individual reciprocity* (the correlation between the actor and partner effects of the same individual; e.g. is a person who chooses many dating partners (low choosiness) also often chosen by them (high popularity)?) and *dyadic reciprocity* (the correlation of the relationship effects of $i$ with $j$ with the relationship effects of $j$ with $i$; e.g. if $i$ specifically chooses $j$, does $j$ also specifically choose $i$?).
The actor and partner effects characterize individuals and can be predicted by other individual attributes (including physical attractiveness, education, income, personality). The relationship effects characterize dyads and can be predicted by other dyadic attributes such as the similarity of the members of a dyad in an individual characteristic, or by statistical interactions between individual attributes of the two dyad members.
Most studies using the SRM approach assume that the interacting groups are random samples from the same population (e.g. college students), and therefore control for group differences by centering actor and partner effects within each group; relationship effects are centred by definition anyway. In the current study, however, the groups were speed-dating sessions that strongly varied in the mean age of the participants of a session, and also somewhat in the number of participants of a session (session size). Therefore, we used uncentred actor and partner effects and analysed cross-session differences in these uncentred effects within a multi-level approach, using HLM 6.0.3 (Raudenbush, Bryk, & Congdon, 2005). The SRM actor and partner effects were predicted by individual attributes (level 1), and the regression coefficients at level 1 were predicted by mean age in session, session size and women’s contraceptive usage (level 2). Because session size and contraceptive usage did not show any significant effects, we report here only analyses with mean age in session as the level 2 predictor.\(^1\)
It was important to include only few predictors in the multi-level models because the degrees of freedom for the statistical tests were limited by the number of level 2 units (17 groups).\(^2\) Therefore, we first explored significant effects for single predictors at level 1, with mean age in session as
---
\(^1\)When the number of males and the number of females in a session were treated as separate level 2 predictors no significant level 2 effects were revealed either. Age differences within a session (age centered within session as a level 1 predictor) did not show any significant effect, which can be readily attributed to the low age variance within sessions. We also ran all analyses with age grand-centred at level 1 and no level 2 predictor (this age variable confounds effects of age within sessions and age between sessions). The results were highly similar to those found for age as a level 2 predictor. We prefer to report the results for age as a level 2 predictor because these results capture most of the age effects and can be more clearly interpreted.
\(^2\)Compared to typical applications of multi-level analyses in social psychology, the number of sessions (level 2 units) was rather small, but the number of individuals within sessions (level 1 units) was rather large, providing more reliable estimates of regression coefficients within level 2 units. On balance, application of multi-level analyses seems appropriate (Richard Gonzalez, personal communication, October 2008). Nevertheless, we also analysed the data ignoring the nested data structure by ordinary multiple regression analyses based on all 382 individuals or on all males (females) in the sex-wise analyses, taking advantage of stepwise regression techniques. The results were quite consistent with those reported here. We prefer to report the results for the multi-level analyses because they are conceptually superior.
the level 2 predictor. Level 1 predictors that showed a significant main effect or a significant cross-level interaction with age were then pairwise entered into new analyses until a maximum set of predictors at level 1 remained where each predictor showed a significant unique contribution in terms of a main effect or a cross-level interaction with age. This analysis strategy minimized problems of unstable results due to insufficient degrees of freedom or suppressor effects.
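Although our analyses were run in HLM 6.0.3, the model structure just described (one level 1 predictor, sessions as level 2 units, mean session age as a level 2 predictor with a cross-level interaction) can be sketched in other software. The following Python example using statsmodels is a hedged illustration of that structure, not a reproduction of our HLM analyses; the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, with an SRM partner effect
# ("popularity"), a level 1 predictor standardized within sex
# ("facial_attr"), the session identifier and the session's mean age.
df = pd.read_csv("srm_effects.csv")  # hypothetical file

# Random intercept per session; the facial_attr * session_age term is the
# cross-level interaction (does the level 1 slope vary with session age?).
model = smf.mixedlm("popularity ~ facial_attr * session_age",
                    data=df, groups=df["session"])
print(model.fit().summary())
```

HLM additionally estimates a random slope for each level 1 coefficient; in statsmodels this could be approximated with the `re_formula` argument (e.g. `re_formula="~facial_attr"`).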
The individual reports obtained during the two follow-ups refer only to properties of matching dyads, with matched opposite-sex participants nested within individuals. Therefore, the long-term outcome data were analysed only for participants with matches by a multi-level analysis with data on the matches at level 1 and participants at level 2, using age as a level 2 variable. We ignored the nesting of participants within sessions for these analyses because the resulting 3-level analyses would require the estimation of too many parameters. Because all outcomes were dichotomous, logistic multi-level analyses were used (HLM Bernoulli option with robust standard errors).\(^3\)
Hypotheses were tested by one-tailed statistical tests; all other tests were two-tailed.
**SRM analyses of dating**
The SRM effects were computed according to the formulas provided by Kenny et al. (2006: chap. 8), but using uncentred actor and partner effects (see analysis strategy). The variance components and reciprocity correlations resulting from these SRM analyses are shown in Table 1. Relationship effects could not be separated from measurement error because multiple assessments were not available.
The relative amount of actor variance tended to be higher for males than for females. Thus, differences in choosiness and achieved matches were more pronounced in men than in women, which may be attributed to their higher variance in short-term mating interest (see Hypothesis 4b). The relationship plus error variance was always the largest share, as in nearly all SRM studies, which can be attributed to specifically relational dating preferences as well as to the larger measurement error of the disaggregated dyadic effects as compared to the aggregated individual effects. The individual reciprocities were negative for both men and women, as expected in Hypothesis 2b. That is, there was a tendency for the more popular participants to be more selective; however, this tendency was significant only for men. Fully confirmed was Hypothesis 3a, which expected a low positive dyadic reciprocity for choices. Thus, the more a participant was particularly attracted to a dating partner, the more the dating partner was also attracted to the participant (controlling for the participant’s actor effect and the dating partner’s partner effect). The reciprocity correlation was low, but highly significant due to the large number of dyads (\(N = 2160\)). The matches showed perfect reciprocities because the actor and partner effects of a participant are identical for matches.
**Outcomes**
The 382 participants were chosen on average by 3.92 speed-dating partners (range 0–13) and achieved on average 1.28 matches (reciprocated choices) (range 0–8); 116 men and 116 women (60.7%) achieved at least one match. Another way of looking at these immediate dating outcomes is to compute the individual probability of being chosen by one of the dating partners in one’s session, and the probability of achieving a match with one of these partners. These probabilities were on average 34.7% and 11.5%; for participants with matches, they were somewhat higher (see Table 2).
The long-term outcome of speed-dating was assessed in two follow-up assessments (6 weeks after the session, T1, and 1 year after the session, T2). Of the 232 participants with matches, 221 (95.3%) were reassessed at T1 and 205 (88.4%) at T2; thus, sample attrition was low. \(t\)-tests comparing the drop-outs with the continuing participants did not reveal any significant difference between these two groups in the individual attributes assessed before the speed-dating, either at T1 or at T2. The speed-dating outcomes also did not show any significant differences, with one exception: the drop-outs at T2 had more matches (24% of their speed-dating partners) than the participants who continued until T2 (18%; \(t(230) = 2.21, p < .05, d = 0.29\)). Thus, the T2 data may slightly underestimate the incidence of romantic relationships and sexual intercourse.
Data for the various outcomes after the speed-dating session at T1 and T2 are presented in Table 2. They are presented in terms of the probability of occurrence, both for all 382 speed-dating participants and for the 232 participants who achieved at least one match. For the latter participants, the occurrences are reported both for each match and overall (at T1, the participants reported on average 2.10 matches; at T2, they reported on average 2.05 matches). For example, the probability of meeting a speed-dating participant face-to-face after the day of the speed-dating was 38.6% for each match, thus \(38.6\% \times 2.10 = 81.1\%\) overall, and because only 60.7% of the speed-dating participants achieved a match, this probability reduced to an average of \(81.1\% \times 0.607 = 49.2\%\) for each speed-dating participant.
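The chained arithmetic in this example can be checked directly from the Table 2 entries; the snippet below merely restates it (values rounded as in the text).

```python
# Face-to-face contact after the speed-dating day (values from Table 2).
p_per_match = 0.386    # probability per match
mean_matches = 2.10    # mean number of matches among matched participants
p_has_match = 0.607    # share of all participants with at least one match

overall_matched = p_per_match * mean_matches   # 0.811 -> 81.1% overall
overall_all = overall_matched * p_has_match    # 0.492 -> 49.2% per dater
print(f"{overall_matched:.1%}  {overall_all:.1%}")
```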
As Table 2 indicates, the probabilities for the various kinds of contact strongly decreased with increasing intensity of contact, from 87.2% for any contact to 49.2% for face-to-face contact, 6.6% for a developing romantic relationship 6 weeks after speed-dating, 5.8% for sexual intercourse at any time in the year following speed-dating, and 4.4% for reports of romantic relationships 1 year after speed-dating (which is a somewhat stronger requirement than the earlier judgement of a developing relationship).
Within-dyad agreement in the outcomes could be evaluated for the matching dyads by comparing men’s and women’s reports. As Table 2 indicates, agreement was high for face-to-face contact, contact by phone and sexual intercourse; these figures show that the participants reliably
---
\(^3\)The regression coefficients in these analyses refer to log-odds ratios \(\log OR\) and changes in log-odds ratios \(\log OR_{\text{change}}\); for ease of interpretation, they were transformed into probabilities \(p\) and changes in probabilities \(p_{\text{change}}\) using the inverse-logit transformations \(p = 1/(1 + e^{-\log OR})\) and \(p_{\text{change}} = 1/(1 + e^{-(\log OR + \log OR_{\text{change}})}) - p\) (see Raudenbush et al., 2005).
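For readers who prefer code to formulas, a minimal sketch of this inverse-logit transformation follows (the function names are ours):

```python
import math

def to_prob(log_odds: float) -> float:
    # Inverse logit: converts a log-odds value to a probability.
    return 1.0 / (1.0 + math.exp(-log_odds))

def prob_change(log_or: float, log_or_change: float) -> float:
    # Change in probability implied by a change in the log-odds ratio.
    return to_prob(log_or + log_or_change) - to_prob(log_or)
```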
Table 1. Variance partitioning and reciprocity correlations for choices and matches

| Parameter | Choices: Men | Choices: Women | Matches: Men | Matches: Women |
|---|---|---|---|---|
| Actor variance | 13% | 9% | 11% | 7% |
| Partner variance | 17% | 19% | 7% | 11% |
| Relationship + error variance | 70% | 72% | 82% | 82% |
| Individual reciprocity | −.24** | −.08 | 1.00 | 1.00 |
| Dyadic reciprocity | .06** | | | |

Note. N = 2160 dyads in 17 sessions. Dyadic reciprocity is a single dyad-level correlation, not computed separately by sex.
** p < .01.
Table 2. Between-partner agreement and probabilities of the speed-dating outcomes

| Outcome | Agreement κ | With matches: each match (%) | With matches: overall (%) | All daters: overall (%) |
|---|---|---|---|---|
| Being chosen by a dating partner† | — | — | 43.4 | 34.7 |
| Match with a dating partner† | — | — | 18.9 | 11.5 |
| Any contact (T1) | .70 | 68.4 | 143.6 | 87.2 |
| written | .54 | 59.9 | 125.8 | 76.4 |
| phone | .79 | 41.2 | 86.5 | 52.5 |
| face-to-face | .94 | 38.6 | 81.1 | 49.2 |
| Sexual intercourse (T1) | .79 | 3.4 | 7.1 | 4.3 |
| Relationship is developing (T1) | .59 | 5.2 | 10.9 | 6.6 |
| Sexual intercourse (T2) | .88 | 4.7 | 9.6 | 5.8 |
| Relationship had developed (T2) | .55 | 3.5 | 7.2 | 4.4 |

Note. Reported are within-dyad agreements (Cohen’s κ) and estimated probabilities of outcomes for the 232 participants who achieved at least one match and for all 382 participants. Overall percentages for contacts can exceed 100% because they are expected numbers of contacts per participant expressed as percentages.
† Frequency of being chosen or of reciprocated choices divided by the number of one’s dating partners.
answered the follow-up questions. In contrast, the agreement was much lower for contact in written form (e.g. e-mails) and for romantic relationships. It seems that the participants did not remember written contacts very well and that they used somewhat different criteria for calling a relationship romantic.
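Within-dyad agreement of this kind is Cohen’s κ computed between the two partners’ dichotomous reports over the matched dyads; a minimal sketch with hypothetical data:

```python
from sklearn.metrics import cohen_kappa_score

# One entry per matched dyad: did each partner report face-to-face contact?
men_reports = [1, 1, 0, 1, 0, 0, 1, 0]     # hypothetical
women_reports = [1, 1, 0, 0, 0, 0, 1, 0]   # hypothetical
print(f"kappa = {cohen_kappa_score(men_reports, women_reports):.2f}")
```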
Sex and age differences in the occurrence of the various types of contact were evaluated by multi-level analyses, with the matches of a speed-dating participant nested within this participant, treating sex and age as level 2 variables. Because the outcomes were dichotomous, we used logistic regressions (Bernoulli option in HLM with robust standard errors) and estimated probabilities $p$ and changes in probabilities $p_{\text{change}}$ (see section on analysis strategy). None of the sex or sex-by-age effects was significant, which is not surprising because sex differences could arise only through a sex difference in biased reporting. Two of the eight age effects were significant. Overall contact ($p_{\text{change}} = .007$, $p < .05$) and written contact ($p_{\text{change}} = .008$, $p < .05$) increased with age, such that an increase of 1 year of age corresponded to an increase of 0.7% in overall contact and of 0.8% in written contact. It seems that older participants tended to approach their matches in written form before interacting by phone or face-to-face.
**Popularity hypotheses**
Popularity as a dating partner was captured by the frequency of being chosen by one’s dating partners (i.e. the partner effect for choices). On average, male participants were chosen by 3.6 females (32% of their 11.2 dating partners), and female participants were chosen by 4.1 males (37% of their dating partners). Individual differences in popularity were predicted separately for males and females by 13 individual-level variables: physical attractiveness (facial and vocal attractiveness, height and body mass index BMI); education and income; and personality (sociosexuality, shyness and the FFM dimensions). These 13 predictors showed low within-sex correlations ($|r| < .34$), except for medium-sized correlations between some of the personality scales.\(^4\) To facilitate the comparison of the results across the predictors with their heterogeneous scales, all predictors and outcomes were standardized within sex with $M = 0$ and $SD = 1$, such that $\beta = 1$ indicates that a 1 $SD$ increase in the predictor leads to a 1 $SD$ increase in the outcome.
\(^4\)Height and BMI were used either as raw scores or as the absolute deviation from the sex-typical mean in the sample; because the effects for the raw scores tended to be stronger, results for the deviation scores are not reported here.
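The within-sex standardization described above (and the deviation scores mentioned in this footnote) are easy to express in code; the sketch below uses hypothetical column names.

```python
import pandas as pd

df = pd.read_csv("participants.csv")  # hypothetical participant-level file
predictors = ["facial_attr", "vocal_attr", "bmi", "height"]  # illustrative

# Deviation-score variant from the footnote: absolute deviation of the
# raw score from the sex-typical mean.
df["height_dev"] = (df["height"]
                    - df.groupby("sex")["height"].transform("mean")).abs()

# z-standardize each predictor within sex (M = 0, SD = 1).
df[predictors] = df.groupby("sex")[predictors].transform(
    lambda s: (s - s.mean()) / s.std(ddof=1)
)
```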
Table 3. Significant predictors of choices and matches by sex

| Predictor | Choices, actor effect: Men | Choices, actor effect: Women | Choices, partner effect: Men | Choices, partner effect: Women | Matches: Men | Matches: Women |
|---|---|---|---|---|---|---|
| Facial attractiveness | −.17* | −.12 | .49*** | .52*** | .31*** | .25** |
| Vocal attractiveness | −.05 | −.12 | .33*** | .19* | .20* | .03 |
| Body mass index | .11 | .24** | −.13* | −.18* | −.10 | .02 |
| Height | −.08 | −.02 | .17* | .05 | .04 | −.04 |
| Years of education | −.22** | −.02 | .16* | .08 | .02 | −.03 |
| Income | −.13 | .02 | .13* | −.03 | −.02 | .02 |
| Sociosexuality | .03 | .01 | .24** | .10 | .23** | .09 |
| Shyness | .08 | .15** | −.15* | −.08 | .08 | .08 |
| Openness | −.03 | −.04 | .20* | .05 | .14 | .00 |

Note. 190 men, 192 women, 17 sessions. All variables were standardized within sex. Reported are βs from multi-level predictions with the predictor at level 1 and no predictor at level 2. Predictors in boldface were retained in the final set of predictors with significant unique variance.
*p < .05; **p < .01; ***p < .001.
The significant predictors of popularity are presented in Table 3. Age by predictor interactions were tested but failed to reach significance in every case; thus, the predictions were invariant across age, and Table 3 presents the results for multi-level models without a predictor at level 2. As described in the analysis strategy section, each predictor was first tested for significance. Significant effects were subsequently combined in a final set of predictors where each predictor showed a significant unique contribution in terms of a main effect. Because the βs in these multiple regressions did not differ much from the βs of the single predictions, they are not reported here.
As expected by Hypothesis 1a, men and women who were judged (by independent raters) as facially or vocally attractive, or who were slim according to their objectively measured BMI, were chosen more often by their dating partners. The expected negative effect of shyness was also confirmed but reached significance only for men. As expected by Hypothesis 1a, agreeableness had no effect on being chosen by either sex. Hypothesis 1b was also partly confirmed, in that men who were tall, open to experience, well educated, or had high income (all potential indicators of resource providing ability) were chosen more often by their female dating partners. However, contrary to Hypothesis 1b, conscientiousness (an indicator of steady resource striving) had no effect on male popularity. Instead, men’s sociosexuality was attractive to women and showed incremental validity over and above men’s physical attractiveness (see Table 3). Finally, the broad FFM dimensions of extraversion and neuroticism did not significantly predict popularity. Thus, the choices of both men and women were most strongly predicted by their dating partner’s facial attractiveness, females based their choices on more criteria than men did, and personality effects were found only for openness to experience, sociosexuality and shyness.
Choosiness hypotheses
Choosiness was captured by a low frequency of selecting dating partners (i.e. the negative actor effect for choices). As expected, many of the attributes that made individuals attractive were negatively related to the frequency of choices (see Table 3), and thus positively related to choosiness (Hypothesis 2a). Another way of looking at this pattern of results is to correlate the columns in Table 3 for the actor and partner effects separately for men and women; for example, the nine predictions of men’s actor effect are correlated with the nine predictions of men’s partner effect (see the sketch below). These correlations were highly negative (for men, \( r = -.65 \); for women, \( r = -.82 \); because each correlation relied on only nine data points, significance tests were not meaningful here). These high negative correlations suggest that individual characteristics that made participants attractive to the opposite sex (high partner effect) also made them choosy (low actor effect). Consequently, popularity and choosiness were positively related (Hypothesis 2b), as shown by a negative individual reciprocity correlation between the frequency of choices received and choices made (see Table 1). This, however, reached statistical significance only in men.
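This column-correlation check is straightforward to reproduce. A sketch using the men’s βs transcribed from Table 3 (Python/numpy):

```python
import numpy as np

# Men's actor- and partner-effect betas for the nine predictors in Table 3,
# in row order (facial attractiveness through openness).
actor_men = np.array([-.17, -.05, .11, -.08, -.22, -.13, .03, .08, -.03])
partner_men = np.array([.49, .33, -.13, .17, .16, .13, .24, -.15, .20])

r = np.corrcoef(actor_men, partner_men)[0, 1]
print(round(r, 2))  # -0.65, matching the value reported above
```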
The most important individual outcome variable for the further course of mating, the frequency of matches (reciprocated choices), was predicted for women equally well by their own choices and the choices of men (in both cases, \( \beta = .57, p < .001 \)), whereas men’s matches relied more on women’s choices (\( \beta = .71, p < .001 \)) than on men’s own choices (\( \beta = .52, p < .001; \chi^2(df = 1, n = 17) = 4.37, p < .05 \), for the difference; all variables standardized with \( M = 0 \) and \( SD = 1 \)). The negative individual reciprocity for men’s choices (see Table 1) contributed to this significant sex difference in the contribution of actor and partner effects.
Because the same predictor had opposite or at least different effects on the actor and partner effects that contributed positively to the matches, it is not surprising that the matches were less strongly predictable than the received choices (see Table 3). For men, only facial and vocal attractiveness and sociosexuality increased the frequency of matches, for women only facial attractiveness, and the predictions tended to be weaker for matches than for received choices in all four cases (see Table 3).
Concerning sex and age differences, men chose on average 4.1 women (37% of their 11.2 dating partners), whereas women chose on average 3.6 men (32% of their dating partners). By definition, these figures mirror those for popularity (see above). As expected by Hypothesis 2c, a significant sex by age interaction ($b = -0.015, p = .01$) was found. As Figure 1 shows, men’s choosiness increased and women’s choosiness decreased with increasing age. Interestingly, no main effects of age or sex on choosiness were found: The sex difference in choosiness was not significant ($b = -0.05, ns$), nor was the age difference (with age as a level 2 predictor, $b = -0.011, ns$).
**Dyadic hypotheses**
Whereas the preceding hypotheses refer to the level of individuals, the dyadic hypotheses refer to the dyadic level. The SRM relationship effects for choices assess the degree to which a participant tends to choose a speed-dating partner more or less often than one would expect on the basis of the participant’s actor effect and the partner’s partner effect. Thus, each of the 2160 dyads was characterized by one relationship effect for the man and one for the woman. As already described in the section on the SRM results, Hypothesis 3a of a positive but low dyadic reciprocity was confirmed (see Table 1). Accordingly, participants achieved far fewer matches than they received choices (see Table 2). The fewer and less variable matches, in turn, limited the predictability of individual dating success in terms of the frequency of achieved matches (see Table 3), further confirming Hypothesis 3a.
In order to test Hypothesis 3b, that similarity in individual attributes (rather than dissimilarity) increased the probability of matching, we computed absolute differences between all within-sex standardized ($M = 0, SD = 1$) predictors of the individual effects for each dyad and regressed, for each predictor, the relationship effects for matching on these dissimilarity scores as well as on the men’s and women’s individual scores across the 2160 dyads, using multi-level regressions (see the sketch below). Statistically controlling for the individual predictors was necessary because the dissimilarity scores can be confounded with individual effects (see also Luo & Zhang, 2009). Age effects were studied as before in terms of mean age in session (level 2 variable), but were non-significant in all cases. Only one significant effect of similarity on matching was found in the 12 analyses: The more similar men and women were in their facial attractiveness, the higher was the relationship effect for matching for that dyad ($\beta = -0.044, p < .03$; the negative sign reflects that the predictor is a dissimilarity score). Thus, Hypothesis 3b was confirmed, but only for similarity in one individual characteristic, namely facial attractiveness.
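A minimal sketch of how the dissimilarity scores and the control for individual scores can be set up, assuming synthetic placeholder data and using a random session intercept as a simplified stand-in for the full multi-level specification (Python/statsmodels):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2160  # number of man-woman dyads in the study
dyads = pd.DataFrame({
    "session": rng.integers(1, 18, n),     # 17 speed-dating sessions
    "attr_m": rng.standard_normal(n),      # man's standardized facial attractiveness
    "attr_w": rng.standard_normal(n),      # woman's standardized facial attractiveness
})
dyads["rel_effect"] = 0.1 * rng.standard_normal(n)  # placeholder relationship effect

# Dissimilarity score: absolute difference of the standardized individual scores.
dyads["dissim"] = (dyads["attr_m"] - dyads["attr_w"]).abs()

# Regress the relationship effect for matching on dissimilarity while
# controlling for both individual scores; random intercept per session.
fit = smf.mixedlm("rel_effect ~ dissim + attr_m + attr_w",
                  dyads, groups=dyads["session"]).fit()
print(fit.params["dissim"])  # a negative value would indicate a similarity effect
```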
**Short- versus long-term interest hypotheses**
Effects of age and sex on participants’ reports of short- versus long-term interest before the speed-dating events were analysed in a mixed analysis of covariance, with sex as a between-participant factor, mating interest as a within-participant factor, and age in session as a covariate. Because the age and the age-by-interest effects were not significant, age was dropped from the final model. All three remaining effects were significant ($F > 6.72, p < .01$, in each case). Confirming Hypothesis 4a, the participants reported more long-term interest ($M = 5.12, SD = 1.84$) than short-term interest ($M = 2.85, SD = 1.45$), $t(381) = 18.99, p < .001, d = 1.37$ (Cohen’s effect size for the paired-samples difference; see the sketch below). Confirming Hypothesis 4b, the sex by interest interaction was due to the fact that men reported more short-term interest than women (for men, $M = 3.21, SD = 1.90$; for women, $M = 2.50, SD = 1.72$), $t(380) = 3.81, p < .001, d = 0.39$, an effect accompanied by a higher variance of short-term interest in men than in women (Levene’s test, $F(1, 380) = 7.15, p < .005$). In contrast, men and women did not differ in their long-term interest (for men, $M = 5.06, SD = 1.48$; for women, $M = 5.18, SD = 1.42$), $t < 1$ for the difference in means, $F < 1$ for the difference in variances.
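A sketch of the paired comparison, using synthetic ratings that mimic the reported distributions; note that the effect size convention that reproduces $d = 1.37$ divides the mean difference by the average of the two SDs rather than by the SD of the differences:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic 1-7 ratings mimicking the reported distributions (n = 382).
long_term = np.clip(rng.normal(5.12, 1.84, 382), 1, 7)
short_term = np.clip(rng.normal(2.85, 1.45, 382), 1, 7)

t, p = stats.ttest_rel(long_term, short_term)  # paired-samples t-test

# Cohen's d computed with the average of the two SDs, which reproduces the
# reported value: (5.12 - 2.85) / sqrt((1.84**2 + 1.45**2) / 2) = 1.37.
d = (long_term.mean() - short_term.mean()) / np.sqrt(
    (long_term.std(ddof=1) ** 2 + short_term.std(ddof=1) ** 2) / 2)
```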
We tested Hypothesis 4c (compatibility of men’s and women’s mating tactics) by computing dissimilarity scores separately for short-term and long-term mating interest, just as in the tests of Hypothesis 3b. No significant (dis)similarity effects on matching were found.
Hypothesis 4d, predicting that short-term mating interest facilitates mating and long-term mating interest facilitates relating after the speed-dating sessions, requires that mating and relating did not overlap completely. Indeed, a cross-classification of mating and relating showed only moderate agreement (Cohen’s $\kappa$ was .54 at T1 and .53 at T2). Therefore, we could test Hypothesis 4d with multi-level models in which matches’ short- and long-term interest were entered as simultaneous predictors at level 1, and age in session and sex as predictors at level 2 (see the sketch below). This person-centred approach is informative about attributes of matches that increase or decrease the probability of long-term outcomes with them, and the cross-level interactions test for moderating influences of sex and age on these predictive relations at level 1. Because age did not show any significant effects, it was dropped from the final models.
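The analyses used HLM’s Bernoulli option; as a rough stand-in, the sketch below fits an ordinary logistic regression with cluster-robust standard errors per participant on synthetic placeholder data (all column names are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 221  # matches reported at T1
df = pd.DataFrame({
    "pid": rng.integers(0, 120, n),        # id of the focal participant
    "short_term": rng.standard_normal(n),  # match's standardized short-term interest
    "long_term": rng.standard_normal(n),   # match's standardized long-term interest
})
df["had_sex"] = rng.binomial(1, 0.25, n)   # placeholder binary mating outcome

# Both interests entered simultaneously; standard errors are clustered by
# participant instead of fitting the random effects of the HLM model.
fit = smf.logit("had_sex ~ short_term + long_term", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["pid"]})
print(fit.params)
```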
Significant cross-level effects were found in 6 of the 8 cases. Therefore, the effects of short- and long-term interest on mating and relating are reported separately for men and women. As Table 4 indicates, Hypothesis 4d was fully confirmed. Women had a preference for having sex with men who pursued a more short-term mating tactic but did not tend to develop a romantic relationship with them, whereas the long-term interest of men did not influence women’s mating or relating. Conversely, men had a preference for relating with women who pursued a more long-term mating tactic but did not tend to have sex with them, whereas the short-term interest of women did not influence men’s mating or relating. This pattern was identical at T1 and T2.
Table 4. Prediction of mating and relating by matches’ attributes
| Characteristic of matches | Sexual intercourse (after 6 weeks) | Romantic relationship (after 6 weeks) | Sexual intercourse (after 1 year) | Romantic relationship (after 1 year) |
|---------------------------------|-------|-------|-------|-------|
| Short-term interest of males | .019* | −.003 | .018* | −.012 |
| Short-term interest of females | .007 | .012 | .010 | .013 |
| Long-term interest of males | .008 | −.012 | .013 | −.007 |
| Long-term interest of females | .000 | .016* | .002 | .024** |
Note. Reported are predicted changes in probabilities for mating and relating with matches as estimated by logistic multi-level multiple regressions (HLM Bernoulli option) for standardized predictors \((M = 0, SD = 1)\) at T1 \((n = 221)\) and T2 \((n = 205)\). Short- and long-term interest were tested simultaneously.
*p < .05; **p < .01; ***p < .001.
It should be noted that these effects were within-participant effects and thus controlled for all individual attributes of the participant, providing stronger tests than between-participant analyses at the individual level. This is probably why it was possible at all to significantly predict variations in the small probabilities of mating and relating.
DISCUSSION
We studied short- and long-term outcomes of speed-dating in a large, age-heterogeneous community sample, predicting participants’ dating success by their own and their dating partner’s personality characteristics, and the mating and relating of successful daters over the year following the speed-dating event by their short- versus long-term mating interest. Our analyses were based on numerous evolutionarily informed hypotheses. Most of these hypotheses were confirmed and were consistent with earlier dating studies, lending further support to evolutionary accounts of human dating, mating and relating. First, we discuss the findings in the order of the hypotheses. Second, we highlight strengths and weaknesses of the speed-dating paradigm for research on sexual and romantic attraction. Third, we discuss practical implications for speed-dating as a means for finding a short-versus a long-term partner. Finally, we offer suggestions for future research using a speed-dating paradigm.
Popularity
The key finding for popularity was that both men’s and women’s popularity was largely based on easily perceivable physical attributes such as facial and vocal attractiveness, height and weight. For women’s popularity in speed-dating this was already the full story, that is, men used only physical cues for their choices. In contrast, women included more criteria, namely men’s sociosexuality and shyness as well as cues of current or future resource-providing potential, such as education, income, and openness to experience (but not cues of steady resource striving like conscientiousness). Interestingly, there is evidence that all these attributes can be accurately judged in short periods of time (Asendorpf, 1989; Borkenau, Mauer, Riemann, Spinath, & Angleitner, 2004; Boothroyd, Jones, Burt, DeBruine, & Perrett, 2008; Gangestad, Simpson, DiGeronimo, & Biek, 1992; Kraus & Keltner, 2009). However, only sociosexuality added incremental predictive power over and above physical attributes in the current study.
It was unexpected that sociosexuality emerged as a relatively powerful predictor of men’s popularity with women, particularly because women largely expressed a long-term mating interest. A possible explanation is that male sociosexuality indicates a history of successful mating experience, or mating skills, that are attractive to women. Similarly, shyness showed the expected negative effect on popularity only for men, which might be explained by the traditional male sex role: it requires men to behave more actively and proceptively in initial encounters with potential mates, which is likely particularly difficult for shy men.
The broad personality dimensions extraversion, neuroticism, agreeableness and conscientiousness showed no influence on participants’ popularity. This is inconsistent with the findings of Luo and Zhang (2009), who reported rather high correlations with these traits for women (but not men) in a student sample. Future studies are needed to decide whether the personality effects reported by Luo and Zhang (2009) were chance findings due to their relatively small sample of only 54 women and the heterogeneity of the correlations across speed-dating groups, or whether broad personality effects on popularity characterize only more homogeneous student populations.
In our study, the personality dimensions sociosexuality and shyness, which are specifically related to mating and social interactions with strangers, had more predictive power than the FFM dimensions of extraversion and neuroticism with which sociosexuality and shyness correlate: sociosexuality with high extraversion (Schmitt & Shackelford, 2008), and shyness with low extraversion and high neuroticism (e.g. Asendorpf & Wilpers, 1998). This finding relates to the bandwidth-fidelity trade-off in behavioural predictions from personality (Cronbach & Gleser, 1965; Ones & Viswesvaran, 1996; Paunonen, 2003), that is,
narrower traits that are tailored to specific situational contexts and behaviours often outperform broader traits in predictive power, whereas broader traits often outperform narrower traits if the goal is to predict many different behaviours in many different contexts.
**Choosiness**
Our data confirmed the expected positive correlation between choosiness and popularity (negative individual reciprocity in the terminology of the SRM), though significantly so only for men. Luo and Zhang (2009) also found positive, though non-significant, correlations for both men and women, possibly due to their small sample. Eastwick et al. (2007) reported negative individual reciprocities for ratings of romantic interest and ‘good chemistry’. Together with our finding that the predictions of actor and partner effects by individual attributes were mostly opposite in sign (see Table 3), we conclude that there is evidence for a positive correlation between choosiness and popularity. This is in line with mating market models, where highly popular people are predicted to be more careful in their choices and unpopular people are predicted to be less discriminating (Penke et al., 2007).
Strong evidence was found for the predicted interaction between age and sex for choosiness: The higher choosiness of women that is ubiquitous in studies of young adults decreased, and even tended to reverse, for older women. This is an important finding, because evolutionary accounts often assume a generally higher choosiness of the sex that invests more in offspring (females in most species; Trivers, 1972). It is notable that Trivers’s parental investment model is based on a reproductive argument that does not apply to women who have reached menopause. Our expectation was based on context-dependent mating strategies (Gangestad & Simpson, 2000), and our results confirm that life history phases (e.g. reproductive vs. post-reproductive) provide an important context that affects human mating behaviour. However, studies of dating in older adults are scarce, so our finding awaits replication.
**Short-term versus long-term mating tactics**
Evolutionary theories predict that single women should generally pursue more long-term mating tactics (with certain exceptions), whereas men are more variable in pursuing long-term and short-term tactics (Buss & Schmitt, 1993; Gangestad & Simpson, 2000; Penke & Denissen, 2008). This hypothesis was strongly confirmed by participants’ self-rated interests before the speed-dating event.
Despite this expected sex difference, we also found clear evidence that speed-dating is a context dominated by long-term mating interest for both men and women. Due to their higher variability in short-term interest, men reported higher average interest in short-term mating than women, but still much lower short-term than long-term interest, and their overall preference for long-term mating was not moderated by age. Thus, speed-dating is a social context that attracts mainly people pursuing long-term tactics, even at younger ages.
**Dyadic effects**
We confirmed earlier findings by Eastwick et al. (2007) and Luo and Zhang (2009) of a positive but low dyadic reciprocity of choice. A particular preference for a dating partner, controlling for one’s choosiness and the partner’s popularity, tends to be reciprocated by this dating partner. Such reciprocal preferences require interaction to develop. It seems that the 3 minutes of interaction in our design were sufficient to build up such reciprocity in liking. However, the reciprocity was not high compared to figures such as .45 after participants had already received feedback about the choices of their dating partners (Luo & Zhang, 2009), or .61 for long-term acquaintances (Kenny, 1994: p. 102). The rather low dyadic reciprocity implied that participants’ matches were much rarer than their choices, which, in turn, limited the variability of dating success and its predictability by individual characteristics (see Table 3).
Also confirmed was our expectation that similarity of the dating partners facilitates reciprocated choices. However, after controlling for individual effects the similarity effect was only significant for facial attractiveness. Kurzban and Weeden (2005) found similarity effects for height and BMI, whereas Luo and Zhang (2009) did not find any significant effect for 44 tests of similarity. Together, these findings suggest that similarity effects are weak in studies of brief real dating interactions. This result is different from the conclusions from questionnaire studies of attraction to hypothetical partners, from dyadic interaction studies where similarity effects are confounded with individual effects, and from studies of similarity in couples that regularly find clear similarity effects even after controlling for individual effects (Klohnen & Luo, 2003). It seems that similarity effects need more time to emerge than the 3 minutes provided by speed-dating.
Finally, our expectation that women’s mating is predicted by the short-term mating preferences of their male matches, whereas men’s relating is predicted by the long-term preferences of their female matches, was confirmed both 6 weeks and 1 year after speed-dating, attesting to the robustness of these findings. A differentiation of mating from relating was possible because after 6 weeks some participants reported relating without mating, and some participants reported sex outside the context of a romantic relationship. Also, the within-participant tests were more powerful than the more traditional between-dyad tests because they controlled for all individual characteristics of one of the partners.
**Strengths and weaknesses of the speed-dating paradigm**
The present study highlights several strengths of the speed-dating paradigm for research on sexual and romantic attraction: (a) the study of real-life interactions with participants who are actually motivated to find a partner rather than merely interested in participating in a psychological study, (b) the possibility to distinguish actor, partner and relationship effects in dating behaviour, (c) the possibility to distinguish individual from dyadic reciprocities, (d) the possibility to estimate actor and partner effects reliably because they are averaged across multiple dating partners, (e) a clear-cut criterion for dating success in terms of matching, and (f) the possibility to study the further development of interactions and relationships with matched speed-dating partners from both partners’ perspectives. The current study is the first to take full advantage of (f) in a community sample.
Despite these strengths, the speed-dating paradigm also has two weaknesses for studies of attraction. First, it is not clear to what extent speed-dating participants are representative of their age group in terms of individual attributes and dating, mating and relating behaviour. Second, the first minutes of dating can be studied in much detail in this design, but there is a long and often rocky road from dating to mating and relating, as indicated by the strong reduction in probabilities from dating success via written/phone contact to face-to-face contact, sexual intercourse and the establishment of a romantic relationship (see Table 2). Along this road, multiple factors influence mating and relating that are not captured by dating, which is the focus of speed-dating studies; it is therefore tempting but premature to generalize any finding from dating to attraction in general. For example, the strong influence of physical attractiveness and the weak influence of personality traits on attraction and choices in most dating studies, including all speed-dating studies, cannot be generalized to sexual or romantic attraction in the long run.
**Practical implications**
The two most important practical questions for men and women are: What kind of people will I meet at a speed-dating event, and what is my chance of securing a sexual or romantic partner from one speed-dating event? Concerning the first question, the age composition of the present study seems to be representative of speed-dating events in general, according to information provided by speed-dating companies in Germany as well as the published data on more than 10 000 North-American speed-dating participants by Kurzban and Weeden (2005), who reported a mean age of 33.1 years (in our study: 32.8 years); the variability in age was even higher in our study (7.4 years as compared to 5.3 years). Therefore, we are rather confident that our results can be generalized, at least in terms of age range, to speed-dating in Germany, if not in Western cultures generally, although our sample seems to be biased towards the better educated. From our data and the reports of the students who guided the participants through the sessions, we have no reason to assume that speed-dating participants differ from their age group in terms of personality or sexual and relationship experience. For example, the participants’ mean scores on the Big Five factors of personality closely correspond to those reported for representative German samples, except for higher scores in openness to experience, which can be attributed to their higher educational level, and their partner history closely corresponds to data from a large German internet survey (Penke & Asendorpf, 2008, Study 1).
Concerning the chances of securing a sexual or a romantic partner, these are 6% and 4%, respectively, according to our results. It is difficult to say whether these percentages are high or low, because empirical data for alternatives to speed-dating are missing. What is the chance of finding a sexual or romantic partner if one visits a café or a bar for 2 hours, looking for a partner? Probably much lower in the case of a café, and probably much higher for bars with a certain reputation, at least as far as a sex partner is concerned. Another way of looking at the probabilities of 6% and 4% is to convert them into time and money spent on multiple speed-dating events, assuming independence of the outcomes of each event. Assuming that one has to pay €30 for a speed-dating event lasting 3 hours including everything, finding a relationship partner requires investing 75 hours and €750 on average.
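The arithmetic behind these figures, assuming each event is an independent trial with success probability $p = .04$ for finding a romantic partner:

$$
E[\text{events}] = \frac{1}{p} = \frac{1}{0.04} = 25, \qquad 25 \times 3\ \text{h} = 75\ \text{h}, \qquad 25 \times \text{€}30 = \text{€}750.
$$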
**Future studies**
Future studies using the speed-dating paradigm should make sure that dating outcomes are measured with more than one criterion, so that measurement error can be separated from relationship effects. Also, they should try to study the process from dating to mating and relating in more detail by asking participants more often than we did about their contact with each other during the first 6 weeks or so after speed-dating (see Eastwick & Finkel, 2008, for such an approach). Much happens during these weeks, and a detailed process analysis of the post-dating routes to mating and relating would help to correct the picture painted by the first minutes of dating, which shows men focusing only on physical attractiveness, and women focusing on not much more. Complemented by such process analyses, speed-dating seems to be a valuable tool for better understanding human mating and relating.
**ACKNOWLEDGMENTS**
The first two authors contributed equally to this paper. This study was supported by a grant from the German Science Foundation to J. B. Asendorpf (As 59/15-3). LP is funded by the UK Medical Research Council (Grant No. 82800), which is part of the Help The Aged-funded Disconnected Mind research programme. The authors thank Marie-Luise Haupt, Karsten Krauskopf, Linus Neumann and Sebastian Teubner for their collaboration in data assessment and analysis.
**REFERENCES**
Asendorpf, J. B. (1989). Shyness as a final common pathway for two kinds of inhibition. *Journal of Personality and Social Psychology, 57*, 481–492.
Asendorpf, J. B., & Wilpers, S. (1998). Personality effects on social relationships. *Journal of Personality and Social Psychology, 74*, 1531–1544.
Boothroyd, L. G., Jones, B. C., Burt, D. M., DeBruine, L. M., & Perrett, D. I. (2008). Facial correlates of sociosexuality. *Evolution and Human Behavior, 29*, 211–218.
Borkenau, P., Mauer, N., Riemann, R., Spinath, F. M., & Angleitner, A. (2004). Thin slices of behavior as cues of personality and intelligence. *Journal of Personality and Social Psychology, 86*, 599–614.
Borkenau, P., & Ostendorf, F. (1993). *NEO-Fünf-Faktoren Inventar (NEO-FFI) (NEO five-factor inventory)*. Göttingen, Germany: Verlag für Psychologie.
Buss, D. M., & Schmitt, D. P. (1993). Sexual strategies theory: An evolutionary perspective on human mating. *Psychological Review, 100*, 204–232.
Connolly, J. J., Kavanagh, E. J., & Viswesvaran, C. (2007). The convergent validity between self and observer ratings of personality: A meta-analytic review. *International Journal of Selection and Assessment, 15*, 110–117.
Cronbach, L. J., & Gleser, G. C. (1965). *Psychological tests and personnel decisions*. Urbana, IL: University of Illinois.
Darwin, C. (1871). *The descent of man and selection in relation to sex*. London, UK: John Murray.
Eastwick, P. W., & Finkel, E. J. (2008). Sex differences in mate preferences revisited: Do people know what they really desire in a romantic partner? *Journal of Personality and Social Psychology, 94*, 245–264.
Eastwick, P. W., Finkel, E. J., Mochon, D., & Ariely, D. (2007). Selective versus unselective romantic desire: Not all reciprocity is created equal. *Psychological Science, 18*, 317–319.
Feinberg, D. R. (2008). Are human faces and voices ornaments signaling common underlying cues to mate value? *Evolutionary Anthropology, 17*, 112–118.
Feingold, A. (1990). Gender differences in effects of physical attractiveness on romantic attraction: A comparison across five research paradigms. *Journal of Personality and Social Psychology, 59*, 981–993.
Fisman, R., Iyengar, S. S., Kamenica, E., & Simonson, I. (2006). Gender differences in mate selection: Evidence from a speed-dating experiment. *Quarterly Journal of Economics, 121*, 673–697.
Fraley, R. C., & Shaver, P. R. (2000). Adult romantic attachment: Theoretical developments, emerging controversies, and unanswered questions. *Review of General Psychology, 4*, 132–154.
Furman, W., & Shaffer, L. (2003). The role of romantic relationships in adolescent development. In P. Florsheim (Ed.), *Adolescent romantic relations and sexual behavior: Theory, research, and practical implications* (pp. 3–22). Mahwah, NJ: Lawrence Erlbaum.
Gangestad, S. W., & Simpson, J. A. (2000). The evolution of human mating: Trade-offs and strategic pluralism. *Behavioral and Brain Sciences, 23*, 573–644.
Gangestad, S. W., Simpson, J. A., DiGeronimo, K., & Biek, M. (1992). Differential accuracy in person perception across traits: Examination of a functional hypothesis. *Journal of Personality and Social Psychology, 62*, 688–698.
Gangestad, S. W., Thornhill, R., & Garver-Apgar, C. E. (2005). Adaptations to ovulation. *Current Directions in Psychological Science, 14*, 312–316.
Hughes, S. M., Dispenza, F., & Gallup, G. G. Jr., (2004). Ratings of voice attractiveness predict sexual behavior and body configuration. *Evolution and Human Behavior, 25*, 295–304.
John, O. P., & Robins, R. W. (1993). Determinants of interjudge agreement on personality traits: The Big Five domains, observability, evaluativeness, and the unique perspective of the self. *Journal of Personality, 61*, 521–551.
Kenny, D. A. (1994). *Interpersonal perception: A social relations analysis*. New York: Guilford Press.
Kenny, D. A., Kashy, D. A., & Cook, W. L. (2006). *Dyadic data analysis*. New York: Guilford Press.
Kenny, D. A., & La Voie, L. (1984). The social relations model. In L. Berkowitz (Ed.), *Advances in experimental social psychology* (Vol. 18, pp. 142–182). Orlando, FL: Academic Press.
Kenny, D. A., & West, T. V. (2008). Zero acquaintance: Definitions, statistical models, findings, and process. In N. Ambady, & J. J. Skowronski (Eds.), *First impressions* (pp. 129–146). New York, NY: Guilford Press.
Kenrick, D. T., & Keefe, R. C. (1992). Age preferences in mates reflect sex differences in human reproductive strategies. *Behavioral and Brain Sciences, 15*, 75–133.
Kenrick, D. T., Sadalla, E. K., Groth, G., & Trost, M. R. (1990). Evolution, traits, and the stages of human courtship: Qualifying the parental investment model. *Journal of Personality, 58*, 97–116.
Klohnen, E. C., & Luo, S. (2003). Interpersonal attraction and personality: What is attractive – Self similarity, ideal similarity, complementarity, or attachment security? *Journal of Personality and Social Psychology, 85*, 709–722.
Kraus, M. W., & Keltner, D. (2009). Signs of socioeconomic status: A thin-slicing approach. *Psychological Science, 20*, 99–106.
Kurzban, R., & Weeden, J. (2005). Hurrydate: Mate preferences in action. *Evolution and Human Behavior, 26*, 227–244.
Kurzban, R., & Weeden, J. (2007). Do advertised preferences predict the behavior of speed-daters? *Personal Relationships, 14*, 623–632.
Lenton, A. P., Penke, L., Todd, P. M., & Fasolo, B. (in press). The heart has its reasons: Social rationality in mate choice. In R. Hertwig, U. Hoffrage, & the ABC Research Group (Eds.), *Social rationality*. New York, NY: Oxford University Press.
Luo, S., & Zhang, G. (2009). What leads to romantic attraction: Similarity, reciprocity, security, or beauty? Evidence from a speed-dating study. *Journal of Personality, 77*, 933–964.
Magnusson, P. K. E., Rasmussen, F., & Gyllensten, U. B. (2006). Height at age 18 years is a strong predictor of attained education later in life: Cohort study of over 950 000 Swedish men. *International Journal of Epidemiology, 35*, 658–663.
Mascie-Taylor, C. G. N., & Lasker, G. W. (2005). Biosocial correlates of stature in a British national cohort. *Journal of Biosocial Science, 37*, 245–251.
Mueller, U., & Mazur, A. (2001). Evidence of unconstrained directional selection for male tallness. *Behavioral Ecology and Sociobiology, 50*, 302–311.
Nettle, D. (2002a). Height and reproductive success in a cohort of British men. *Human Nature, 13*, 473–491.
Nettle, D. (2002b). Women’s height, reproductive success and the evolution of sexual dimorphism in modern humans. *Proceedings of the Royal Society of London B, 269*, 1919–1923.
Ones, D. S., & Viswesvaran, C. (1996). Bandwidth-fidelity dilemma in personality measurement for personnel selection. *Journal of Organizational Behavior, 17*, 609–626.
Paunonen, S. V. (2003). Big Five factors of personality and replicated prediction of behavior. *Journal of Personality and Social Psychology, 84*, 411–424.
Pawlowski, B., Dunbar, R. I. M., & Lipowicz, A. (2000). Tall men have more reproductive success. *Nature, 403*, 156.
Penke, L. (in press). Revised sociosexual orientation inventory. In T. D. Fisher, C. M. Davis, W. L. Yarber, & S. L. Davis (Eds.), *Handbook of sexuality-related measures* (3rd ed.). Thousand Oaks, CA: Taylor & Francis.
Penke, L., & Asendorpf, J. B. (2008). Beyond global sociosexual orientations: A more differentiated look at sociosexuality and its effects on courtship and romantic relationships. *Journal of Personality and Social Psychology, 95*, 1113–1135.
Penke, L., & Denissen, J. J. A. (2008). Sex differences and lifestyle-dependent shifts in the attunement of self-esteem to self-perceived mate value: Hints to an adaptive mechanism? *Journal of Research in Personality, 42*, 1123–1129.
Penke, L., Todd, P. M., Lenton, A., & Fasolo, B. (2007). How self-assessments can guide human mating decisions. In G. Geher, & G. F. Miller (Eds.), *Mating intelligence: Sex, relationships, and the mind’s reproductive system* (pp. 37–75). Mahwah, NJ: Lawrence Erlbaum.
Place, S. S., Todd, P. M., Penke, L., & Asendorpf, J. B. (2009). The ability to judge the romantic interest of others. *Psychological Science, 20*, 22–26.
Place, S. S., Todd, P. M., Penke, L., & Asendorpf, J. B. (in press). Humans show mate copying after observing real mate choices. *Evolution and Human Behavior*.
Pollet, T. V., & Nettle, D. (2008). Taller women do better in a stressed environment: Height and reproductive success in rural Guatemalan women. *American Journal of Human Biology*, 20, 264–269.
Raudenbush, S., Bryk, A., & Congdon, R. (2005). *HLM 6.0.3 for Windows (manual and software)*. Lincolnwood, IL: Scientific Software International.
Rhodes, G. (2006). The evolutionary psychology of facial beauty. *Annual Review of Psychology*, 57, 199–226.
Schmitt, D. P., & Shackelford, T. K. (2008). Big Five traits related to short-term mating: From personality to promiscuity across 46 nations. *Evolutionary Psychology*, 6, 246–282.
Silventoinen, K., Lahelma, E., & Rahkonen, O. (1999). Social background, adult body-height and health. *International Journal of Epidemiology, 28*, 911–918.
Szklarska, A., Koziel, S., Bielicki, T., & Malina, R. M. (2007). Influence of height on educational attainment in males at 19 years of age. *Journal of Biosocial Science, 35*, 575–582.
Swami, V., & Furnham, A. (2007). *The psychology of physical attraction*. London: Routledge.
Swami, V., Miller, R., Furnham, A., Penke, L., & Tovée, M. J. (2008). The influence of men’s sexual strategies on perceptions of women’s bodily attractiveness, health and fertility. *Personality and Individual Differences*, 44, 98–107.
Swami, V., & Tovée, M. J. (2005). Female physical attractiveness in Britain and Malaysia: A cross-cultural study. *Body Image, 2*, 115–128.
Thornhill, R., & Grammer, K. (1999). The body and face of women: One ornament that signals quality? *Evolution and Human Behavior*, 20, 105–120.
Todd, P. M., Penke, L., Fasolo, B., & Lenton, A. P. (2007). Different cognitive processes underlie human mate choices and mate preferences. *Proceedings of the National Academy of Sciences USA*, 104, 15011–15016.
Trivers, R. L. (1972). Parental investment and sexual selection. In B. Campbell (Ed.), *Sexual selection and the descent of man* (pp. 136–179). Chicago, IL: Aldine.
Watson, D., Klohnen, E. C., Casillas, A., Simms, E. N., Haig, J., & Berry, D. S. (2004). Match makers and deal breakers: Analyses of assortative mating in newlywed couples. *Journal of Personality*, 72, 1029–1068.
Yilmaz, N., Kilic, S., Kanat-Pektas, M., Gulerman, C., & Mollamahmutoglu, L. (2009). The relationship between obesity and fecundity. *Journal of Women’s Health*, 18, 633–636.
Air pollution is a major public health concern in the United States and worldwide, accounting for approximately 800 000 deaths annually.\textsuperscript{1} Historical episodes of accumulations of ambient particulate matter (PM), such as in London in December 1952 or in Donora Valley in the last 3 days of October 1948, have been associated with death rates increased by more than 10-fold, mostly from cardiovascular disease (CVD).\textsuperscript{2} In the past 30 years, massive efforts in controlling emissions have led to substantial lowering of air pollution levels. However, in the presence of stagnating weather, peaks more than 10 times higher than the background levels of PM pollution are still frequently recorded in U.S. cities\textsuperscript{3} and are followed by increased CVD hospitalization and mortality within hours or days.\textsuperscript{4,5} A recent comparative study showed that PM and traffic pollution exposures, because they expose millions to unhealthy air, are among the most frequent triggers of myocardial infarction at the population level, with about twice as many events as heavy alcohol consumption and more than 10 times as many as cocaine abuse.\textsuperscript{6} As recently as July 2012, warnings for unhealthy PM levels were issued for most large U.S. cities, as reported by the U.S. Environmental Protection Agency.\textsuperscript{3}
Blood pressure (BP) can change rapidly in response to environmental stressors\textsuperscript{7,8} and has been proposed as a primary intermediate response for acute PM-related cardiovascular events.\textsuperscript{4,7} Airborne PM $\leq 2.5\ \mu m$ in aerodynamic diameter (PM$_{2.5}$ or fine particles), as well as larger particles between 2.5 and 10 $\mu m$ (coarse particles), can be inhaled and deposited in the upper and lower airways. Because of their smaller size, fine particles can reach more deeply into the lungs and have been suggested to have more harmful effects on cardiovascular outcomes.\textsuperscript{4} Observational data and previous studies of controlled human exposures have reproducibly shown rapid adverse effects on BP, beginning as early as 2 hours into PM exposure.\textsuperscript{7,9} An increase as small as 1 mm Hg in usual systolic BP is estimated to increase the risk of CVD deaths by 2% to 4%,\textsuperscript{10,11} and transient increases have been linked to PM-related triggering of cardiovascular events.\textsuperscript{4,12} The limited understanding of the mechanisms linking air pollution exposure to cardiovascular outcomes, including effects on BP, is identified as a critical research and clinical gap in a 2010 statement of the American Heart Association.\textsuperscript{4}
DNA methylation, the most studied of the epigenetic mechanisms, is a natural process that suppresses gene expression via addition of methyl groups to the DNA. Loss of methylation in inflammatory genes has been shown in lymphocytes as early as 20 minutes after antigen presentation.\textsuperscript{13} Findings from human observational studies have suggested that PM exposure may induce loss of methylation in blood DNA, potentially reflecting activation of proinflammatory states in blood leukocytes.\textsuperscript{14} Specifically, PM-related hypomethylation has been repeatedly observed in transposable repeated elements, including $Alu$\textsuperscript{15–17} and long interspersed nuclear element-1 (LINE-1),\textsuperscript{15–18} as well as in candidate proinflammatory genes.\textsuperscript{16} Consistent with these observations, reduced methylation of genomic DNA in blood has also been observed in patients with cardiovascular disease,\textsuperscript{14,19} or CVD-related conditions and risk factors, including atherosclerosis,\textsuperscript{20} oxidative stress,\textsuperscript{21} and aging.\textsuperscript{22} However, current evidence on PM-induced hypomethylation rests entirely on observational studies, and no human experimental data are yet available to demonstrate effects of air pollution on DNA methylation. In addition, whether PM-induced hypomethylation mediates the effects of PM on cardiovascular outcomes, such as those on BP, has never been tested.
Controlled studies of human exposure to concentrated ambient particles (CAPs) provide a unique opportunity to simulate air pollution peaks while allowing for experimental control of the exposures. Previous controlled human exposure experiments have shown increased BP after exposure to fine CAPs\textsuperscript{9,23}—consistent with observational studies that have associated short-term PM$_{2.5}$ exposure with BP.\textsuperscript{4} To the best of our knowledge, changes in BP after coarse CAP exposure have not yet been documented. Although mostly deposited and cleared in the upper airways, coarse particles are enriched in organic components that activate innate inflammatory responses.\textsuperscript{24} Activation of specific innate immune responses in circulating leukocytes, such as those mediated through increased expression of the toll-like receptor-4 (TLR4), has been linked with BP and hypertension.\textsuperscript{25} Herein, we report the results of a double-blind randomized cross-over trial of controlled human exposures to fine and coarse CAPs. We experimentally determined effects on blood DNA methylation of LINE-1 and $Alu$ repetitive elements and candidate proinflammatory genes ($TLR4$, $IL-12$, $IL-6$, $iNOS$). In addition to evaluating CAPs effects on blood DNA methylation, we tested for CAPs-induced effects on BP and conducted mediation analysis to estimate the proportion of the effects on BP mediated by DNA methylation.
**Materials and Methods**
**Study Participants**
The study included 15 healthy volunteers between 18 and 60 years of age. All participants were nonsmokers and free of CVD. All experiments were conducted between January 2008 and March 2010 at the Gage Occupational and Environmental Health Unit, University of Toronto, Ontario, Canada. Exclusion criteria included a fasting total cholesterol $>6.2$ mmol/L, glucose $>7$ mmol/L, hypertension (resting BP $>140/90$ mm Hg), pregnancy/lactation, or ECG abnormalities. The study received institutional review board approval from St. Michael’s Hospital and University of Toronto. All participants provided written informed consent before enrolling.
**Study Design and Exposure Protocol**
Using a double-blind randomized placebo-controlled cross-over design, each participant underwent 3 exposures in random order: (i) fine CAPs; (ii) coarse CAPs; and (iii) High-Efficiency-Particulate-Air (HEPA)-filtered medical air (control), separated by a minimum 2-week washout period. Volunteers and study personnel were blinded to the exposure order. Only the technologist who generated the exposure was aware of
the exposure type. We utilized a controlled human exposure facility that concentrates fine or coarse particles under controlled conditions, using high-flow (5000 L/min) Harvard ambient particle concentrators (see details in Data S1). Briefly, ambient particles were drawn into a 1.8 m high PM$_{10}$ inlet located 10 m from a busy 4-lane downtown Toronto street, with $\approx$2500 vehicles passing during the 130-minute exposure. The CAP airstream was delivered directly to the volunteer who was seated inside a Lexan and steel tube frame enclosure (4.9 m$^3$, see exposure apparatus during one of the experiments in Figure S1). Participants were at rest and breathing freely (no mouthpiece) via an “oxygen type” facemask covering their nose and mouth.\textsuperscript{26} The target levels for the fine and coarse CAP experiments were 250 and 200 $\mu g/m^3$, respectively. The CAP/filtered air delivery system was designed so that there were no visual cues as to the exposure type while participants were seated in the exposure chamber.
**BP Measures and Blood Sample Collection**
All participants fasted ($\geq$8 hours) prior to their arrival at the facility at $\approx$7:30 AM. Blood samples were collected at $\approx$9 AM. Afterward, volunteers underwent the 130-minute exposure, at rest. Seated BP was obtained 10 minutes before exposure and 5 minutes after completion of the exposure. Postexposure blood samples were collected $\approx$1 hour after the end of the exposure. A standardized protocol for BP measurements was used as recommended by the American Heart Association (see Data S1, Figure S2).\textsuperscript{27}
**DNA Methylation Analyses**
Buffy coat was immediately obtained from blood in a preprocessing laboratory adjacent to the exposure facility, aliquoted, and frozen at $-20^\circ C$ until DNA isolation. All laboratory procedures on the buffy-coat samples, from DNA isolation through DNA methylation analyses, were performed in a single batch. DNA was purified using Qiagen DNeasy Blood and Tissue Kit (Qiagen). All samples from each participant were placed on the same analytical plate to avoid plate effects. DNA methylation analyses were performed by bisulfite PCR-Pyrosequencing. We performed DNA methylation analyses of $Alu$ and LINE-1 repeated sequences, as described previously,\textsuperscript{16} which allows for the amplification of a representative pool of repetitive elements. We developed assays for $TLR4$, $IL6$, $IL12$, and $iNOS$ methylation by locating their promoters using the Genomatix Software (Genomatix Software Inc) and amplified the sequences shown in Tables S1 and S2. In each assay, we measured %5mC at multiple CpG dinucleotides (Table S1). Every sample was tested in duplicate to confirm reliability.
**Statistical Analysis**
In each blood sample, DNA methylation analysis produced 8 values each for LINE-1 and $TLR4$ (methylation at 4 CpGs replicated in 2 runs), 6 values for $Alu$ (methylation at 3 CpGs replicated in 2 runs), and 4 values each for $IL-6$, $IL-12$, and $iNOS$ (methylation at 2 CpGs replicated in 2 runs). In addition, methylation was measured in each participant at 2 time points (pre- and postexposure) in each of the 3 randomized experiments. To account for this data structure and consider within-individual effects, we used mixed-effect models (PROC MIXED in SAS V9.2).\textsuperscript{16} We fitted mixed-effect models with a random intercept for each subject—which captures the correlation among measurements within the same subject; a random intercept for each CpG—which captures the correlation among measurements within the same CpG position; and a random slope for each position—which captures potential different effects of the exposures across the different positions. To control for potential confounding due to period effect, we also included a numeric variable indicating the order of exposure.
For each DNA methylation marker we assumed the following:
$$Y_{ijkl} = \beta_0 + \mu_i + \mu_{0j} + \beta_1(\text{fine}) + \mu_{1j}(\text{fine}) + \beta_2(\text{coarse}) + \mu_{2j}(\text{coarse}) + \beta_3(\text{post}) + \beta_4(\text{fine} \times \text{post}) + \beta_5(\text{coarse} \times \text{post}) + \beta_6(\text{order}) + \epsilon_{ijkl}$$
(1)
where $Y_{ijkl}$ is the value of methylation in subject $i$, CpG position $j$, time $k$ (pre- or postexposure) and experiment $l$ (fine CAPs, coarse CAPs, or medical air); $\beta_0$ is the overall intercept, which indicates the average methylation in the control group (medical air) in preexposure samples; $\mu_i$ is the random intercept for subject $i$; $\mu_{0j}$ is the random intercept for each CpG position; $\beta_1$ and $\beta_2$ are the main effects of the exposure to fine CAPs and to coarse CAPs, respectively, compared to the control exposure; $\beta_3$ is the main effect of time (postexposure compared to preexposure); $\mu_{1j}$ and $\mu_{2j}$ are the random slopes of the different CpG positions for each exposure; $\beta_4$ and $\beta_5$ are the interaction effects between exposure (fine and coarse, respectively) and time; $\beta_6$ represents the period effect.
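A hedged sketch of how equation 1 might be approximated outside SAS, here in Python with statsmodels; statsmodels has no direct syntax for crossed random effects, so the subject and CpG terms are expressed as variance components inside a single all-encompassing group, and all data below are synthetic placeholders:

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for subj in range(15):
    exp_order = rng.permutation(["air", "fine", "coarse"])  # randomized order
    for k, exp in enumerate(exp_order, start=1):
        for post, cpg, run in itertools.product([0, 1], range(4), range(2)):
            rows.append({"subject": subj, "cpg": cpg, "post": post, "order": k,
                         "fine": int(exp == "fine"), "coarse": int(exp == "coarse"),
                         "meth": 84 + rng.normal(0, 0.7)})  # placeholder %5mC values
df = pd.DataFrame(rows)

df["grp"] = 1  # single group so that crossed effects become variance components
vc = {"subject":    "0 + C(subject)",     # random intercept per subject
      "cpg":        "0 + C(cpg)",         # random intercept per CpG position
      "cpg_fine":   "0 + C(cpg):fine",    # random slope of fine CAPs per CpG
      "cpg_coarse": "0 + C(cpg):coarse"}  # random slope of coarse CAPs per CpG
model = smf.mixedlm(
    "meth ~ fine + coarse + post + fine:post + coarse:post + order",
    data=df, groups="grp", vc_formula=vc, re_formula="0")
result = model.fit()
```

Equation 2 for BP is the same fixed-effect structure without the CpG variance components, i.e. a plain random intercept per subject via `smf.mixedlm(formula, df, groups=df["subject"])`.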
The choice of the 6 methylation markers, and of BP as the clinical outcome, was made a priori. No other methylation markers or outcomes were examined. $P$ values from the model described in equation 1 were adjusted for multiple comparisons using a permutation test that accounts for data correlation.\textsuperscript{28} Methylation data are expected to be correlated, and the statistical tests are therefore likely not independent. In this situation, commonly used methods of multiple-testing correction such as Bonferroni and FDR will overestimate the adjusted $P$ value. A permutation test represents a straightforward—albeit computationally heavier—and accurate approach to correcting for multiple comparisons.\textsuperscript{28} Briefly, we randomly permuted the exposures within subject and then regressed each of the 6 DNA methylation markers over the exposures on this permuted dataset. The permutation breaks the link in the data between the exposures and DNA methylation, so the dataset generated is under the null hypothesis. We repeated this process 1000 times. A total of 12 000 $P$ values (6 genes $\times$ 1000 datasets $\times$ 2 exposures) were obtained. The adjusted permutation $P$ value was equal to the number of simulations with a $P$ value smaller than the observed $P$ value, divided by 12 000. Adjusted permutation $P$ values <0.05 were considered significant and reported alongside the nominal 95% CIs and $P$ values.
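A sketch of the within-subject permutation scheme (Python); `fit_pvals` stands for a user-supplied function that refits the mixed model above and returns one nominal $P$ value per marker, a simplification of the paper’s 12 000-value null distribution (6 genes × 1000 datasets × 2 exposures). The data frame is assumed to carry `subject`, `session`, and `exposure` columns:

```python
import numpy as np

def permute_within_subject(df, rng):
    """Shuffle exposure labels within each subject, relabeling whole sessions
    so that all rows of a session keep the same (new) exposure label."""
    out = df.copy()
    for subj, sub in df.groupby("subject"):
        sessions = sub["session"].unique()
        old = [sub.loc[sub["session"] == s, "exposure"].iloc[0] for s in sessions]
        new = dict(zip(sessions, rng.permutation(old)))
        out.loc[out["subject"] == subj, "exposure"] = \
            out.loc[out["subject"] == subj, "session"].map(new)
    return out

def adjusted_p(df, fit_pvals, markers, n_perm=1000, seed=0):
    """Family-wise adjusted p-values from the permutation null distribution."""
    rng = np.random.default_rng(seed)
    observed = fit_pvals(df)  # {marker: nominal p-value} on the real data
    null_ps = np.array([p for _ in range(n_perm)
                        for p in fit_pvals(permute_within_subject(df, rng)).values()])
    # Adjusted p = share of null p-values at least as small as the observed one.
    return {m: float((null_ps <= observed[m]).mean()) for m in markers}
```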
We then examined the effects of the exposures on systolic or diastolic BP by using the following mixed-effect models:
$$Y_{ikl} = \beta_0 + \mu_i + \beta_1(\text{fine}) + \beta_2(\text{coarse}) + \beta_3(\text{post}) + \beta_4(\text{fine} \times \text{post}) + \beta_5(\text{coarse} \times \text{post}) + \beta_6(\text{order}) + \epsilon_{ikl}$$
(2)
where $Y_{ikl}$ is the measured value of either systolic or diastolic BP for subject $i$, time $k$, and experiment $l$; $\beta_0$ is the overall intercept, which indicates the average value of BP in the control group (medical air) preexposure; $\beta_1$ and $\beta_2$ are the main effects of the exposure, respectively, to fine CAP and to coarse CAP compared to the control exposure; $\mu_i$ is the random intercept for the subject $i$; $\beta_3$ is the main effect of time (postexposure compared to preexposure); $\beta_4$ and $\beta_5$ are the interaction effects between exposure (fine and coarse, respectively) and time; $\beta_6$ represents the period effect.
From the regression coefficients of the models in equations 1 or 2, group means can be derived as the average values of the dependent variables for each combination of exposure and time. In our primary analysis, we present within-subject mean differences between postexposure measurements (ie, fine versus medical air; and coarse versus medical air). For those outcomes showing a statistically significant difference, we also report differences between pre- and postexposure means.
We finally evaluated the association of the DNA methylation markers with systolic or diastolic BP levels in postexposure measures. To reduce multiple testing, we prioritized and present in the paper only the results for the methylation markers that showed significant differences after either fine or coarse CAP exposure. For both systolic and diastolic BP, we assumed the following model:
$$Y_{ikl} = \beta_0 + \mu_i + \beta_1(\text{methylation}) + \epsilon_{ikl}$$
(3)
where $Y_{ikl}$ is the measured value of either systolic or diastolic BP for subject $i$, time $k$, and experiment $l$; $\beta_0$ is the overall intercept; $\beta_1$ is the regression coefficient for the DNA methylation marker, fitted as the mean across CpG positions and runs; $\mu_i$ is the random intercept for subject $i$. A nominal $P$ value <0.05 was considered statistically significant.
**Mediation Analysis**
We performed mediation analysis to estimate the proportion of the exposure effects on BP mediated by DNA methylation. Based on the a priori assumption that a mediated effect is biologically plausible, this approach decomposes the total observed effect of exposure on BP into a direct effect of exposure and an indirect effect of exposure that acts via the mediator of interest\textsuperscript{29,30} (ie, DNA methylation). Mediation analysis usually requires a significant relation of the outcome to the exposure, a significant relation of the outcome to the mediator and a significant relation of the mediator to the exposure;\textsuperscript{31} as potential mediators, we therefore analyzed the methylation markers that satisfied all these assumptions. In order to establish mediation, a significant relation of the mediator to the outcome, when both the mediator and the exposure are predictors, is also required. To test the latter assumption we evaluated the potential mediators in the following model:
$$Y_{ikl} = \beta_0 + \mu_i + \beta_1(\text{methylation}) + \beta_2(\text{fine}) + \beta_3(\text{coarse}) + \epsilon_{ikl}$$
(4)
where $Y_{ikl}$ is the measured value of BP (either systolic or diastolic) for subject $i$, time $k$, and experiment $l$; $\beta_0$ is the overall intercept; $\mu_i$ is the random intercept for subject $i$; $\beta_1$ is the regression coefficient for each CpG position; $\beta_2$ and $\beta_3$ are the main effects of the exposure to fine and coarse CAPs, respectively, relative to the medical air control exposure.
Once all these assumptions are verified, it is possible to evaluate the indirect effect, which estimates the size of the effect of CAPs exposure on BP that is mediated by DNA methylation.\textsuperscript{32} We estimated the indirect effect via a mixed-effect mediation model using PROC MIXED in SAS 9.2.\textsuperscript{29,30}
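A minimal product-of-coefficients sketch of the indirect effect, using plain mixed models with subject random intercepts on synthetic placeholder data; the paper’s dedicated mixed-effect mediation model in SAS PROC MIXED is more elaborate than this stand-in:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
# Synthetic long format: 15 subjects x 6 measurements, a fine-CAP indicator,
# an Alu-like methylation mediator, and systolic BP; all values placeholders.
n = 90
df = pd.DataFrame({"subject": np.repeat(np.arange(15), 6),
                   "fine": np.tile([0, 0, 1, 1, 0, 0], 15)})
df["meth"] = 24 - 0.4 * df["fine"] + rng.normal(0, 0.5, n)
df["sbp"] = 118 + 2.0 * df["fine"] - 1.5 * (df["meth"] - 24) + rng.normal(0, 5, n)

# Product-of-coefficients decomposition with subject random intercepts.
a = smf.mixedlm("meth ~ fine", df, groups=df["subject"]).fit().params["fine"]
fit_b = smf.mixedlm("sbp ~ meth + fine", df, groups=df["subject"]).fit()
b, direct = fit_b.params["meth"], fit_b.params["fine"]
total = smf.mixedlm("sbp ~ fine", df, groups=df["subject"]).fit().params["fine"]

indirect = a * b                  # effect of exposure routed via methylation
prop_mediated = indirect / total  # proportion of the total effect mediated
```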
**Results**
**Effects of Controlled Exposures on DNA Methylation**
Table 1 shows the baseline characteristics of the study participants. The study included 8 male and 7 female participants (average age 27.7 years). The actual average levels of PM mass concentrations achieved during the experiments were 241.8 $\mu g/m^3$, 210.6 $\mu g/m^3$, and 0.6 $\mu g/m^3$ during the fine CAP, coarse CAP, and control exposures, respectively.
Table 1. Characteristics of the Study Participants at the Enrollment Visit*, n=15
| Characteristics | Mean±SD or n (%) |
|--------------------------|------------------|
| Age, y | 27.7±8.6 |
| Gender | |
| Male | 8 (53.3) |
| Female | 7 (46.7) |
| Race | |
| White | 10 (66.7) |
| Black | 1 (6.7) |
| Asian | 4 (26.6) |
| BMI, kg/m² | 23.2±2.4 |
| Fasting glucose, mmol/L | 4.8±0.5 |
| Fasting cholesterol, mmol/L | 4.2±0.7 |
| Heart rate, beats/min | 67.9±12.6 |
| Systolic BP, mm Hg | 117.6±14.1 |
| Diastolic BP, mm Hg | 69.1±12.2 |
| White blood cell count, 10⁹/L | 5.5±1.1 |
| DNA methylation, % 5mC | |
| Alu | 24.2±0.5 |
| LINE-1 | 84.3±0.7 |
| TLR4 | 3.6±0.8 |
| IL-6 | 45.5±7.4 |
| IL-12 | 94.9±0.7 |
| iNOS | 62.9±2.8 |
BMI indicates body mass index; BP, blood pressure; LINE-1, long interspersed nuclear element-1; TLR4, toll-like receptor-4; IL-6, interleukin 6; IL-12, interleukin 12; iNOS, inducible nitric oxide synthase gene.
*Variables assessed at a preliminary screening visit conducted before the beginning of the experiments except DNA methylation—methylation values were measured on blood samples collected before the first exposure experiments.
Figure 1 shows the average postexposure differences in methylation in blood samples collected after exposures to CAPs (fine or coarse) relative to the medical air control exposure, taken as reference (ie, mean within-subject differences between postexposure measurements obtained from model [1]); because methylation showed large between-marker differences in mean and range of values, to allow for comparability we report standardized βs expressing the difference between exposures as a fraction of the SD of DNA methylation. Alu methylation was significantly lower after fine-CAPs exposure (standardized β=−0.74, P=0.0006), compared to postmedical air (control) exposure. TLR4 methylation was significantly lower after both fine- (standardized β=−0.21, P=0.02) and coarse CAPs (standardized β=−0.27, P=0.01) exposures, relative to the postcontrol exposure (Figure 1). After P value adjustment for multiple comparisons, the postexposure difference in Alu methylation between fine CAPs and medical filtered air remained significant (permutation-adjusted P=0.03). The effect on TLR4 methylation remained significant for coarse CAPs (permutation-adjusted P=0.04), but not for fine CAPs (permutation-adjusted P=0.08). No significant differences were observed in postexposure LINE-1, IL-6, IL-12, or iNOS methylation. Postexposure comparisons for each of the 6 methylation markers are reported in Table S5.
We confirmed these findings by evaluating the average within-participant change in DNA methylation in postexposure relative to preexposure blood samples in each of the 3 exposures (fine CAPs, coarse CAPs, or control). *Alu* methylation
showed a significant average within-participant decrease in postexposure samples after fine CAPs exposure (standardized $\beta=-0.40$, $P=0.05$), and no change after coarse CAPs (standardized $\beta=0.02$, $P=0.86$) or control exposures (standardized $\beta=0.20$, $P=0.26$). *TLR4* methylation showed a significant average within-participant decrease after fine (standardized $\beta=-0.21$, $P=0.02$) and coarse (standardized $\beta=-0.16$, $P=0.05$) CAPs exposures, and no significant changes after control exposure (standardized $\beta=0.11$, $P=0.28$).
**Effects of Controlled Exposures on BP**
Figure 2 shows the average BP differences after exposures to CAPs (fine or coarse) relative to the control exposure, taken as reference (ie, mean within-subject differences between postexposure measurements obtained from model [2]). Systolic BP postexposure was significantly higher after exposure to fine ($\beta=2.53$ mm Hg, $P=0.001$) and coarse ($\beta=1.56$ mm Hg, $P=0.03$) CAPs relative to measurements after the control exposure. Postexposure differences in diastolic BP were not statistically significant (Figure 2).
To confirm these findings, we also estimated the average within-participant BP change in postexposure measurements relative to preexposure for each of the 3 exposures. Both systolic and diastolic BP showed significant average within-participant postexposure increases after fine CAPs ($\beta=2.97$ mm Hg, $P=0.002$ for systolic BP; and $\beta=1.87$ mm Hg, $P=0.005$ for diastolic BP) and coarse CAPs ($\beta=2.11$ mm Hg, $P=0.0008$ for systolic BP; and $\beta=2.36$ mm Hg, $P=0.0007$ for diastolic BP). However, as expected given normal circadian variation in BP, we also found in the control experiments a small, nonsignificant postexposure increase in systolic BP ($\beta=0.25$ mm Hg, $P=0.74$) and a stronger increase in diastolic BP ($\beta=1.83$ mm Hg, $P=0.006$), although both were less pronounced than the increases found after fine or coarse CAPs exposures.
**Association of DNA Methylation With Blood Pressure**
To reduce false positive findings from multiple testing, we prioritized and present only the results on the association between DNA methylation and BP for the 2 markers that showed effects from CAP exposures, that is, *Alu* and *TLR4* (Figure 3). Decreased *Alu* methylation was associated with significantly increased diastolic BP ($\beta=0.41$, $P=0.04$) and nonsignificantly increased systolic BP ($\beta=0.40$, $P=0.15$). Decreased *TLR4* methylation was associated with significant increases of both diastolic ($\beta=0.84$, $P=0.02$) and systolic BP ($\beta=1.45$, $P=0.01$). Analyses stratified by exposure type showed correlations between DNA methylation and BP that were substantially homogeneous across the different exposures (data not shown).
**Sensitivity Analyses**
By study design, exposure-order randomization is expected to balance potential confounders across exposure groups. We nonetheless adjusted our primary analysis for experiment order to limit confounding from random order imbalances. Moreover, by fitting a random intercept for each participant, our modeling approach controlled for participant characteristics that do not vary over time. Carryover effects remain a possible source of residual confounding. To address this possibility, we performed a sensitivity analysis adjusting models (1) and (2) for the previous exposure (a binary variable indicating whether the previous exposure was CAPs or medical air). No major differences in the results were observed (Tables S6 and S7). As additional sensitivity analyses, multiple regression models were fitted to control for potential time-varying confounders such as white blood cell counts and proportions of blood leukocyte cell types in differential blood counts (neutrophils, lymphocytes, monocytes, eosinophils, and basophils). We first checked whether blood cell proportions differed between pre- and postexposure samples, or across exposure types in postexposure blood samples, and found no statistically significant differences (Tables S8 and S9). Nonetheless, we further adjusted model (1) for white blood cell counts and leukocyte-type proportions to exclude potential influences on DNA methylation. Results from these adjusted models (Table S10) were similar to those from our primary analysis.

**Figure 2.** Effect of controlled exposures to fine concentrated ambient particles (CAPs) and coarse CAPs on systolic and diastolic blood pressure (BP). Differences in systolic and diastolic BP between CAPs (fine or coarse) and medical air (control) exposures in postexposure measurements. βs and 95% confidence intervals express the difference in BP (mm Hg) between CAPs exposures and the control exposure (HEPA-filtered medical air) in postexposure measurements. HEPA indicates high-efficiency particulate air.

**Figure 3.** Associations of *Alu* and *TLR4* methylation with systolic and diastolic blood pressure (BP). Standardized βs and 95% confidence intervals are shown. Standardized βs express the change in BP, as a fraction of the BP standard deviation, associated with a decrease in DNA methylation equal to its standard deviation. *TLR4* indicates toll-like receptor-4.
**Mediation Analysis**
We performed mediation analysis to estimate the proportion of the effects of the exposures on BP that were mediated by postexposure changes in DNA methylation. *Alu* methylation satisfied the underlying assumptions for mediation analysis, described in the methods section, as it showed significant associations with fine CAPs exposure ($\beta=-0.74$, adjusted $P$ value=0.03), as well as with systolic BP in the model including both *Alu* methylation and fine CAPs exposure as independent predictors ($\beta=-0.37$, $P=0.03$). Because these results fulfill the assumptions for mediation analysis, we considered the potential pathway from fine CAPs exposure through *Alu* methylation to systolic BP and estimated the indirect effect and the proportion of mediation.
In the mediation analysis models, estimates of the proportion of mediation indicated that *Alu* hypomethylation mediated 10% of the positive association between fine CAP exposure and systolic BP (indirect effect: 0.40).
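For reference, with the total effect of exposure on BP decomposed into direct and indirect components, the reported proportion of mediation corresponds to

$$\text{proportion mediated} = \frac{\text{indirect effect}}{\text{total effect}} = \frac{\text{indirect effect}}{\text{direct effect} + \text{indirect effect}},$$

with the mixed-model estimation of these quantities following the approaches cited above.\textsuperscript{29,30}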
**Discussion**
In the present study of controlled human exposures, fine and coarse CAPs induced blood hypomethylation of the *Alu* repetitive elements and the *TLR4* gene. Hypomethylation of both *Alu* and *TLR4* was associated with increased systolic BP after the exposures. A wealth of epidemiological studies has reported associations between peaks of ambient particulate matter exposure and cardiovascular disease and death.\textsuperscript{1,5} Several biological mechanisms that may in part explain these findings have been identified through the use of controlled human exposures to CAPs.\textsuperscript{4} Blood pressure, a well-known predictor of cardiovascular risk, has been shown to exhibit small but potentially critical acute increases in response to fine CAPs exposures.\textsuperscript{9} Decreased global and gene-specific DNA methylation has been proposed as a novel mechanism mediating the effect of air pollution exposure on CVD-related events.\textsuperscript{19} Castro et al\textsuperscript{20} found lower DNA methylation content in peripheral blood leukocytes from patients with atherosclerotic cardiovascular disease. Furthermore, recent findings from the Normative Aging Study have shown that lower blood LINE-1 methylation predicts incidence of and mortality from ischemic heart disease and stroke.\textsuperscript{14} Processes related to cardiovascular disease, such as oxidative stress,\textsuperscript{21} atherosclerosis,\textsuperscript{20} and aging,\textsuperscript{22} have been associated with lower DNA methylation content in blood. In vascular tissues, hypomethylated DNA has been shown to be prone to mutations or aberrant gene expression patterns, leading to the transition from a normal phenotype to vascular fibrocellular lesions through increased proliferation of vascular smooth muscle cells and lipid deposition.\textsuperscript{20}
Previous \textit{in vitro} experiments have shown that repetitive-element and gene-specific hypomethylation is induced by biological processes, such as oxidative stress,\textsuperscript{21} which are generated by PM in exposed individuals.\textsuperscript{4} Oxidative DNA damage can interfere with the ability of methyltransferases to interact with DNA,\textsuperscript{21} thus resulting in hypomethylation of cytosine residues at CpG sites. The association between ambient particle exposure and DNA methylation has been observed in repetitive elements, such as *Alu* and LINE-1,\textsuperscript{15} in innate immune and proinflammatory pathways (*TLR4*, *IL-12*, *IL-6*),\textsuperscript{17,18} and in the inducible nitric oxide synthase gene (*iNOS*).\textsuperscript{16}
In our study, *Alu* and *TLR4* methylation were found to be significantly lower after CAPs exposures, providing, to the best of our knowledge, the first direct experimental evidence that PM exposure induces DNA hypomethylation in humans. We also observed a significant increase in systolic BP after CAPs exposures, confirming previous results on fine particle exposure\textsuperscript{4} and providing novel experimental evidence that coarse CAPs exposure induces cardiovascular responses. Our findings are also consistent with the hypothesis that rapid hypomethylation of *Alu* and *TLR4* is an epigenetic mechanism that mediates the effects of particle exposure on BP. Activation of *Alu* repetitive elements, which is associated with hypomethylation of genomic DNA, increases in older individuals and has been suggested to contribute to biological aging and age-related chronic disease.\textsuperscript{33} Nearly 1 million copies of *Alu* sequences are dispersed throughout the genome.\textsuperscript{34} *Alu* methylation states have been shown to control DNA compaction and alter the expression of nearby genes.\textsuperscript{34} Previous studies have documented \textit{Alu} sequences in a number of hypertension-related genes, including the serine/threonine protein kinase family member \textit{WNK1},\textsuperscript{35} angiotensin I converting enzyme,\textsuperscript{36,37} tissue-type plasminogen activator,\textsuperscript{38} and 11beta-hydroxysteroid dehydrogenase type 2.\textsuperscript{39} Whether the association found in our study between global \textit{Alu} hypomethylation and BP is driven by hypomethylation of specific \textit{Alu} sequences in these genes, or in other genomic regions, remains to be determined.
Recent investigations have consistently indicated that expression of \textit{TLR4} on circulating leukocytes may act as a primary communication channel between the innate immune system and systemic vascular function. Experimental models have shown that \textit{TLR4}, which is activated by endotoxin contained in PM, contributes to inducing PM-related inflammation, oxidative stress, and CVD-related responses.\textsuperscript{40,41} Kampfrath et al\textsuperscript{40} showed that TLR4 deficiency prevented the increased microvascular vasoconstriction induced in mice by PM\textsubscript{2.5} exposure, as well as PM-related inflammatory and oxidative-stress responses. Bomfim et al\textsuperscript{41} showed that TLR4 inactivation with anti-TLR4 antibodies reduced the mean arterial pressure in spontaneously hypertensive rats. Recent molecular studies have shown that TLR4 binds a wide range of endogenous ligands related to BP regulation and cardiovascular function, including angiotensin II (AT-II), heat shock protein, fibrinogen, and fibronectin.\textsuperscript{41} Downstream products of TLR4 signaling include thromboxane A2, a potent vasoconstrictor that can induce rapid increases in BP.\textsuperscript{42} TLR4 expression is increased in both peripheral mononuclear leukocytes and cardiac tissues of hypertensive patients.\textsuperscript{25,43} Our results are consistent with these animal models, which indicated PM-induced activation of TLR4 pathways as integral to PM-related cardiovascular effects.
Results from regression models in the present study were reported as standardized βs, which express the effects of the exposures as a fraction of an SD of the DNA methylation markers. This presentation allowed us to compare the size of CAPs effects across different markers and also showed that the effects of CAPs on DNA methylation were sizable relative to the methylation SDs: for instance, they amounted to as much as 74% of the SD of \textit{Alu} methylation. However, the SDs of both \textit{Alu} and \textit{TLR4} were small, indicating that \textit{Alu} and \textit{TLR4} methylation is tightly regulated. These findings suggest that even small changes in absolute DNA methylation, potentially corresponding to profound demethylation in a subset of blood leukocytes, may lead to effects on BP.

In our study, CAP exposures did not show consistent effects on LINE-1 methylation. Methylation measures in LINE-1 and \textit{Alu} repetitive elements have been proposed as markers of genomic DNA methylation content based on results in cancer cells.\textsuperscript{33} However, the 2 repetitive elements are controlled through different mechanisms and have been shown to have different transcription patterns in response to environmental stressors and other conditions.\textsuperscript{33} For instance, \textit{Alu}, but not LINE-1, methylation has been shown to decrease with aging,\textsuperscript{33} potentially reflecting differential susceptibility to cumulative environmental insults over time. Our data provide further evidence that \textit{Alu} and LINE-1 methylation have different sensitivities to environmental stressors,\textsuperscript{15} and that they can show different associations with cardiovascular outcomes, such as increased BP. In contrast to previous evidence from in vitro and observational investigations,\textsuperscript{16–18} CAPs exposures did not affect \textit{IL-6}, \textit{IL-12}, or \textit{iNOS} methylation in our study. Despite the advantage of using high-precision quantitative pyrosequencing measures, the lower number of CpG sites analyzed in these genes, compared with those measured in \textit{Alu} and \textit{TLR4}, may have limited our ability to detect potential effects.

Blood DNA is derived from a mixed population of different types of circulating white blood cells. Therefore, our findings on DNA methylation may have resulted from a CAP-induced shift in cell populations. However, CAPs exposure had no significant effects on the proportions of the major leukocyte cell types, and sensitivity analyses showed no substantial differences in the results from models adjusted for total leukocyte count and proportions of neutrophils, lymphocytes, monocytes, basophils, and eosinophils. Although these findings limit the chance that shifts in major cell types underlie CAP-induced effects on DNA methylation, our results could still be driven by changes in smaller subpopulations, such as the different types of circulating lymphocytes. TLR4 is expressed in a variety of hematopoietic cells, including lymphocytes, monocytes, and myeloid cells.\textsuperscript{24} Future studies are needed to confirm our findings and to determine the cell compartment responsible for the observed changes. Finally, the volunteers enrolled in the present study were healthy and not necessarily representative of the general population.
Our results may not be generalizable to different population strata and, in particular, should be replicated in a higher-risk population. Moreover, the relatively small sample size might have limited our ability to detect significant effects.
In conclusion, our results demonstrate for the first time in a human experimental study that PM exposure induces rapid DNA hypomethylation. \textit{Alu} and \textit{TLR4} hypomethylation may represent a novel mechanism that mediates environmental effects on BP.
**Sources of Funding**
The present study was supported by grants from Health Canada, Environment Canada, AllerGen NCE, the US EPA (RD-83241601; RD-83479801), and the National Institute of Environmental Health Sciences (ES000002, ES020268, ES009825, and ES019773). The contents of this publication are solely the responsibility of the grantee and do not necessarily represent the official views of the funding agencies. Further, the funding agencies do not endorse the purchase of any commercial products or services mentioned in the publication.
**Disclosures**
None.
**References**
1. WHO. *Air Quality and Health*. Geneva, Switzerland: WHO; 2009.
2. Bell ML, Davis DL. Reassessment of the lethal London fog of 1952: novel indicators of acute and chronic consequences of acute exposure to air pollution. *Environ Health Perspect.* 2001;109(suppl 3):389–394.
3. Office of Air Quality Planning and Standards. *Air Now: Action Days*. U.S. Environmental Protection Agency; 2011.
4. Brook RD, Rajagopalan S, Pope CA III, Brook JR, Bhatnagar A, Diez-Roux AV, Holguin F, Hong Y, Luepker RV, Mittleman MA, Peters A, Siscovick D, Smith SC Jr, Whitsel L, Kaufman JD. Particulate matter air pollution and cardiovascular disease: an update to the scientific statement from the American Heart Association. *Circulation.* 2010;121:2331–2378.
5. O’Toole TE, Conklin DJ, Bhatnagar A. Environmental risk factors for heart disease. *Rev Environ Health.* 2008;23:167–202.
6. Nawrot TS, Perez L, Kunzli N, Munters E, Nemery B. Public health importance of triggers of myocardial infarction: a comparative risk assessment. *Lancet.* 2011;377:732–740.
7. Brook RD, Rajagopalan S. Particulate matter, air pollution, and blood pressure. *J Am Soc Hypertens.* 2009;3:332–350.
8. Zanobetti A, Canner MJ, Stone PH, Schwartz J, Sher D, Eagan-Bengston E, Gates KA, Hartley LH, Suh H, Gold DR. Ambient pollution and blood pressure in cardiac rehabilitation patients. *Circulation.* 2004;110:2184–2189.
9. Urch B, Silverman F, Corey P, Brook JR, Lukic KZ, Rajagopalan S, Brook RD. Acute blood pressure responses in healthy adults during controlled air pollution exposures. *Environ Health Perspect.* 2005;113:1052–1055.
10. Stamler J, Stamler R, Neaton JD. Blood pressure, systolic and diastolic, and cardiovascular risks. US population data. *Arch Intern Med.* 1993;153:598–615.
11. Van den Hoogen PC, Feskens EJ, Nagelkerke NJ, Menotti A, Nissinen A, Kromhout D. The relation between blood pressure and mortality due to coronary heart disease among men in different parts of the world. Seven countries study research group. *N Engl J Med.* 2000;342:1–8.
12. Baccarelli A, Benjamin EJ. Triggers of MI for the individual and in the community. *Lancet.* 2011;377:694–696.
13. Bruniquel D, Schwartz RH. Selective, stable demethylation of the interleukin-2 gene enhances transcription by an active process. *Nat Immunol.* 2003;4:235–240.
14. Baccarelli A, Wright RO, Bollati V, Litonjua AA, Zanobetti A, Tarantini L, Sparrow D, Vokonas P, Schwartz J. Ischemic heart disease and stroke in relation to blood DNA methylation. *Epidemiology.* 2010;21:819–828.
15. Baccarelli A, Wright RO, Bollati V, Tarantini L, Litonjua AA, Suh HH, Zanobetti A, Sparrow D, Vokonas PS, Schwartz J. Rapid DNA methylation changes after exposure to traffic particles. *Am J Respir Crit Care Med.* 2009;179:572–578.
16. Tarantini L, Bonzini M, Apostoli P, Pegoraro V, Bollati V, Marinelli B, Cantone L, Rizzo G, Hou L, Schwartz J, Bertazzi PA, Baccarelli A. Effects of particulate matter on genomic DNA methylation content and iNOS promoter methylation. *Environ Health Perspect.* 2009;117:217–222.
17. Madrigano J, Baccarelli A, Mittleman MA, Wright RO, Sparrow D, Vokonas PS, Tarantini L, Schwartz J. Prolonged exposure to particulate pollution, genes associated with glutathione pathways, and DNA methylation in a cohort of older men. *Environ Health Perspect.* 2011;119:977–982.
18. Bollati V, Baccarelli A, Hou L, Bonzini M, Fustinoni S, Cavallo D, Byun HM, Jiang J, Marinelli B, Pesatori AC, Bertazzi PA, Yang AS. Changes in DNA methylation patterns in subjects exposed to low-dose benzene. *Cancer Res.* 2007;67:876–880.
19. Baccarelli A, Rienstra M, Benjamin EJ. Cardiovascular epigenetics: basic concepts and results from animal and human studies. *Circ Cardiovasc Genet.* 2010;3:567–573.
20. Castro R, Rivera I, Struys EA, Jansen EE, Ravasco P, Camilo ME, Blom HJ, Jakobs C, Tavares de Almeida I. Increased homocysteine and S-adenosylmethionine concentrations and DNA hypomethylation in vascular disease. *Clin Chem.* 2003;49:1292–1296.
21. Valinluck V, Tsai HH, Rogstad DK, Burdzy A, Bird A, Sowers LC. Oxidative damage to methyl-CpG sequences inhibits the binding of the methyl-cpg binding domain (MBD) of methyl-CpG binding protein 2 (MeCP2). *Nucleic Acids Res.* 2004;32:4100–4108.
22. Fuke C, Shimabukuro M, Petronis A, Sugimoto J, Oda T, Miura K, Miyazaki T, Ogura C, Okazaki Y, Jinno Y. Age related changes in 5-methylcytosine content in human peripheral leukocytes and placentas: an HPLC-based study. *Ann Hum Genet.* 2004;68:196–204.
23. Brook RD, Urch B, Dvonch JT, Bard RL, Speck M, Keeler G, Morishita M, Marsik FJ, Kamal AS, Kaciroti N, Harkema J, Corey P, Silverman F, Gold DR, Wellerius G, Mittleman MA, Rajagopalan S, Brook JR. Insights into the mechanisms and mediators of the effects of air pollution exposure on blood pressure and vascular function in healthy humans. *Hypertension.* 2009;54:659–667.
24. Rehli M, Poltorak A, Schwarzfischer L, Krause SW, Andreesen R, Beutler B. PU.1 and interferon consensus sequence-binding protein regulate the myeloid expression of the human toll-like receptor 4 gene. *J Biol Chem.* 2000;275:9773–9781.
25. Marketou ME, Kontaraki JE, Zacharis EA, Kochiadakis GE, Giaouzaki A, Chlouverakis G, Vardas PE. TLR2 and TLR4 gene expression in peripheral monocytes in nondiabetic hypertensive patients: the effect of intensive blood pressure-lowering. *J Clin Hypertens (Greenwich).* 2012;14:330–335.
26. Petrovic S, Urch B, Brook J, Datema J, Purdham J, Liu L, Lukic Z, Zimmerman B, Tofler G, Downar E, Corey P, Tarlo S, Broder I, Dales R, Silverman F. Cardiorespiratory effects of concentrated ambient PM2.5: a pilot study using controlled human exposures. *Inhal Toxicol.* 2000;12:173–188.
27. Alpert B, McCrindle B, Daniels S, Dennison B, Hayman L, Jacobson M, Mahoney L, Rocchini A, Steinberger J, Urbina E, Williams R. Recommendations for blood pressure measurement in human and experimental animals: Part 1: blood pressure measurement in humans. *Hypertension.* 2006;48:e3; author reply e5.
28. Wilker EH, Alexeeff SE, Suh H, Vokonas PS, Baccarelli A, Schwartz J. Ambient pollutants, polymorphisms associated with microRNA processing and adhesion molecules: the normative aging study. *Environ Health.* 2011;10:45.
29. VanderWeele TJ. Marginal structural models for the estimation of direct and indirect effects. *Epidemiology.* 2009;20:18–26.
30. Bauer DJ, Preacher KJ, Gil KM. Conceptualizing and testing random indirect effects and moderated mediation in multilevel models: new procedures and recommendations. *Psychol Methods.* 2006;11:142–163.
31. VanderWeele TJ, Valeri L, Ogburn EL. The role of measurement error and misclassification in mediation analysis: mediation and measurement error. *Epidemiology.* 2012;23:561–564.
32. Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. *J Pers Soc Psychol.* 1986;51:1173–1182.
33. Bollati V, Schwartz J, Wright R, Litonjua A, Tarantini L, Suh H, Sparrow D, Vokonas P, Baccarelli A. Decline in genomic DNA methylation during aging in a cohort of elderly subjects. *Mech Ageing Dev.* 2009;130:234–239.
34. Baccarelli A, Bollati V. Epigenetics and environmental chemicals. *Curr Opin Pediatr.* 2009;21:243–251.
35. Putku M, Kepp K, Org E, Sõber S, Comas D, Viigimaa M, Veldre G, Juhanson P, Hallast P, Tõnisson N, Shaw-Hawkins S, Caulfield MJ, Khusnutdinova E, Kozich V, Munroe PB, Laan M. Novel polymorphic AluYb9 insertion in the WNK1 gene is associated with blood pressure variation in Europeans. *Hum Mutat.* 2011;32:806–814.
36. Agirbasli M, Gunev AI, Ozturhan HS, Agirbasli D, Ulucan K, Sevinc D, Kirac D, Ryckman KK, Williams SM. Multifactor dimensionality reduction analysis of MTHFR, PAI-1, ACE, PON1, and eNOS gene polymorphisms in patients with early onset coronary artery disease. *Eur J Cardiovasc Prev Rehabil.* 2011;18:803–809.
37. Rieder MJ, Taylor SL, Clark AG, Nickerson DA. Sequence variation in the human angiotensin converting enzyme. *Nat Genet.* 1999;22:59–62.
38. Wang B, Zhou X, Dang A, Liu G, He R. *Alu*-repeat polymorphism in the gene coding for tissue-type plasminogen activator and the risk of hypertension in a Chinese han population. *Hypertens Res.* 2002;25:949–953.
39. Lovati E, Ferrari P, Dick B, Jostamdt K, Frey BM, Frey FJ, Schorr U, Sharma AM. Molecular basis of human salt sensitivity: the role of the 11beta-hydroxysteroid dehydrogenase type 2. *J Clin Endocrinol Metab.* 1999;84:3745–3749.
40. Kampfrath T, Maiseyeu A, Ying Z, Shah Z, Deiuliis JA, Xu X, Kherada N, Brook RD, Reddy KM, Padture NP, Parthasarathy S, Chen LC, Moffatt-Bruce S, Sun Q, Morawietz H, Rajagopalan S. Chronic fine particulate matter exposure induces systemic vascular dysfunction via NADPH oxidase and TLR4 pathways. *Circ Res.* 2011;108:716–726.
41. Bomfim GF, Dos Santos RA, Oliveira MA, Giachini FR, Akamine EH, Tostes RC, Fortes ZB, Webb RC, Carvalho MH. Toll-like receptor 4 contributes to blood pressure regulation and vascular contraction in spontaneously hypertensive rats. *Clin Sci (Lond)*. 2012;122:535–543.
42. Alvarez Y, Briones AM, Balfagon G, Alonso MJ, Salaices M. Hypertension increases the participation of vasoconstrictor prostanoids from cyclooxygenase-2 in phenylephrine responses. *J Hypertens*. 2005;23:767–777.
43. Eissler R, Schmaderer C, Rusai K, Kuhne L, Sollinger D, Lahmer T, Witzke O, Lutz J, Heemann U, Baumann M. Hypertension augments cardiac toll-like receptor 4 expression and activity. *Hypertens Res*. 2011;34:551–558.
Learning to Locate Informative Features for Visual Identification
DRAFT: IJCV special issue www.eecs.berkeley.edu/Research/Projects/CS/vision/shape/vid
Andras Ferencz
Computer Science, U.C. Berkeley
firstname.lastname@example.org
Erik G. Learned-Miller
Computer Science, UMass Amherst
email@example.com
Jitendra Malik
Computer Science, U.C. Berkeley
firstname.lastname@example.org
Abstract
Object identification is a specialized type of recognition in which the category (e.g. cars) is known and the goal is to recognize an object’s exact identity (e.g. Bob’s BMW). Two special challenges characterize object identification. First, inter-object variation is often small (many cars look alike) and may be dwarfed by illumination or pose changes. Second, there may be many different instances of the category, but few positive “training” examples per object instance, often just one. Because variation among object instances may be small, a solution must locate possibly subtle object-specific salient features, like a door handle, while avoiding distracting ones such as specular highlights. With just one training example per object instance, however, standard modeling and feature selection techniques cannot be used. We describe an on-line algorithm that takes one image from a known category and builds an efficient “same” versus “different” classification cascade by predicting the most discriminative features for that object instance. Our method not only estimates the saliency and scoring function for each candidate feature, but also models the dependency between features, building an ordered sequence of discriminative features specific to the given image. Learned stopping thresholds make the identifier very efficient. To make this possible, category-specific characteristics are learned automatically in an off-line training procedure from labeled image pairs of the category. Our method, using the same algorithm for both cars and faces, outperforms a wide variety of other methods.
1. Introduction
Figure 1 shows six cars. The two leftmost cars were photographed by one camera; the right four cars were seen later by another camera from a different angle. Suppose one wants to determine which images, if any, show the same vehicle. We call this task visual object identification. Object identification is a specialized form of object recognition in which the category (e.g. faces or cars) is known, and one must recognize the exact identity of objects. Most existing identification systems are aimed at biometric applications such as identifying fingerprints or faces.
The general term object recognition refers to a whole hierarchy of problems for detecting an object and placing it into a group of objects. These problems can be organized by the generality and composition of the groups into which objects are placed. The goal of “recognition” can be to put objects in a very broad group such as vehicles, a narrower one such as cars, a highly specific group such as red sedans, or the narrowest possible group, a single element group containing a specific object, such as “Bob’s BMW”.
Here our focus is identification, where the challenge is to distinguish between visually similar objects of one category (e.g. cars), as opposed to categorization where the algorithm must group together objects that belong to the same category but may be visually diverse [1, 12, 25, 32]. Identification is also distinct from object localization, where the goal is locating a specific object in scenes where distractors have little similarity to the target object [20].
These differences are more than semantic: the object identification problem poses different challenges than its
coarser cousin, object categorization. Specifically, object identification problems are characterized by the following two properties.
1. The inter-instance variation is often small, and this variation is often dwarfed by illumination or pose changes (see Figure 1). For example, many cars look very similar, but the variability in appearance of a single vehicle, due to lighting for example, can be quite large.
2. There are many different instances of each category (many different individual cars), but few (in our case just one) positive “training” examples per object instance (e.g., only one image representing “Bob’s BMW”). With only one example per instance, it is particularly challenging to build a classifier that identifies such an instance, which is precisely our goal.
People are good at identifying individual objects from familiar categories after seeing them only once. Consider faces. We zero in on discriminative features for a particular person such as a prominent mole or unusually thick eyebrows, yet are not distracted by equally unusual but non-repeatable features such as a messy strand of hair or illumination artifacts. Domain specific expertise makes this possible: having seen many faces one learns that a messy strand of hair is not often a reliable feature.
Human vision researchers report that acquisition of this expertise is accompanied by significant behavioral and physiological changes. Diamond et al. [9] showed that dog experts perform dog identification differently than non-experts; Tarr et al. [28] argued that the brain’s fusiform face area does visual processing of categories for which expertise has been gained.
Categorization algorithms such as [1, 4, 31, 33] learn to recognize objects that belong to a category. Here, we are attempting to go one step beyond this by becoming category experts, where instead of having a fixed set of features that we look for to recognize new object instances, we are able to predict the features of the new object that will be the most informative for distinguishing it from other objects of the same category. Figure 2 highlights this difference. Note that categorization is a prerequisite for identification, because identification systems such as ours assume that the given objects are from the known category.
### 1.1 The Three Steps of Object Identification
To clearly characterize the differences between object categorization and the main subject of this paper, object identification, we enumerate the key steps in each process. We
compare our object identification method with the traditional supervised learning paradigm of object categorization.
### 1.1.1 Object Categorization
In the simplest supervised learning framework for object *categorization*, the learner is supplied with two sets of examples: a set of positive examples that are in a category (like cars), and a set of negative examples that are not in the category. The standard procedure of categorization consists of two steps:
1. **Training** a classifier using examples labeled positive (in the category) and negative (not in the category), and
2. **Applying** the classifier to a new example to label it as positive or negative.
Theoretically, we could use the same scheme to do object *identification*. To recognize a particular individual, such as George Bush, we could collect sets of positive and negative examples, and use the traditional supervised learning method just described. As remarked previously, however, we would like to be able to identify an individual after seeing only a single picture. Traditional categorization schemes work very poorly when trained on only a single example. To address this lack of “training” data, we develop an entirely new scheme, based upon developing category expertise, and using this expertise to develop a customized identifier for each instance we wish to recognize.
### 1.1.2 Object Identification
In our new scheme, there are three steps rather than two. In the first step, performed off-line on training data, we develop expertise about the general category, such as faces. This is done by comparing the corresponding patches of many example pairs, some that match and some that do not. The goal is to analyze patch differences for each type of image pair, matching and non-matching, to understand under what conditions we expect patches to be similar or different.
The expectation of the degree of differences between corresponding patches can depend upon many factors. Patches that are not covering the face should not match well even if it is the same person, while patches from the eye area are likely to match well if the images are of the same person, but not for different people. On the other hand, patches from the cheek area may match well even when the images are not of the same person (due to the frequent lack of texture in this region). Finally, forehead images from the same person are likely to match if there is no hair in the patch, but may not match well if there is hair, since hair is highly variable from appearance to appearance. These intuitions translate into a “scoring function” that relates the appearance similarity of individual matched patches to an indication of equivalence of the whole face.
In addition to modeling the appearance differences among patches conditioned on the type, location and appearance of the patches, we can also estimate the **expected utility** or **discriminativeness** of a single patch from this analysis. We rate the discriminativeness of a patch by considering whether the expected differences with a corresponding patch depend heavily on whether the faces match or not. For example, since a pair of corresponding patches which do not cover a portion of the face are expected to have large differences, *irrespective of whether the faces match or not*, such patches have low expected utility. On the other hand, patches near the eye region are expected to have small differences when the faces match, but to have larger differences when the faces do not match, and hence are expected to have high utility. In summary, the first step of our procedure produces models of patch differences conditioned on a variety of variables, and also allows us to assess the expected utility of a given patch based upon the patch difference models.
In the second step, which occurs at “test time”, we use this expertise to build an identifier (more specifically an identification *cascade*) for an object instance given a single image of that object. For each patch in the given image, we select a specific model of patch appearance differences from the global model of patch appearance differences defined in the first step. Then, using these models of patch appearance differences for each patch, we analyze the discriminativeness of each patch in the given image. Finally, we sort the patches in order from most discriminative to least discriminative.
In the third step, we use the object specific identifier to decide whether other instances of the category are the same as or different than the given instance. This is done by comparing patches in the given image (the “model” image) to corresponding patches in another image (a “database” or “test” image). The most discriminative patches (as ordered in the second step) are used first and followed, if still necessary, by those with less discriminative power.
In summary, we define the steps in object identification as:
1. **Learning a global model of patch\(^1\) differences**, and as a result, a model of patch discriminativeness,
2. **Building an identification cascade for a specific object instance** by selecting, from the global model, object-specific models of patch differences and sorting the patches by discriminativeness (a brief sketch of this step follows the list), and

3. **Applying the identification cascade** to novel images to assess whether they are the same as or different from the specific object instance for which the cascade was built.

---

\(^1\)To simplify the exposition, we have described the process of learning category expertise in terms of learning a model of patch discriminativeness, which is how the identifiers in this paper were built. However, it is straightforward to generalize this scheme to encode category information in a way other than modeling patches—for example, by modeling the distributions of colors in images.
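To make step 2 concrete, here is a minimal sketch of cascade construction under stated assumptions: `extract_patches` and `utility_model` are hypothetical stand-ins for the patch machinery and the learned discriminativeness model developed in Sections 4-6.

```python
# Illustrative sketch of step 2: rank the candidate patches of one model
# image by predicted discriminativeness. Both helpers are hypothetical:
# extract_patches(image) yields candidate patches, and utility_model(patch)
# returns a scalar predicted utility learned from hyper-features (step 1).
def build_cascade(model_image, extract_patches, utility_model, n_keep=50):
    patches = extract_patches(model_image)
    ranked = sorted(patches, key=utility_model, reverse=True)  # best first
    return ranked[:n_keep]  # ordered features for the identification cascade
```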
We note that the last step of object categorization and the last step of our scheme for object identification are essentially the same. In the case of object categorization, we apply a categorizer to a new example to decide whether it represents a category of interest. In the case of object identification, we apply an identifier to a new example to decide whether it is the same object as the single given training example. Other than this last step, however, the two paradigms are quite different.
While the traditional object categorization scheme encodes information at only one level, the level of the object category, the identification scheme encodes two types of information. In the first step of object identification, we encode information about the entire category. In the second stage, we encode information about the specific object instance. It is the use of the category expertise learned in the first step that enables us to build effective identifiers from just a single example of each object. Without the category level information, it would be impossible to tell how to weight various areas of the image from only a single example. Our key contribution is a novel method for encoding the category information so that we can use it effectively to define an identifier for a new object instance.
### 1.1.3 Hyper-Features
As stated above, in step 1 of our object identification process, we analyze corresponding patches in image pairs to develop a global model of which types of patches are likely to be useful and which are not, and, for the patches that are useful, to know how to score a correspondence. This model needs to generalize to the whole space of possible patches so that at test time we can predict the utility and build a scoring function for patches of new instances (e.g. new car models) that were not in the training set.
Given that we register objects before analyzing patches, it should not be surprising that patches from certain parts of the image tend to be more informative than patches from other parts of an image. For example, when faces are registered to a canonical position by centering them in an image, the patches in the upper corners of the image tend to represent background, and hence are unlikely to be useful for discrimination. Hence, the spatial location of a patch is useful in predicting its future utility in discrimination, even when we do not yet know the patch we might be comparing it against. This type of “conditioning” on spatial location to predict utility is common in computer vision.
In this paper, however, we introduce a novel method for predicting the utility of image patches that goes beyond merely conditioning on spatial location. In particular, we also use appearance features of a patch to predict its utility. For example, a patch of someone’s cheek that is uniformly colored is not likely to be very discriminative since there is a strong possibility the corresponding patch on another different person’s face would be indistinguishable, causing this patch to be of no value in discrimination. However, if the patch on someone’s cheek shows a distinctive mole or scar, its predicted utility dramatically increases, since this feature would be unlikely to be repeated unless we are comparing against the same person.
Conditioning on visual features to predict utility, in addition to spatial location, gives our patch discriminativeness models much more power. We call these features on which we condition hyper-features, and their use to model image differences is the main contribution of this work.
The remainder of the paper is organized as follows. Section 2 discusses previous work in a number of areas. Section 3 summarizes the three stages of our algorithm: learning class expertise in training, building an identification cascade for a specific example, and running the identifier. Section 4 details our model for estimating “same” and “different” distributions for a patch. Section 5 describes our patch dependency model that allows us to generate a sequence of informative patches. From this sequence, we build the cascade in Section 6 by finding stopping thresholds for making “same” or “different” decisions. Section 7 details our experiments on multiple car and face data sets.
2. Comparison to Previous Work
In this section, we highlight relevant previous papers and describe how our method differs or improves on them.
### 2.1. Part-Based Recognition
Breaking an image into local subparts, where each part is encoded and matched separately, is a popular technique for object recognition (both categorization and identification) [4, 6, 10, 15, 18, 20, 24, 31, 32, 33, 34]. This strategy helps to mitigate the effects of distortion due to pose variation, as local regions are more likely than the whole object to be related by simple transformations. It also contains the disturbance due to occlusion and localized illumination effects such as specularities. Finally, it separates modeling of appearance and position. The key idea is that the parts, which are allowed to move relative to one other, can be treated as semi-independent assessments for the recognition task. The classifier then combines this evidence, optionally using the positional configuration of the detected parts as an additional cue, to determine the presence or absence of the object.
Due to the constraints of object identification described in the introduction, our system differs from previous work in a fundamental way. In the above systems, a model consisting of informative parts and (optionally) their relationships
is learned from a set of object and background examples. This “feature selection” step is fundamental to these methods and is possible because statistics such as the frequency of appearance of a particular feature (e.g., a quantized SIFT vector) can be directly computed for the positive and negative examples of a class. Thus these systems rely on underlying feature selection (and weighting) techniques such as Conditional Mutual Information [13] or AdaBoost [14]. In our setting this is not possible because only one example of a particular category instance (which plays the role of a “class” in our setting) will be presented, which is not enough to directly estimate the discriminativeness of any feature. Our main contribution is overcoming this fundamental barrier by learning a model for the space of all possible features tuned to a particular category (e.g., cars) that then allows us to pick the discriminative features for the given category instance (e.g., Bob’s BMW).
A minor additional difference compared to many of the above techniques is the choice of part representation. Popular encodings such as SIFT [20], which are designed to be invariant to significant distortions, are too coarse for our needs – they often destroy the very information that distinguishes one instance from another. Thus we use a more dense representation of multiple filter channels. However, we stress that this is not fundamental to our method, and any part encoding and comparison metric could be used within our learning framework.
### 2.2. Interclass Transfer
Because of the lack of training data for a particular instance, the general framework of most object recognition systems, that of selecting informative features using a set of object/non-object examples, is impossible to directly apply to our setting. In view of this difficulty, given a new category instance (e.g., Bob’s BMW), how can we pick good features, and how can we combine them to build an object instance identifier?
One possible solution is to try to pick universally good features, such as corners [18, 20], for detecting salient points. However, such features are not category specific: we expect to use different kinds of image features when distinguishing Bob’s car from other cars versus when we are distinguishing Bob’s dog from another dog.
Another possibility is to build generative models for each class including such characteristics as the typical illuminations, likely deformations, and variation in viewing direction. With a precise enough model, an algorithm should be able to find good features for discriminating instances of the category from each other [7]. Alternatively, good features could be explicitly coded into the algorithm [34]. However, this tends to be complicated and time consuming, and must be done individually for a particular category (see Section 2.4 below for examples). Additionally, one might hope that given a good statistical model and a large training data set, an algorithm would actually be better at finding informative features.
A better option is to attempt to transfer models from previous classification tasks of the same category (interclass transfer). Thrun, in [29], introduces such an interclass transfer (also referred to as lifelong learning or learning-to-learn) framework, in which a series of similar learning tasks are presented, where each subsequent task uses the model that was built for the previous tasks. More recently [22], distributions over parameters of a similarity transformation learned from one group of classes (letters) are used to model other classes (digits) for which only a single example is provided. In other work [12], priors for a fixed-degree constellation model are learned from one set of classes to train a detector for a new class given only a small number of positive examples of that class.
In all of these works, the set of hidden variables (the features used by [29], the transformations in [22], or the parameters of the constellation model in [12]) are predefined and the generalization from other categories can be thought of as learning priors for these fixed sets of variables. In contrast, we wish to learn how to identify any number of good features that are present in the single “training” example for an object instance that can then be assembled into a binary classification cascade. This forces us to learn a model for the space of all possible features (in our case, image patches).
### 2.3. Pairwise Constraints
The machine learning literature provides a different perspective on our interclass transfer problem. Our problem can be thought of as a standard learning task where the input is a pair of images, and the output is a “same” vs. “different” label. The task is then to learn a “distance metric” between images, specifically by choosing and weighting relevant features.
Recent work on equivalence constraints such as Relevant Component Analysis [27] and others [26, 35] show how to optimize a distance metric over the input space of features that maps the “same” pairs close to one another while keeping “different” ones apart. In our setting, the transformations that we would be interested in are subset selection and weighting (although our technique does more than weight each feature). These methods, however, assume that each example is described by the same predefined set of features, and that the comparison function is a specific distance metric over these features (e.g., Euclidean).
In our case, the “features” we are trying to use are subpatches of one image, compared to the best corresponding location in the other image. Thus our feature space is very high dimensional, and the comparison method is not a simple distance metric (notice, for example, that it is not symmetric due to the local search of best corresponding patch). Even if this space of features were discretized, it would be impossible to enumerate all possible such features, and most would never appear within the training set. These differences make our algorithm very different from other pairwise constraint techniques.
A core observation of this paper is that it is not necessary to enumerate all possible features. Instead, we can model the space of the features in a way that allows us to estimate the informativeness of novel features that the algorithm was not directly trained on (informative means that when this patch is compared to the best corresponding patch in a test image, the appearance similarity gives us information about the test image being the “same” or “different”). Thus we model this space of features (in our case, each feature is defined by the size, position and appearance of the image patch) using a smooth function (actually a pair of them, one based on the matching to the “same” cars and one based on “different” pairs). Then, given a new instance, the algorithm can select the most informative patches among the ones that are actually present in that image. Furthermore, our pair of functions gives us a way to convert the patch matching distance to a score for each selected patch (this is similar to but has more degrees of freedom than a linear feature weight).
Here our features are image patches. We note, however, that our technique could be used in any setting where RCA is used when the features can be embedded into a continuous space. This has the potential advantage of exploiting the relationship between the features that the above techniques have no access to.
### 2.4. Face Identification
Our goal in this work is to develop an identification system that is not designed for any particular category, but instead automatically learns category-specific characteristics. Nonetheless, it is useful to consider previous identification systems that were designed with a particular category in mind. Here we highlight a few face identification systems that are representative and relevant for our work. For an extensive survey of the field, we refer the reader to [36].
Techniques such as Eigenfaces [30] (PCA), Fisherfaces [2] (LDA), and Bayesian face recognition [23], like our method, start with a general face modeling step, and later make a same/difference determination for a new face. Bayesian face recognition, which won the 1996 FERET competition, explicitly uses “same” and “different” equivalence constraints similar to the techniques described in Section 2.3. These are all “holistic” techniques in that they use the whole face region as raw input to the recognition system. Specifically, they take registered and intensity normalized faces (or labeled collections of images in the case of LDA and Bayesian techniques) and find a lower dimensional subspace that, it is hoped, is more conducive to identification. This is analogous to step 1 in our procedure. To build a classifier, the model image is projected into this subspace, and the classifier compares the model and test images within this subspace.
More complex, feature-based methods typically use more face-specific models and hand labeled data. Two techniques in this category that have had a significant impact are elastic bunch graph matching [34], where hand selected fiducial points are matched within a graph that defines their relative positions, and the method of Blanz and Vetter [7], which maps images onto a 3D morphable face model.
We now turn to a more detailed description of our method.
3. Algorithm Overview
In this section, we outline the basic components of our system. After discussing initial preprocessing and alignment, we describe the three stages of our algorithm: global modeling of patch differences, building an identification cascade from one example, and application of the identification cascade. For clarity of exposition, we describe these stages in reverse order.
### 3.1. Preprocessing: Detection and Alignment
Our algorithm, like most identification systems, assumes that all images are known to contain objects of the given category (e.g. cars or faces) and have been brought into rough correspondence. For our algorithm, an approximate alignment is sufficient, because we search for matching patches in a small neighborhood (in our data sets, 10-20% of image size) around the expected location. No foreground-background segmentation is required, as the system learns which features within the images (both in terms of position and appearance) are useful for identification; patches that are off of the object are thus rejected by the learning algorithm. The specific detection and alignment methods used for our various data sets are described in Section 7. For example, for the Cars 2 data set, objects were aligned based only on the centroid of a simple background-subtraction-based blob detector.
### 3.2. Applying the Object Instance Identifier
We now describe the object instance identifier, which is the final step in our three step identification system. We start by introducing some notation.
We assume that at test time, we are given a single image of an object instance, known as the model image. The goal will be to compare it to a large set of database images, known as test images. For each pair of images (the model image and one of the test images) we wish to make a determination about whether the images represent the same specific object, or two different objects. The variable $C$ will represent this match/mismatch variable, with $C = 1$ denoting that the two images are of the same object (i.e., they match) and $C = 0$ denoting that the two images are of different objects (i.e., they do not match).
For the purposes of illustration, we will often present pairs of images side by side, where the image on the **left** will be the model image and the image on the **right** will be one of the test images (Figures 3, 4, 10, 14). Thus we use $I^L$ to refer to the model image and $I^R$ to refer to the current test image. Thus, the identifier for a particular object instance decides if a test image (or “right” image) $I^R$ is the same as ($C = 1$) or different than ($C = 0$) the model image (or “left” image) $I^L$ it was trained for.
### 3.2.1. Patches
Our strategy is to break up the whole image comparison problem into the comparison of patches [31, 33]. The $m$ (possibly overlapping) patches in the left image will be denoted $F_j^L$, with $1 \leq j \leq m$. The corresponding patches in the right image are denoted $F_j^R$.
Although the exact choice of features, their encoding and the comparison metric are not crucial to our technique, we wanted to use features that were general enough to use in a wide variety of settings, but informative enough to capture the relative locality of object markings as well as large and small details of objects.
For our experiments, our patch sizes range from 12×12 pixels to the size of the full image, and patches are not constrained to be square. To compute the patch features, we begin by computing a Gaussian pyramid for each image. For each patch, based on its size, the image pixels are extracted from a level of the pyramid such that the number of pixels in the representation is approximately constant (for our experiments, all of our patches, except the smallest ones taken from the lowest level of the pyramid, contained between 500 and 750 pixels). Then we encode the pixels by applying a first-derivative Gaussian odd-symmetric filter to the patch at four orientations (horizontal, vertical, and two diagonal), giving four signed numbers per pixel. The patch $F_j^L$ is defined by its appearance encoding, position $(x, y)$, and size $(w, h)$.
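To make the encoding concrete, below is a minimal sketch under stated assumptions (NumPy/SciPy; the pyramid depth, filter scale `sigma`, and `target_pixels` are illustrative placeholders, not the paper's exact settings).

```python
# Illustrative sketch of the patch encoding described above: choose a
# Gaussian-pyramid level so the patch keeps a roughly constant pixel count,
# then apply first-derivative-of-Gaussian filters at four orientations.
import numpy as np
from scipy import ndimage

def gaussian_pyramid(image, levels=4):
    pyr = [image.astype(float)]
    for _ in range(levels - 1):
        pyr.append(ndimage.zoom(ndimage.gaussian_filter(pyr[-1], 1.0), 0.5))
    return pyr

def encode_patch(pyr, x, y, w, h, target_pixels=600, sigma=1.0):
    # Coarsen while the next pyramid level still holds ~target_pixels.
    level = 0
    while level + 1 < len(pyr) and (w >> (level + 1)) * (h >> (level + 1)) >= target_pixels:
        level += 1
    s = 2 ** level
    patch = pyr[level][y // s:(y + h) // s, x // s:(x + w) // s]
    # Four signed derivative responses per pixel: axis-aligned pair plus diagonals.
    gy = ndimage.gaussian_filter(patch, sigma, order=(1, 0))
    gx = ndimage.gaussian_filter(patch, sigma, order=(0, 1))
    d1 = (gx + gy) / np.sqrt(2.0)
    d2 = (gx - gy) / np.sqrt(2.0)
    return np.stack([gx, gy, d1, d2], axis=-1)
```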
### 3.2.2. Matching
To compare a model patch $F_j^L$ to an equally encoded area of the right image $F_j^R$, we evaluate the normalized correlation and compute
$$d_j = 1 - \text{CorrCoeff}(F_j^L, F_j^R) \quad (1)$$
between the arrays of orientation vectors. Thus $d_j$ is a patch appearance distance where $0 \leq d_j \leq 2$.
As the two images to be compared have been processed to be in rough alignment, we need only search a small area of $I^R$ to find the best corresponding patch $F_j^R$—i.e., the one that minimizes $d_j$. We will refer to such a matched left and right patch pair $F_j^L, F_j^R$, together with the derived distance $d_j$, as a *bi-patch*. This appearance distance $d_j$ is used as evidence for deciding if $I^L$ and $I^R$ are the same ($C = 1$) or different ($C = 0$).
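The matching step itself can be sketched as an exhaustive search over a small window. In the fragment below, `corr_coeff` operates on the stacked orientation channels, and `encode_right` is a hypothetical closure that extracts an equally encoded candidate patch from $I^R$ at a given position; both names are ours.

```python
import numpy as np

def corr_coeff(a, b):
    """Normalized correlation between two equally shaped encodings."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def match_patch(enc_left, left_xy, encode_right, search_radius=8):
    """Find the right-image patch minimizing d_j = 1 - CorrCoeff."""
    x0, y0 = left_xy
    best_d, best_pos = np.inf, None
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            d = 1.0 - corr_coeff(enc_left, encode_right(x0 + dx, y0 + dy))
            if d < best_d:
                best_d, best_pos = d, (x0 + dx, y0 + dy)
    return best_d, best_pos  # the bi-patch distance and matched location
```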
In choosing this representation and comparison function, we compared a number of commonly used encodings, including Lowe’s SIFT features [20] and shape contexts [3]. However, we found that due to the nature of the problem—where distinct objects can look very similar except for a few subtle differences—these techniques, which were developed to be insensitive to small differences, did not perform well. Specifically, using SIFT features as described in [20] (without category specific learning) resulted in false-positive error rates that were an order of magnitude larger than our best results and a factor of 2-3 worse than our baseline results (at the same recall rate). Among dense patch features, we chose normalized correlation of filter outputs after experiments comparing this distance function to L1 and L2 distances, and the encoding to raw pixels and edges as described elsewhere [33].
### 3.2.3. Likelihood Ratio Score
We pose the task of deciding if a test image $I^R$ is the same as a model image $I^L$ as a decision rule
$$R = \frac{P(C = 1|I^L, I^R)}{P(C = 0|I^L, I^R)}$$
$$= \frac{P(I^L, I^R|C = 1)P(C = 1)}{P(I^L, I^R|C = 0)P(C = 0)} > \lambda, \quad (2)$$
where $\lambda$ is chosen to balance the cost of the two types of decision errors. The prior probability of $C$ is assumed to be known.\footnote{For our car tracking application (see Section 7.3), dynamic models of traffic flow can supply the prior on $P(C)$.} For the remaining equations in this paper, the priors are assumed to be equal and hence are dropped.
With our image decomposition into patches, the posteriors from Eq. (2) will be approximated using the bi-patches $F_1, \ldots, F_m$ as
$$P(C|I^L, I^R) \approx P(C|F_1, ..., F_m)$$
$$\propto P(F_1, ..., F_m|C).$$
Furthermore, we will assume a naive Bayes model in which, conditioned on $C$, the bi-patches are assumed to be independent (see Section 5 for our efforts to ensure that the selected patches are, in fact, as independent as possible). That is,
$$R = \frac{P(I^L, I^R|C = 1)}{P(I^L, I^R|C = 0)}$$
$$\approx \frac{P(F_1, ..., F_m|C = 1)}{P(F_1, ..., F_m|C = 0)}$$
$$= \prod_{j=1}^{m} \frac{P(F_j|C = 1)}{P(F_j|C = 0)}.$$
In practice, we compute the logarithm of this likelihood ratio, where each patch contributes an additive term. Modeling the likelihoods $P(F_j|C)$ in this ratio is the central focus of this paper.
In our current system, the only information from bi-patch $F_j$ that we use for scoring is the distance $d_j$. Thus, to convert $d_j$ to a score, the object instance identifier must include probability density functions $P(D_j|C = 1)$ and $P(D_j|C = 0)$ for each patch in the model image. These functions encode our expectations about how well a patch in the test image should match a particular patch ($j$) in the model image, depending upon whether or not the images themselves represent the same object instance.
The object instance identifier computes the log likelihood ratio by evaluating these functions for each $d_j$ obtained by comparing the model image patches to test image patches. (A comment on notation: $d_j$ refers to the specific measured distance for a given model image patch and the corresponding test image patch, while $D_j$ denotes the random variable from which $d_j$ is a sample.) After $m$ patches have been matched, assuming independence, we score the match between images $I^L$ and $I^R$ using the sum of log likelihood ratios of matched patches:
$$R = \sum_{j=1}^{m} \log \frac{P(D_j = d_j|C = 1)}{P(D_j = d_j|C = 0)}. \quad (9)$$
To compute this quantity, we must evaluate $P(D_j = d_j|C = 1)$ and $P(D_j = d_j|C = 0)$. In our system, both of these will take the form of gamma distributions $\Gamma(d_j; \theta_j^{C=1})$ and $\Gamma(d_j; \theta_j^{C=0})$, where the parameters $\theta_j^{C=1}$ and $\theta_j^{C=0}$ for each patch and matching condition are defined as part of the object instance identifier. How we set these parameters using a single image is discussed in Section 3.3.
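For concreteness, the scoring of Eq. (9) can be sketched with SciPy's gamma density, using the mean/shape parameterization of Appendix A (so that scale $= \mu/\gamma$); the tuple format for the $\theta_j$ parameters below is our own illustrative convention.

```python
import numpy as np
from scipy.stats import gamma

def gamma_pdf(d, mu, shape):
    # Appendix A parameterization: mean mu and shape gamma, hence scale = mu / shape.
    return gamma.pdf(d, a=shape, scale=mu / shape)

def log_likelihood_ratio(ds, thetas_same, thetas_diff):
    """Eq. (9): R = sum_j log P(d_j | C=1) / P(d_j | C=0)."""
    R = 0.0
    for d, (mu1, g1), (mu0, g0) in zip(ds, thetas_same, thetas_diff):
        num = max(gamma_pdf(d, mu1, g1), 1e-300)  # guard against log(0)
        den = max(gamma_pdf(d, mu0, g0), 1e-300)
        R += np.log(num / den)
    return R
```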
### 3.2.4. Making a Decision
The object instance identifier described above compares a fixed number of patches ($m$), computes the score $R$ by Eq. 9, and compares it to a threshold $\lambda$: if $R > \lambda$, $I^L$ and $I^R$ are declared to be the same; otherwise they are declared different. In Section 6, we define a more efficient object instance identifier by building a cascade from the sequence of patches. This is done by applying early termination thresholds $\lambda_k^{C=1}$ (for early match detection) or $\lambda_k^{C=0}$ (for early mismatch detection) after the first $k$ patches have been compared. These thresholds may allow the object instance identifier to stop and declare a result after comparing only $k$ patches.
### 3.2.5. Summary of the Object Instance Identifier
To summarize, the object instance identifier is defined by
1. a sequence of patches of varying sizes $F_j^L$ taken from the model image $I^L$,
2. for each patch $F_j^L$, a pair of parameters $\theta_j^{C=1}$ and $\theta_j^{C=0}$ that define the distributions $P(D_j|C = 1)$ and $P(D_j|C = 0)$, and
3. optionally, a set of thresholds $\lambda_k^{C=1}$ and $\lambda_k^{C=0}$ applied after matching the $k$-th patch.
For an example, refer to Figure 3.
### 3.3. Generating an Object Instance Identifier
Stepping back in the process, we now describe how an object instance identifier is built, or “generated”, from a single image of an object instance. Obviously, an identifier must be generated before it can be used to identify.
The identifier generator must take in a single model image $I^L$ of a new object from the given category and produce a sequence of patches $F_1^L, \ldots, F_m^L$ and their associated gamma distribution parameters, $\theta_1^{C=1}, \ldots, \theta_m^{C=1}$ and $\theta_1^{C=0}, \ldots, \theta_m^{C=0}$, for scoring based on the appearance distance measurement $d_j$ (which is measured when the patch $F_j^L$ is matched to a location in a test image $I^R$).
#### 3.3.1. Estimating $P(D_j|C)$
Estimating $P(D_j|C = 0)$ and $P(D_j|C = 1)$ means estimating parameters for two gamma distributions for each patch, such as the ones shown in Figure 3. Conceptually, we want $\theta_j^{C=0}$ and $\theta_j^{C=1}$ to be influenced by what patch $F_j^L$ looks like and where it is on the object. That is, we want a pair of functions $Q^{C=1}$ and $Q^{C=0}$ that map the position and appearance of the patch $F_j^L$ to the parameters of the gamma distribution $\theta_j^{C=1}$ and $\theta_j^{C=0}$.
$$Q^{C=1}: F_j^L \mapsto \theta_j^{C=1}$$
$$Q^{C=0}: F_j^L \mapsto \theta_j^{C=0}$$
These functions are estimated in the initial training phase (step 1), and how they are estimated is discussed at length below.
#### 3.3.2. Estimating Saliency
If we define the saliency of a patch as the amount of information about the decision likely to be gained if the patch were to be matched, then it is straightforward to estimate saliency given $P(D_j|C = 1)$ and $P(D_j|C = 0)$. Intuitively, if $P(D_j|C = 1)$ and $P(D_j|C = 0)$ are similar distributions, we do not expect much useful information from a value of $d_j$. On the other hand, if the distributions are very different, then $d_j$ can potentially tell us a great deal about our decision. Formally, this can be measured as the mutual information between the decision variable $C$ and the random variable $D_j$:
$$I(D_j; C) = H(D_j) - H(D_j|C).$$
Here $H()$ is Shannon entropy. Notice that this measure can be computed just from the estimated distributions of $D_j$, which, in turn, were estimated from the position and appearance of the model patch $F^L_j$, before the patch has ever been matched.
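Since $D_j$ lives on $[0, 2]$, this mutual information can be approximated numerically; a minimal sketch, assuming the equal priors of Section 3.2.3 by default, follows.

```python
import numpy as np

def mutual_information(pdf_same, pdf_diff, prior_same=0.5, n=2000):
    """I(D; C) = H(D) - H(D|C), by numerical integration over d in (0, 2)."""
    d, dx = np.linspace(1e-4, 2.0, n, retstep=True)
    p1, p0 = pdf_same(d), pdf_diff(d)
    p_mix = prior_same * p1 + (1.0 - prior_same) * p0

    def entropy(p):
        p = np.clip(p, 1e-12, None)          # avoid log(0)
        return -np.sum(p * np.log2(p)) * dx  # Riemann approximation, in bits

    return entropy(p_mix) - (prior_same * entropy(p1)
                             + (1.0 - prior_same) * entropy(p0))
```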
#### 3.3.3. Finding Good Patches
The above mutual information formula allows us to estimate the saliency of any patch. Thus defining a sequence of patches to examine in order, from among all candidate patches, seems straightforward:
1. for each candidate patch
(a) estimate the distributions $P(D_j|C)$ from $F^L_j$ using the functions $Q^C$
(b) compute the mutual information $I(D_j; C)$
2. choose the top $m$ patches sorted by $I(D_j; C)$
The problem with this procedure is that the patches are not independent. Once we have matched a patch $F^L_j$, the amount of additional information we are expected to derive from matching a patch $F^L_i$ that overlaps $F^L_j$ is likely to be less than the mutual information $I(D_j; C)$ would suggest. We discuss a solution to this problem in Section 5.
However, assuming that this dependency problem can be solved, and given the functions $Q^C$, we have a complete algorithm for generating an object instance identifier from a single image.
### 3.4. Off-Line Training
Finally, we complete our reverse-order discussion by describing the first major step of our system, learning about a given category (e.g., cars) from training data. This procedure is done only once, and happens prior to testing.
The off-line training procedure defines the two functions $Q^{C=1}$ and $Q^{C=0}$ that estimate the parameters of the gamma distributions $P(D_j|C=1)$ and $P(D_j|C=0)$ from the position and appearance of a model image patch $F^L_j$. Additionally, it builds a dependency model among patches and defines the early termination cascade thresholds $\lambda_k^{C=1}$ and $\lambda_k^{C=0}$.
This off-line training starts with a large collection of image pairs from the category (see Section 7 for details about our data sets), where each left-right image pair is labeled as “same” or “different”. A large number of patches $F^L_j$ are sampled from the left images. Each patch is compared to its corresponding patch in the right image. The correspondence is defined by finding the best matching patch over a small search area around the same location in the second image. Once the corresponding patch is found in the right image, the resulting value of $d_j$ is recorded and associated with the original patch from the left image.

Figure 4: Estimating the Distributions and Informativeness of Patches. The identifier generator takes an object model image (left), samples patches from it, estimates the same and different distributions and mutual information score for each patch, and selects the sequence of patches to use for identification. The gamma distributions (middle) were computed based on 10 selected hyper-features derived from the position and appearance of each patch $F_j^L$. In the model image (left), each candidate patch is marked by a dot at its center, where the size and color represent the mutual information score (bigger and redder means more informative). The estimated distributions for two patches are shown in the center (red and blue curves), together with the log likelihood ratio score (green line). When the patches are matched to a test image, the resulting appearance distance $d_j$ is indicated as a red vertical line.
The functions $Q^{C=0}$ and $Q^{C=1}$ will ultimately be polynomial functions of the hyper-features, that is, the location and appearance features of each patch. These polynomials are estimated in a maximum likelihood framework using a generalized linear model. In short, the functions $Q^{C=1}$ and $Q^{C=0}$ are optimized to produce gamma distributions which maximize the likelihoods $P(d_j|C)$ of the patch difference data from training. The details of this estimation are discussed in the following section.
### 4. Hyper-Features and Generalized Linear Models
In this section, we describe in detail how to estimate, from training data, the functions $Q^{C=0}$ and $Q^{C=1}$ that map the position and appearance of a model image patch $F_j^L$ to the parameters $\theta_j^C$ of the gamma distributions for $P(D_j|C)$.
We want to differentiate patches by producing distributions $P(D_j|C=1)$ and $P(D_j|C=0)$ tuned for patch $F_j^L$. When a training set of “same” ($C=1$) and “different” ($C=0$) image pairs is available for a specific model image, estimating these distributions directly for each patch is straightforward. But how can we estimate the distribution $P(D_j|F_j^L,C=1)$, where $F_j^L$ is a patch from a new model image, when we only have that single positive example of $F_j^L$? The intuitive answer: by finding analogous patches in the training set of labeled (same/different) image pairs. However, since the space of all possible patches\footnote{For a 25x25 patch, appearance plus position (including size) is a point in $\mathbb{R}^{25 \times 25 + 4}$.} is very large, the chance of having seen a patch very similar to $F_j^L$ in the training set is small. In the next two subsections we present two approaches, both of which rely on projecting $F_j^L$ into a much lower dimensional space by extracting meaningful features from its position and appearance, i.e., the hyper-features.
### 4.1. Discrete Hyper-Features
First we explore a binning approach, where we place hyper-features into a number of pre-specified axis-aligned bins. For example we might break the $x-$coordinate of the position into four bins, the $y-$coordinate into three bins, and an appearance feature of the patch, such as contrast, into two bins. We would then label each patch with its position in this 4-by-3-by-2 histogram. For each bin, we estimate $P(D_j|F_j^L,C=1)$ and $P(D_j|F_j^L,C=0)$ by computing the parameters $(\theta_j^C)$ of the gamma distributions from all of the bi-patches $F_j$ whose left patch $F_j^L$ falls into that bin. More precisely, we use bi-patches from the “same” image pairs to estimate $\theta_j^{C=1}$ and the “different” pairs to find $\theta_j^{C=0}$.\footnote{Using the same binning of hyper-features, but modeling the resulting conditional distributions as normalized histograms, rather than gamma distributions, produces very similar results when enough data is available in each bin.}
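Under our reading, the binned estimation can be sketched as follows; the bin edges, the minimum-count guard, and the method-of-moments gamma fit (in the mean/shape parameterization, where shape $= \mu^2/\sigma^2$) are illustrative choices, not the exact fitting procedure used.

```python
import numpy as np

def fit_gamma_moments(d):
    """Method-of-moments fit in the (mean, shape) parameterization."""
    mu, var = d.mean(), d.var()
    return mu, mu ** 2 / var  # mean, shape

def binned_gamma_params(xs, ys, contrasts, ds, x_edges, y_edges, c_edges,
                        min_count=10):
    """Fit one gamma distribution per (x, y, contrast) hyper-feature bin."""
    bx = np.digitize(xs, x_edges)
    by = np.digitize(ys, y_edges)
    bc = np.digitize(contrasts, c_edges)
    params = {}
    for key in set(zip(bx, by, bc)):
        mask = (bx == key[0]) & (by == key[1]) & (bc == key[2])
        if mask.sum() >= min_count:  # skip under-populated bins
            params[key] = fit_gamma_moments(ds[mask])
    return params
```

Running this once on the “same” bi-patches and once on the “different” ones yields the per-bin $\theta_j^{C=1}$ and $\theta_j^{C=0}$.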
Figure 5 compares the performance of various models on the Cars 1 data set, which is described in Section 7.1. Here, for simplicity of comparison, we use no patch selection (105 patches are sampled at fixed, equally spaced locations) and patch sizes are fixed to 25x25. The two bottom curves are baseline experiments. The *direct image comparison* method compares the center part of the images using normalized correlation on a combination of intensity and filter channels and attempts to overcome slight misalignment. The *patch-based baseline* assumes a global distribution for $D_j$ that is the same for all patches.

Figure 5: Identification with Patches. The bottom curve shows the precision vs. recall for non-patch based direct comparison of rectified images (the most accurate technique we found was to match a fixed subrectangle of the image by searching for the best normalized correlation of the 4 filter channels). The other curves show the performance of our algorithm on the Cars 1 data set, using all fixed sized patches (25x25 pixels) sampled from a grid such that each patch overlaps its neighbors by 50%. Notice that all three patch based models outperform the direct method. The three top curves show results for various models of $d_j$: (1) no dependence on patch characteristics (Baseline), (2) using hyper-features in discrete bins, Section 4.1 (Discrete), and (3) using a generalized linear model with hyper-feature selection from Sections 4.2 and 4.3 (Continuous). The linear model significantly outperforms all of the others. Compared to the baseline patch method it reduces the error in precision by close to 50% for most values of recall below 90%, showing that conditioning the distributions on hyper-features boosts performance. Note: this figure differs from results presented later in that no patch selection was performed.
The cyan “Discrete” curve in Figure 5 shows the performance improvement from conditioning on discrete hyper-features.
### 4.2. Continuous Hyper-Features
When too many hyper-feature bins are introduced, the performance of the discrete model degrades. The problem is that the amount of data needed to populate the histograms grows exponentially with the number of dimensions. In order to add additional appearance-based hyper-features, such as mean intensity, oriented edge energy, etc., we moved to a polynomial model to describe how hyper-features influence the choice of gamma distribution parameters.
Specifically, as before, we model the distributions $P(D_j|F^L_j, C = 1)$ and $P(D_j|F^L_j, C = 0)$ as gamma distributions $\Gamma(\theta^C)$ parameterized by the mean and shape parameter $\theta = \{\mu, \gamma\}$. See the left side of Figure 6 for examples of the gamma approximations to the empirical distributions.
The smooth variation of $\theta$ with respect to the hyper-features can be modeled using a generalized linear model (GLM). Ordinary (least-squares) linear models assume that the data for each conditional distribution is normally distributed with constant variance. GLMs are extensions to ordinary linear models that can fit data which is not normally distributed and where the dispersion parameter also depends on the covariates. See [21] for more information on GLMs.
Our goal is to fit gamma distributions to $P(D_j|F^L_j, C = 1)$ and $P(D_j|F^L_j, C = 0)$ for various patches by maximizing the probability density of data under gamma distributions whose parameters are simple polynomial functions of the hyper-features. Consider a set $X_1, ..., X_k$ of hyper-features such as position, contrast, and brightness of a patch. Let $Z = [Z_1, ..., Z_l]^T$ be a vector of $l$ pre-chosen monomials of those hyper-features, like squares, cubes, cross terms, or simply copies of the variables themselves. Then each bi-patch distance distribution has the form

$$P(d|X_1, X_2, ..., X_k, C) = \Gamma(d; \alpha_C^\mu \cdot Z, \alpha_C^\gamma \cdot Z), \quad (10)$$

where the second and third arguments to $\Gamma()$ are mean and shape parameters. Note that both the mean and shape parameters are linear functions of the hyper-feature monomials $Z$, which is what makes this model a generalized linear model.

Figure 6: Fitting a generalized linear model to the gamma distribution. We demonstrate our approach by fitting a gamma distribution, through the latent variables $\theta = (\mu, \sigma)$, to the $y$ position of the patches (in practice, we use the parameterization $\theta = (\mu, \gamma)$). Here we allowed $\mu$ and $\sigma$ to be a 3rd degree polynomial function of $y$ (i.e. $Z = [y^3, y^2, y, 1]^T$). Each row of the images labeled (a) displays the empirical density of $d$ conditioned on the $y$ position of the left patch ($F_L$) for all bi-patches sampled from the training data (darker means higher density). There are two of these: one for bi-patches taken from matching vehicles (the pairs labeled “same”); the other from mismatched data (“different” pairs). Row (b) shows the ordinary linear model fit, where the curve represents the mean. The outer curves in (c) show the $\pm \sigma$ (one standard deviation) range fit by the GLM. On the bottom left, the centers of patches from a model object are labeled with a dot whose size and color corresponds to the mutual information score $I(D; C)$. For two selected rows, each representing a particular $y$ position, the empirical distributions are displayed as a histogram. The gamma distributions as fit by the GLM are superimposed on the histograms. Notice that this model has learned that the top portion of the vehicles in the training set is not very informative, as the two distributions (the red and blue lines in the top histogram plot) are very similar. That is, $D_j$ will have low mutual information with $C$. In contrast, the bottom area is much more informative.
For our GLM, we use the identity link function for both $\mu$ and $\gamma$. While the identity is not the canonical link function for $\mu$, its advantage is that our ML optimization can be initialized by solving an ordinary least squares problem. We experimentally compared it to the canonical inverse link ($\mu = (\alpha_C^\mu \cdot Z)^{-1}$), but observed no noticeable change in performance on our data set. Each $\alpha$ (there are four of these: $\alpha_{C=0}^\mu, \alpha_{C=0}^\gamma, \alpha_{C=1}^\mu, \alpha_{C=1}^\gamma$) is a vector of parameters of length $l$ that weights each hyper-feature monomial $Z_i$. The $\alpha$'s are adapted to maximize the joint data likelihood over all patches for $C = 1$ (using patches from the “same” image pairs) and for $C = 0$ (from the “different” image pairs) within the training set. These ideas are illustrated in detail in Figure 6, where, for demonstration purposes, we let our covariates $Z = [y^3, y^2, y, 1]^T$ be a polynomial function of the $y$ position.
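A compact sketch of this maximum likelihood fit is given below (run separately for the “same” and “different” bi-patches). The use of Nelder-Mead and the constant initialization of the shape coefficients are simplifications; the sketch also assumes the last monomial in $Z$ is the constant 1 and that the OLS initialization yields positive means on the training set.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def fit_gamma_glm(Z, d):
    """Fit a_mu, a_gamma so that mu = Z @ a_mu and shape = Z @ a_gamma
    (identity links) maximize the gamma likelihood of the distances d."""
    l = Z.shape[1]

    def nll(params):
        mu, shape = Z @ params[:l], Z @ params[l:]
        if np.any(mu <= 0.0) or np.any(shape <= 0.0):
            return np.inf  # identity links do not guarantee positivity
        return -gamma.logpdf(d, a=shape, scale=mu / shape).sum()

    # The identity link lets us initialize the mean coefficients by ordinary
    # least squares; the shape starts out constant across patches.
    a_mu0, *_ = np.linalg.lstsq(Z, d, rcond=None)
    a_g0 = np.zeros(l)
    a_g0[-1] = 2.0  # assumes Z[:, -1] == 1
    res = minimize(nll, np.concatenate([a_mu0, a_g0]), method="Nelder-Mead",
                   options={"maxiter": 20000})
    return res.x[:l], res.x[l:]  # (a_mu, a_gamma)
```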
### 4.3. Automatic Selection of Hyper-Features
While it is certainly possible to select the basic hyper-features \( X \) and their monomials \( Z \) manually, we make additional improvements to our system by considering larger potential sets of hyper-feature monomials and using feature selection techniques to select only those that are most useful.
Recall that in our GLM model we assumed a linear relationship between \( Z \) and \( \mu \). If we ignore the dispersion parameter, this allows us to use standard feature selection techniques, such as Least Angle Regression (LARS) [11], to choose a few (around 10) hyper-features from a large set of candidates. In order to use LARS (or most other feature selection methods) “out of the box”, we use regression based on an \( L2 \) loss function. While this is not optimal for non-normal data, we have verified experimentally that it is a reasonable approximation for the feature selection step.
To use LARS for feature selection, we start with a large set of candidate hyper-feature monomials: (a) the x and y positions of \( F^L \), (b) the intensity and contrast within \( F^L \) and the average intensity of the entire object, (c) the average energy in each of the four oriented filter channels, and (d) derived quantities from the above, such as square, cubic, and cross terms, as well as meaningful derived quantities such as the direction of the maximum edge energy. LARS is then used to select a subset of these, which acts as the final set of hyper-features \( Z \). Once \( Z \) is set, we proceed as in Section 4.2.
Running an automatic feature selection technique on this large set of possible conditioning features gives us a principled method of reducing the complexity of our model. Reducing the complexity is important not only to speed up computation, but also to mitigate the risk of over-fitting to the training set. The top curve in Figure 5 shows results when \( Z \) includes the first 10 features found by LARS. Even with such a naive set of features to choose from, the performance of the system improves significantly. We believe that further improvement in our results is possible by designing more sophisticated hyper-features.
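A sketch of this selection step using scikit-learn's LARS implementation follows; `Z_candidates` (one column per candidate monomial) is a hypothetical name, and stopping after 10 active coefficients mirrors the “around 10” hyper-features mentioned above.

```python
import numpy as np
from sklearn.linear_model import Lars

def select_hyper_features(Z_candidates, d, n_features=10):
    """L2 regression of patch distances d on candidate monomials; keep the
    first n_features that LARS brings into the model."""
    lars = Lars(n_nonzero_coefs=n_features).fit(Z_candidates, d)
    return np.flatnonzero(lars.coef_)  # indices of the selected monomials
```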
### 5. Modeling Pairwise Relationships Between Patches
In Sections 3 and 4, we described our method for scoring a model image patch \( F^L_j \) and its best match \( F^R_j \) by modeling the distribution of their difference in appearance, \( d_j \), conditioned on the match variable \( C \). Furthermore, in Section 3.3, we described how to infer the saliency of the patch \( F^L_j \) for matching based on these distributions. As we noted in that section, this works for picking the first patch, but is not optimal for picking subsequent patches. Once we have already matched and recorded the score of the first patch, the amount of information gained from a nearby patch is likely to be small, because their scores are likely to be correlated. Intuitively, the next chosen patch would ideally be a highly salient patch whose information about \( C \) is as independent as possible from the first patch. Similarly, the third patch should consider both the first and the second patches.
Let \( F^L_{(k)} \) represent the \( k \)th patch picked for the cascade and let \( F^L_{(1...n)} \) denote the first \( n \) of these patches. Assume we have already picked patches \( F^L_{(1...n)} \) and we wish to choose the next one, \( F^L_{(n+1)} \), from the remaining set of \( F^L \)'s. We would like to pick the one that maximizes the information gain or the conditional mutual information:
\[ I(D_{(n+1)}; C|D_{(1...n)}) = I(D_{(1...n+1)}; C) - I(D_{(1...n)}; C). \]
This quantity is difficult to estimate, due to the need to model the joint distribution of all \( D_{(1...n)} \) patches. However, note that the information gain of a new feature is upper bounded by the information gain of that feature relative to any single feature that has already been chosen. That is,
\[ I(D_{(n+1)}; C|D_{(1...n)}) \leq \min_{1 \leq i \leq n} I(D_{(n+1)}; C|D_{(i)}). \quad (11) \]
Thus, rather than maximizing the full information gain, Vidal-Naquet and Ullman [31] (see [13] for a comparison to other feature selection techniques) proposed the following heuristic that maximizes this upper bound on the amount of “additional” information:

\[ \arg \max_j \min_i I(D_j; C|D_{(i)}), \quad (12) \]

where $i$ varies over the already chosen patches, and $j$ varies over the remaining patches.

Figure 7: **Bivariate Gamma Distributions**. We demonstrate our technique by plotting the empirical and modeled joint densities of all patch pairs from the training set which are a fixed distance away from each other. On the **left** side, the two patches are far apart, thus they tend to be uncorrelated for both “same” ($C = 1$) and “different” ($C = 0$) pairs. This is evident from the empirical joint densities $d_1$ vs. $d_2$ (labeled $d_{far}$), computed by taking all pairs of “same” and “different” 25x25 pixel bi-patches from the training set that were more than 60 pixels apart. The great mismatch between the $P(d_1, d_{far} | C = 1)$ and $P(d_1, d_{far} | C = 0)$ distributions implies that the *joint mutual information* between $(d_1, d_{far})$ and $C$ is high. Furthermore, the mismatch in the joint distributions is *significantly larger* (as measured in bits) than the mismatch between the marginal conditional distributions shown below them in row (c). This means that the information gain, the joint mutual information less the marginal mutual information, is high. In contrast, the **right** side shows the case where the patches are very close (overlap 50% horizontally). Here $d_1$ vs. $d_2$ (labeled $d_{near}$) are very correlated. While there is still some disagreement between the joint distributions for $C = 0$ and $C = 1$, the information contained in this discrepancy (as measured in bits) is almost equal to the information contained in the discrepancy between the marginal distributions shown beneath them in row (c). That is, the joint distributions provide no additional information, or information gain, over the marginal distributions. Our parametric models for these joint densities are shown at the bottom (d). Notice that the modeled marginal distributions of $d_2$ (c) are gamma and are unaffected by the correlation parameter. The lines superimposed on the bivariate plots show the mean and variance of $d_1$ conditioned on $d_2$: notice that these are very similar for the empirical (b) and model (d) densities.
We use a related, but slightly different heuristic. When $D_j$ and $D_{(i)}$ are completely correlated (that is, $D_{(i)}$ predicts $D_j$), then $I(D_j; C|D_{(i)}) = 0$. However, even when $D_j$ and $D_{(i)}$ are completely independent given $C$, $I(D_j; C|D_{(i)})$ does not equal $I(D_j; C)$. This somewhat counterintuitive result is due to the fact that there is only a total of 1 bit of information in $C$, some of which has already been discovered by matching patch $F_{(i)}$. This property causes problems for the above pairwise approximation, as in some circumstances it might lead to choosing a suboptimal next patch. In particular, a patch that is highly correlated with an uninformative patch might win out against another patch that is only lightly correlated with a very informative one. Hence, in order to find the best next patch, we use a quantity related to $I(D_j; C|D_{(i)})$, but one which varies between 0 and $I(D_j; C)$ depending only on the correlation:
$$\arg \max_j \min_i I(D_j; C|D_{(i)}) \times \frac{I(D_j; C)}{I(D^*_j; C|D_{(i)})}. \quad (13)$$
Here $D^*_j$ is a random variable with the same marginal distribution as $D_j$ but is independent of $D_{(i)}$ when conditioned on $C$. This formulation also turns out to be easier to approximate within our framework (see Section 5.3).
### 5.1. Dependency Model
To compute (13), we need to estimate conditional mutual informations of the form
$$I(D_j; C|D_{(i)}) = I(D_j, D_{(i)}; C) - I(D_{(i)}; C).$$
In Section 3.3, we showed that we can determine the second term, $I(D_{(i)}; C)$, from the estimated gamma distributions for $P(D_{(i)}|C=1)$ and $P(D_{(i)}|C=0)$. Similarly, to calculate $I(D_j, D_{(i)}; C)$, we need to estimate the bivariate distributions $P(D_{(i)}, D_j|C=1)$ and $P(D_{(i)}, D_j|C=0)$.
Because there is relatively little data for each pair of patch locations, and because we want to evaluate the dependence of patches conditioned not only on location but also appearance-based hyper-features, we again use a generalized linear model to gain statistical leverage, this time to model the joint distributions of pairs of patch distances. The central goal in choosing a parameterization of the conditional joint distributions $P(D_{(i)}, D_j|C=1)$ and $P(D_{(i)}, D_j|C=0)$ is to choose a form for the distributions such that, when the parameters are estimated, the resulting computation of the joint mutual information is as accurate as possible. In order to achieve this, we adopt the following strategy for parametric estimates of the conditional joint distributions.
- We constrain each joint distribution to be an instance of Kibble’s bivariate gamma distribution [19], a generalization of the one-dimensional gamma distribution that is constrained to have gamma distributions as marginals. A Kibble distribution has four parameters: $\mu_1$, $\mu_2$, $\gamma$, and $\rho$, with $0 \leq \rho < 1$. $\mu_1$ and $\mu_2$ are mean parameters for the marginals. $\gamma$ is a dispersion parameter for both marginals. $\rho$ is the correlation between $d_{(i)}$ and $d_j$, and varies from 0, indicating full independence of the marginals, toward 1, at which the marginals would be completely correlated (see Figure 7).
- We further constrain each distribution to have the same mean parameter for each marginal, i.e. $\mu_1 = \mu_2$ for each joint distribution. The shared mean parameter and the shared dispersion parameter $\gamma$ are set to the parameters of the marginal distribution $P(d_j|C=0)$ and $P(d_j|C=1)$ in the respective cases.
- Finally, we constrain the pair of distributions $P(D_{(i)}, D_j|C=1)$ and $P(D_{(i)}, D_j|C=0)$ to share the same correlation parameter $\rho$.
Thus we use Kibble’s bivariate distribution with 3 parameters, which we write as $K(\mu, \gamma, \rho)$ (see Appendix B).
### 5.2. Predicting Patch Correlations from Hyper-Feature Differences
Given the above formulation, we have reduced the problem of finding the next best patch, $F^L_{(n+1)}$, to the problem of estimating the correlation parameter $\rho$ of Kibble’s bivariate gamma distribution for any pair of patches $F^L_{(i)}$ (one of the $n$ patches already selected) and $F^L_j$ (a candidate for $F^L_{(n+1)}$). The intuition is that patches that are nearby and overlapping or that lie on the same underlying image features (for example the horizontal line on the side of the car in Figure 8) are likely to be highly correlated, whereas two patches that are of different sizes and far away from one another are likely to be less so.
We model $\rho$, the last parameter of $K(\mu_j^{C=1}, \gamma_j^{C=1}, \rho)$ and $K(\mu_j^{C=0}, \gamma_j^{C=0}, \rho)$, similarly to our GLM estimate of its other parameters (see Section 3.3): we let $\rho$ be a linear function of the difference of various hyper-features of the two patches, $F_{(i)}^L$ and $F_j^L$. Clear candidates for these covariates are the difference in position and size of the two patches, as well as some image-based features such as the difference in the amount of contrast within each patch. To ensure $0 < \rho < 1$, we use a sigmoid link function
$$\rho = (1 + \exp(\beta \cdot Y))^{-1}, \quad (14)$$
where $Y$ is our vector of hyper-feature differences and $\beta$ is the GLM parameter vector.
Given a data set of patch pairs $F_{(i)}^L$ and $F_j^L$ and associated distances $d_{(i)}$ and $d_j$ (found by matching the “left” patches to a “right” image of the same or of a different object), we estimate the linear coefficients $\beta$. This is done by maximizing the likelihood of $K(\mu_j^{C=1}, \gamma_j^{C=1}, \rho)$ using data taken from image pairs that are known to be the “same”\footnote{$\mu_j^{C=1}$ and $\gamma_j^{C=1}$ are estimated from $F_j^L$ by the method of Section 3.4 and are fixed for this optimization.} and $K(\mu_j^{C=0}, \gamma_j^{C=0}, \rho)$ using data taken from “different” image pairs. Also similarly to Section 3.4, we choose the encoding of $Y$ automatically, by the method of forward feature selection [17] over candidate hyper-feature difference variables. As anticipated, the top ranked variables encoded differences in position, size, contrast, and orientation energy. Our final model uses the top 10 variables.
### 5.3. Online Estimation of Patch Order
As we described in Section 5.1, we wish to select patches in a greedy fashion based on Eqn. 13. In the previous section, we have shown how to estimate $I(D_j; C|D_{(i)})$. Based on this, computing $I(D^*_j; C|D_{(i)})$ is straightforward: use the same Kibble densities as with $D_j$, but set the correlation parameter $\rho = 0$.

Unfortunately, computing these quantities online is very expensive (notice that the formula for the Kibble distribution contains an infinite sum). However, we noticed that the ratio $k = \frac{I(D_j; C|D_{(i)})}{I(D^*_j; C|D_{(i)})}$, which satisfies $0 < k < 1$, is well approximated by $1 - \rho$. Thus in practice, to find the next best patch, our algorithm finds the patch $j$ such that
$$\arg \max_j \min_i I(D_j; C) \times (1 - \rho_{j(i)})$$
where $\rho_{j(i)}$ is computed by Eqn. 14 from the hyper-feature differences between patch $F_j$ and $F_{(i)}$.
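Putting the pieces together, the greedy ordering loop might look as follows; `mi` and `rho` are assumed to be precomputed from the hyper-features (Section 3.3.2 and Eqn. 14), and the loop is a direct sketch of the $\arg\max_j \min_i$ rule above.

```python
import numpy as np

def greedy_patch_order(mi, rho, m):
    """Order m patches greedily by max_j min_i mi[j] * (1 - rho[j, i]),
    where i ranges over the patches already chosen."""
    chosen = [int(np.argmax(mi))]  # the single most informative patch first
    while len(chosen) < m:
        best_j, best_score = None, -np.inf
        for j in range(len(mi)):
            if j in chosen:
                continue
            score = min(mi[j] * (1.0 - rho[j, i]) for i in chosen)
            if score > best_score:
                best_j, best_score = j, score
        chosen.append(best_j)
    return chosen
```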
### 6. Building the Cascade
Now that we have a model for patch dependence, we can create a sequence of patches $F_j^L$ (see Section 3.3) that, when matched, collectively capture the maximum amount of information about the decision $C$ (same or different?). The sequence is ordered so that the first patch is the most informative, the second slightly less so and so on. The final step of creating a cascade is to define early stopping thresholds on the log likelihood ratio sum $R$ that can be applied after each patch in the sequence has been matched and its score added to $R$ (see Section 3.2).
We assume that we are given a global threshold $\lambda$ (see Section 3.2) that defines a global choice between selectivity and sensitivity. What remains is the definition of thresholds at each step, $\lambda_{(k)}^{C=1}$ and $\lambda_{(k)}^{C=0}$, which allow the system to accept (declare “same”) if $R > \lambda_{(k)}^{C=1}$ or reject (declare “different”) if $R \leq \lambda_{(k)}^{C=0}$. If neither of these conditions is met, the system continues by comparing the $(k+1)$-th patches of each image.
To learn these thresholds, we generate identifiers from the left training images and run each resulting identifier against the right images of our training data set. This produces a performance curve for each choice of $k$, the number of patches included in the classification score, including $k = m$, the sum for which $\lambda$ is defined. Our goal for the cascade is to make decisions as early as possible without increasing the error on the training set. These two constraints exactly define the thresholds $\lambda^{C=1}_{(k)}$ and $\lambda^{C=0}_{(k)}$ (a sketch of this computation follows the list below):
1. For each “same” and “different” pair in the training set
(a) generate an identifier with a sequence of $m$ patches based on $I^L$
(b) classify $I^R$ by evaluating
$$ R = \sum_{j=1}^{m} \log \frac{P(D_j = d_j | C = 1)}{P(D_j = d_j | C = 0)} > \lambda $$
2. Let $I_{C=1}$ be the set of correctly classified “same” pairs (where label is “same” and $R > \lambda$). Set the rejection threshold $\lambda^{C=0}_{(k)}$ by
$$ \lambda^{C=0}_{(k)} = \max_{I_{C=1}} \sum_{j=1}^{k} \log \frac{P(D_j = d_j | C = 1)}{P(D_j = d_j | C = 0)} $$
That is, we want $\lambda^{C=0}_{(k)}$ to be as large as possible without misclassifying any additional “same” pairs over the base identifier which uses all patches.
3. Similarly define $I_{C=0}$, and set $\lambda^{C=1}_{(k)}$ using the min.
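Under our reading, this reduces to per-step maxima and minima of the partial score sums; the array layout below is an illustrative assumption. Note also that the tie case $R = \lambda^{C=0}_{(k)}$ needs care in practice, since the rejection test uses $\leq$ (a strict inequality or a small margin avoids rejecting the pair that attains the max).

```python
import numpy as np

def cascade_thresholds(partial_R, labels, lam):
    """partial_R[p, k]: log-likelihood-ratio sum of pair p after k+1 patches;
    labels[p] is 1 for "same" pairs, 0 for "different" pairs."""
    final = partial_R[:, -1]
    correct_same = (labels == 1) & (final > lam)    # correctly accepted pairs
    correct_diff = (labels == 0) & (final <= lam)   # correctly rejected pairs
    # Largest rejection threshold (lambda^{C=0}) that never cuts off a
    # correctly accepted "same" pair early, and smallest acceptance threshold
    # (lambda^{C=1}) that never lets in a correctly rejected "different" pair.
    lam_reject = partial_R[correct_same].max(axis=0)
    lam_accept = partial_R[correct_diff].min(axis=0)
    return lam_accept, lam_reject
```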
### 7. Results
The goal of this work is to create an identification system that can be applied to different categories, where the algorithm automatically learns (based on off-line training examples) how to select category-specific salient features from a new image. In this section, we demonstrate that after category training, our algorithm is in fact able to take a single image of a novel object and, based solely on it, create a highly effective “same” vs. “different” classification cascade of image patches. Specifically, we wish to show that for visual identification each of the following leads to an improvement in performance in terms of accuracy and/or computational efficiency:
1. breaking the object up into patches, matching each one separately and combining the results,
2. differentiating patches by estimating a scoring and saliency function for each patch (based on its hyper-features),
3. modeling the dependency between patches to create a sequence of patches to be examined in order, and
4. applying early termination thresholds to the patch sequence to create the cascade.
We tested our algorithm on three different data sets: (1) cars from two cameras with significant pose differential, (2) faces from news photographs, and (3) cars from a wide-area tracking system with 33 cameras and thousands of unique vehicles. Examples from these three data sets are shown in Figure 9, with the top 10 patches of the classification cascade. Notice that the sequence of patches for each object reflects both category knowledge (for cars, the system tends to select descriptive patches on the side with strong horizontal gradients and around the wheels, while for faces the eyes and eyebrows are preferred) and object-specific characteristics.
For each data set, a different automatic preprocessing step was applied to detect objects and approximately align them. After this, the same identification algorithm was applied to all three sets. For lack of space, we detail our experiments on data set 1, enumerate the results of data set 2, and only summarize our experience with data set 3. Qualitatively, our results on the three are consistent in showing that each of the above aspects of our system improves the performance, and that the overall system is both efficient and effective.

Figure 11: Precision vs. Recall Using Different Numbers of Patches (Cars 1). These are precision vs. recall curves for our full model. Each curve represents the performance tradeoff between precision and recall when the system uses a fixed number of patches. The lowest curve uses only the single most informative patch, while the top curve uses up to 100 patches. The 85% recall rate, where the different models of Figure 12 are compared, is noted by a vertical black dashed line. A magenta X, at recall = 84.9 and precision = 84.8, marks the performance of the cascade model.

Figure 12: Comparing Performance of Different Models (Cars 1). The curves plot the performance of various models, as measured by the false-positive rate (fraction of different pairs labeled incorrectly as same), at a fixed recall rate of 85%. The y-axis shows the log error rate, while the x-axis plots the log number of patches the models were allowed to use (up to a max of 100). As the number of patches increases, the performance improves up to a point, after which it levels off and, for the models that order patches according to information gain, even decreases (when non-informative patches begin to pollute the score). The (red) model that does not use hyper-features (i.e., uses the same distributions for all patches) performs very poorly compared to the hyper-feature versions, even when it is allowed to use 100 patches. The second curve from the top uses our hyper-feature model to score the patches, but random selection to pick the patch order. The position-only model uses only position-based hyper-features (including size: $x, y, w, h$) for selecting patch order (i.e., it computes a fixed patch order for all cars). The light blue model sorts patches by mutual information, without considering dependencies. The last curve shows our full model based on selecting patches according to their conditional mutual information, using both positional and image-based hyper-features. Finally, the magenta X at 4.3 patches and 1.02% error shows the performance of the cascade model.
### 7.1. Cars 1
358 unique vehicles (179 training, 179 test) were extracted using a blob tracker from 1.5 hours of video from two cameras located one block apart. The pose of the cameras relative to the road (see Figure 1) was known from static camera calibration, and alignment included warping the sides of the vehicles to be approximately parallel to the image plane. Additionally, by detecting the wheels, we rescaled each vehicle to be the same length (inter-wheel distance of 150 pixels). This last step actually hurts the performance of our system, as it throws away size as a cue (the camera calibration gives us a good estimate of actual size). However, we wanted to demonstrate the performance when such calibration information is not available (this is similar to our face data set, where each face has been normalized to a canonical size).
Within the training and testing sets, about 2685 pairs (true-to-false ratio of 1:15) of mismatched cars were formed from non-corresponding images, one from each camera. These included only those car pairs that were superficially similar in intensity and size. Using the best whole-image comparison method we could find (normalized correlation on blurred filter outputs) on this set produces 14% false positives (29% precision) at a 15% miss rate (85% recall). Example correct and incorrect classification results using our cascade are shown in Figure 10. This data set, together with more example results, is available from our web site.

Figure 13: *How many patches does it take to make a decision?* This histogram shows the number of patches that were matched by the classification cascade before a decision could be made. On average, 4.2 patches were required to make a negative decision (declaring a difference), and 6.7 patches to make a positive one.
Figure 12 compares several versions of our model by plotting the false-positive rate (y-axis) with a fixed miss rate of 15% (85% recall), for a fixed budget of patches (x-axis). The 85% recall point was selected based on Figure 11, by picking the equal precision-recall point given the 1 to 15 true-to-false ratio.
The *Random Order* curve uses our hyper-feature model for scoring, but chooses the patches randomly. By comparing this curve to its neighbors, notice the performance gain associated with differentiating patches based on hyper-features, both for scoring (*No Hyper-Features* vs. *Random Order*) and for patch selection (*Random Order* vs. *Mutual Information*). Comparing *Mutual Information* vs. *Conditional MI* shows that modeling patch dependence is important for choosing a small number of patches (see range 5-20) that together have high information content (Section 5). Comparing *Position Only* (which only uses positional hyper-features) vs. *Conditional MI* (which uses both positional and appearance hyper-features) shows that patch appearance characteristics are significant for both scoring and saliency estimation. Finally, the cascade (1.02% error, using 4.3 patches on average) performs as well as the full model and better than any of the others, even when those are given an unlimited computation budget.
Figure 11 shows another way to look at the performance of our full model given a fixed patch (computation) budget (the *Conditional MI* curve of Figure 12 represents the intersection of these curves with the 85% recall line). The cascade performance is also plotted here (follow the black arrow). The distribution of the number of patches it took to make a decision in the cascade model is plotted in Figure 13.
### 7.2. Faces
We used a subset of the “Faces in the News” data set described in [5], where the faces have been automatically detected from news photographs and registered by their algorithm. Our training and test sets each used 103 different people, with two images per person. This is an extremely difficult data set for any identification algorithm, as these face images were collected in a completely uncontrolled manner (news photographs).
Table 1 summarizes our results for running the same algorithm as above on this set. Note the same pattern as above: the patch-based system generally outperforms whole-object systems (here we compare against state-of-the-art PCA and LDA algorithms with face-specific preprocessing, using CSU’s implementation [8]); estimating a scoring and saliency function through hyper-features greatly improves the performance of the patch-based system; and the cascades, using fewer than 6 patches on average, perform as well as always using the best 50 patches (performance actually declines above 50 patches).
| Recall Rate | 60% | 70% | 80% | 90% |
|---------------------|-----|-----|-----|-----|
| PCA + MahCosine | 82% | 73% | 62% | 59% |
| Filter + NormCor | 83% | 73% | 67% | 57% |
| No Hyper-Features | 86% | 73% | 68% | 62% |
| Random 10 Patches | 79% | 71% | 64% | 60% |
| Top 1 CMI Patch | 86% | 76% | 69% | 63% |
| Top 50 CMI Patches | 92% | 84% | 75% | 67% |
| **CMI Cascade** | **92%** | **84%** | **76%** | **66%** |
Table 1: *Precision vs. Recall for Faces.*
Each column denotes the precision associated with a given recall rate along the P-R curve. *PCA + MahCosine* and *Filter + NormCor* are whole face comparison techniques. *PCA + MahCosine* is the best curve produced by [8], which implements PCA and LDA algorithms with face-specific preprocessing. *Filter + NormCor* uses the same representation and comparison method as our patches, but applied to the whole face. The last four all use our patch based system with hyper-features. The last three use conditional mutual information based patch selection, where the number of patches allowed is set to 1, 50, and variable (cascade), respectively. These cascades use between 4 and 6 patches on average to make a decision.
Refer to Figure 14 for example classification results. As part of an extension of this current paper [16], we have also compared this algorithm to Bayesian face recognition [23], which won the 1996 FERET face identification competition, and found our algorithm to perform significantly better on this difficult data set. Our more recent work further improves on the results reported here by training a discriminative model on top of the hyper-features.
### 7.3. Cars 2
We are helping to develop a wide-area car tracking system in which this component must re-identify vehicles when they pass by a camera. Detection is performed by a blob tracker and the images are registered by aligning the centroid of the object mask (the cameras are located approximately perpendicular to the road). We tested our algorithm on a subset of data collected from 33 cameras and thousands of unique vehicles, by learning an identifier generating function for each camera pair. In this fashion, the system incorporates the typical distortions that a vehicle undergoes between these cameras.
Equal error rates for our classification cascade were 3-5% for near lane (vehicle length $\sim$140 pixels) and 5-7% for far lane ($\sim$60 pixels), using 3-5 patches on average. Whole object comparison methods (we tested several different techniques) and using patches without hyper-features resulted in error rates that were 2 to 3 times as large.
### 7.4. Algorithm Complexity
This algorithm was designed to be able to perform real-time object identification. The most computationally expensive part is the off-line training, as many patches must be sampled and matched using normalized correlation (3-800,000 in our experiments above). For the on-line step, our observation is that a model will be built only once for each category instance (step 2 of our algorithm), but that model will then be applied (matched) many times to incoming images (step 3). Thus we choose to pay the one-time cost of scanning over all candidate patches to build the most efficient classification cascade possible.
Evaluating the informativeness of all patches within an image is, however, not as computationally daunting as it sounds: computing all of our hyper-features for a patch can be performed using integral images, making their computation time independent of patch size. Given the vector of hyper-features for a patch, computing the “same” and “different” gamma distributions used by the information measure involves computing 2 dot products (one for each degree of freedom of the distribution). Finally, the mutual information measure is computed using a table-lookup based on the gamma parameters.
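As an illustration of the integral-image trick mentioned above, the sketch below computes a box sum in constant time; mean intensity over any patch (and, with a second integral image of squared pixels, contrast) then follows at a cost independent of patch size.

```python
import numpy as np

def integral_image(img):
    # Zero-padded cumulative sum so box_sum also works at the image border.
    return np.pad(img.astype(float).cumsum(axis=0).cumsum(axis=1),
                  ((1, 0), (1, 0)))

def box_sum(ii, x, y, w, h):
    # Sum of img[y:y+h, x:x+w] in O(1), independent of patch size.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```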
The most expensive on-line step (by far) is matching the patches of the cascade in step 3 by searching for the most similar patch in terms of normalized correlation. Therefore the on-line running time of our algorithm is directly a function of the average number of patches that must be matched before a decision can be made, and of the size of the area that needs to be searched for the best matching patch. Given the care with which we pick the patches of our cascade, the average number of patches is typically less than 5 (see above). The search area, on the other hand, depends not on our algorithm but on the accuracy of the object detection method used.
Our current implementation is in Matlab. We estimate that an optimized implementation of our algorithm would be able to perform the vehicle identification component of the system described above with up to five new vehicle reports per second, and 15 candidate ids per report, in real time on a single processor.
### 8. Conclusion
We have described a new object identification system that is general in that it can be applied to different categories of objects. We have shown that the key step is to teach the system to pick informative features for new objects within the given category, using an off-line labeled category training set. Our main contribution is a novel learning algorithm that
actually models the space of possible features for the category, allowing it to select and score features that it has never seen before, based on a single new example.
In the introduction, we have argued that our goal was to build an algorithm that has the potential to recognize the usefulness of a mole on a person’s cheek for identification, even when it had never seen another person with such a mole. The hope is that the system would be able to generalize from having seen other similar facial blemishes in other locations (that is, in our case, image patches with mole-like appearance characteristics), and recognize that such patches, wherever they are located, make good features.
While we have no faces with obvious moles in our data set, Figure 15 shows two example results that make us hopeful that we are on the right track. In both, the algorithm has picked a very atypical patch as its top most informative feature, due to an unusual image characteristic. The left image contains the only person in our data set who is wearing sunglasses, and it is the only image for which the algorithm has decided not to pick a patch near the eyes as its top feature. The right example shows a truck towing a flatbed trailer, where the unique connecting area is chosen as the best feature. We can only hypothesize how the algorithm made these seemingly correct yet unusual choices.
In both cases the appearance of the features dominated the decision: the homogeneous area of the sunglasses replaced the usually informative eye features, while the elongated lines of the trailer are reminiscent of the type of features that are found to be informative elsewhere. While the accuracy of these unusual choices is quantitatively very difficult to measure, we believe that the overall performance of our algorithm is due to this ability to pick the right patches for most objects.
### Appendix A. Gamma Distribution
Gamma distributions are non-zero in the range $0 < x < \infty$ and have two degrees of freedom, most commonly parameterized as a shape parameter $\gamma$ and a scale parameter $\beta$. In this work, we typically use the parameters $\gamma$ and the mean $\mu$, where $\mu = \beta \times \gamma$. With this parameterization, the probability density function has the form
$$f(x; \mu, \gamma) = \frac{\gamma^\gamma \left(\frac{x}{\mu}\right)^{\gamma-1} \exp\left(-\frac{\gamma x}{\mu}\right)}{\mu\, \Gamma(\gamma)},$$
where $\Gamma()$ is the gamma function. For examples of gamma distributions, refer to Figures 3 and 6. In this paper we use the notation $\Gamma(\mu, \gamma)$ for the gamma distribution.
### Appendix B. Kibble’s Bivariate Distribution
Kibble’s bivariate gamma distribution is non-zero in the range $0 < x, y < \infty$ and has up to four degrees of freedom: the marginal parameters $\mu_x, \mu_y, \gamma$, and a correlation term $\rho$. Such a distribution has gamma marginals, where $\mu_x$ and $\gamma$ define the $x$ marginal and $\mu_y$ and $\gamma$ define the $y$ marginal. The parameter $\rho$, which ranges $0 \leq \rho < 1$, is the correlation coefficient between the variables $x$ and $y$: when $\rho$ is small, $x$ and $y$ are close to independent; when $\rho$ is large, $x$ and $y$ are highly correlated. If we let $t_x = \frac{x\gamma}{\mu_x}$ and $t_y = \frac{y\gamma}{\mu_y}$, then this bivariate distribution has the form
$$f(x, y; \mu_x, \mu_y, \gamma, \rho) = \frac{(t_x t_y)^{\gamma - 1} \exp\left(-\frac{t_x + t_y}{1-\rho}\right)}{(1-\rho)^\gamma\, \Gamma(\gamma)} \times \sum_{j=0}^{\infty} \frac{\rho^j (t_x t_y)^j}{(1-\rho)^{2j}\, \Gamma(\gamma + j)\, j!}.$$
The rate of convergence of the infinite series depends heavily on the $\rho$ parameter, where values of $\rho$ close to 1 converge much more slowly. Examples of Kibble’s distribution can be found in Figure 7(d). In this paper, we always set $\mu_x = \mu_y$, and thus denote Kibble’s distribution as $K(\mu, \gamma, \rho)$.
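For reference, a direct transcription of this series into Python is sketched below; the fixed 50-term truncation is our own choice and, as noted, becomes inadequate as $\rho$ approaches 1.

```python
import numpy as np
from scipy.special import gammaln

def kibble_pdf(x, y, mu, g, rho, n_terms=50):
    """Truncated-series evaluation of K(mu, g, rho) with mu_x = mu_y = mu."""
    rho = min(max(rho, 1e-12), 1.0 - 1e-9)  # keep the logarithms finite
    tx, ty = x * g / mu, y * g / mu
    log_prefix = ((g - 1.0) * np.log(tx * ty) - (tx + ty) / (1.0 - rho)
                  - g * np.log(1.0 - rho) - gammaln(g))
    j = np.arange(n_terms)
    log_terms = (j * np.log(rho * tx * ty) - 2.0 * j * np.log(1.0 - rho)
                 - gammaln(g + j) - gammaln(j + 1.0))
    return float(np.exp(log_prefix + log_terms).sum())
```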
### Acknowledgments
This work was supported by the DARPA CZTS project. We would like to thank Sarnoff Corporation for the wide area car tracking data set (“Cars 2”), especially Ying Sang and Harpreet Sawhney. From Berkeley, Ryan White provided the face data set and Hao Zhang an implementation for LARS. Vidit Jain at UMass Amherst provided comparison to Bayesian face recognition. Author Learned-Miller was partially supported by NSF CAREER Award IIS-0546666.
### References
[1] Y. Amit and D. Geman. A computational model for visual selection. Neural Computation, 11(7):1691–1715, 1999.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projections. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 19(7):711–720, 1997.
[3] S. Belongie, J. Malik, and J. Puzicha. Matching shapes. In *International Conference on Computer Vision*, pages 454–463, 2001.
[4] A. Berg, T. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. *CVPR*, pages 26–33, 2005.
[5] T. L. Berg, A. C. Berg, J. Edwards, M. Maire, R. White, Y. W. Teh, E. Learned-Miller, and D. A. Forsyth. Names and faces in the news. *CVPR*, 2:848–854, 2004.
[6] E. J. Bernstein and Y. Amit. Part-based statistical models for object classification and detection. In *IEEE Computer Vision and Pattern Recognition*, pages 734–740, 2005.
[7] V. Blanz, S. Romdhani, and T. Vetter. Face identification across different poses and illuminations with a 3d morphable model. *Proceedings of the 5th International Conference on Automatic Face and Gesture Recognition*, pages 202–207, 2002.
[8] D. Bolme, R. Beveridge, M. Teixeira, and B. Draper. The CSU face identification evaluation system: Its purpose, features and structure. *ICVS*, pages 128–138, 2003.
[9] R. Diamond and S. Carey. Why faces are and are not special: An effect of expertise. *Journal of Experimental Psychology: General*, 115:107–117, 1986.
[10] G. Dorkó and C. Schmid. Object class recognition using discriminative local features. Technical Report RR-5497, INRIA Rhone-Alpes, 2005.
[11] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. *Annals of Statistics*, 32(2):407–499, 2004.
[12] L. Fei-Fei, R. Fergus, and P. Perona. A Bayesian approach to unsupervised one-shot learning of object categories. In *International Conference on Computer Vision*, volume 2, pages 1134–1141, 2003.
[13] F. Fleuret. Fast binary feature selection with conditional mutual information. *Journal of Machine Learning Research*, 5:1531–1555, 2004.
[14] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In *13th International Conference on Machine Learning*, pages 148–156, 1996.
[15] B. Heisele, T. Poggio, and M. Pontil. Face detection in still gray images. *Massachusetts Institute of Technology Artificial Intelligence Lab*, A.I. Memo No. 521, May 2000.
[16] V. Jain, A. Ferencz, and E. Learned-Miller. Discriminative training of hyper-feature models for object identification. In *British Machine Vision Conference*, volume 1, pages 357–366, 2006.
[17] G. H. John, R. Kohavi, and K. Pfleger. Irrelevant features and the subset selection problem. In *International Conference on Machine Learning*, pages 121–129, 1994.
[18] T. Kadir and M. Brady. Scale, saliency and image description. *International Journal of Computer Vision*, 45(2):83–105, 2001.
[19] W. F. Kibble. A two-variate gamma type distribution. *Sankhya*, 5:137–150, 1941.
[20] D. Lowe. Distinctive image features from scale-invariant keypoints. *International Journal of Computer Vision*, 60(2):91–110, 2004.
[21] P. McCullagh and J. A. Nelder. *Generalized Linear Models*. Chapman and Hall, 1989.
[22] E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transforms. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 464–471, 2000.
[23] B. Moghaddam, T. Jebara, and A. Pentland. Bayesian face recognition. *Pattern Recognition*, 33:1771–1782, November 2000.
[24] G. Mori, S. Belongie, and J. Malik. Shape contexts enable efficient retrieval of similar shapes. *CVPR*, pages 723–730, 2001.
[25] H. Schneiderman and T. Kanade. A statistical approach to 3d object detection applied to faces and cars. *CVPR*, pages 1746–1759, 2000.
[26] N. Shental, A. Bar-Hillel, T. Hertz, and D. Weinshall. Computing Gaussian mixture models with EM using equivalence constraints. *NIPS*, 2003.
[27] N. Shental, T. Hertz, D. Weinshall, and M. Pavel. Adjustment learning and relevant component analysis. *ECCV*, 2002.
[28] M. Tarr and I. Gauthier. FFA: A flexible fusiform area for subordinate-level visual processing automatized by expertise. *Nature Neuroscience*, 3(8):764–769, 2000.
[29] S. Thrun. *Explanation-based neural network learning: A lifelong learning approach*. Kluwer, 1996.
[30] M. Turk and A. Pentland. Eigenfaces for recognition. *Journal of Cognitive Neuroscience*, 3(1):71–86, 1991.
[31] M. Vidal-Naquet and S. Ullman. Object recognition with informative features and linear classification. In *International Conference on Computer Vision*, pages 281–288, 2003.
[32] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In *IEEE Conference on Computer Vision and Pattern Recognition*, pages 511–518, 2001.
[33] M. Weber, M. Welling, and P. Perona. Unsupervised learning of models for recognition. *ECCV*, 1:18–32, 2000.
[34] L. Wiskott, J. Fellous, N. Krüger, and C. von der Malsburg. Face recognition by elastic bunch graph matching. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 19(7):775–779, 1997.
[35] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. *Advances in Neural Information Processing Systems*, 2002.
[36] W. Zhao, R. Chellappa, A. Rosenfeld, and P. Phillips. Face recognition: A literature survey. *ACM Computing Surveys*, 35(4):399–458, 2003.
# THE ODISHA LAND RIGHTS TO SLUM DWELLERS ACT, 2017
## TABLE OF CONTENTS
**PREAMBLE:**
**SECTIONS:**
### CHAPTER I
**PRELIMINARY**
1. Short title, extent and commencement.
2. Definitions.
### CHAPTER II
**LAND RIGHTS**
3. Land right to slum dweller.
4. Redevelopment of slums.
5. Abatement of proceedings.
### CHAPTER III
**AUTHORITY AND PROCEDURE FOR SETTLEMENT OF LAND RIGHTS**
6. Urban Area Slum Redevelopment and Rehabilitation Committee.
7. Conduct of Business of the Committee.
8. Appeal.
### CHAPTER IV
**OFFENCES AND PENALTIES**
9. Penalty for contravention of the provision.
### CHAPTER V
**MISCELLANEOUS**
10. Urban Poor Welfare Fund.
11. Protection of action taken in good faith.
12. Nodal Agency.
13. Power to remove difficulties.
14. Bar of jurisdiction of Civil Court.
15. Cognizance of Offence.
16. Power to make rules.
17. Effect of other laws.
18. Repeal and savings.
The Odisha Gazette
EXTRAORDINARY
PUBLISHED BY AUTHORITY
No. 1652, CUTTACK, MONDAY, OCTOBER 16, 2017 / ASWINA 24, 1939
LAW DEPARTMENT
NOTIFICATION
The 16th October, 2017
No. 11055-I-Legis-20/2017/L.—The following Act of the Odisha Legislative Assembly having been assented to by the Governor on the 16th October, 2017 is hereby published for general information.
ODISHA ACT 10 OF 2017
THE ODISHA LAND RIGHTS TO SLUM DWELLERS ACT, 2017
AN ACT TO PROVIDE FOR ASSIGNING LAND RIGHTS TO IDENTIFIED SLUM DWELLERS, FOR REDEVELOPMENT, REHABILITATION AND UPGRADATION OF SLUMS, AND FOR MATTERS CONNECTED THEREWITH OR INCIDENTAL THERETO.
Be it enacted by the Legislature of the State of Odisha in the Sixty-eighth Year of the Republic of India, as follows:
CHAPTER I
PRELIMINARY
Short title, extent and commencement.
1. (1) This Act may be called the Odisha Land Rights to Slum Dwellers Act, 2017.
(2) It extends to urban areas in the whole of the State of Odisha.
(3) It shall be deemed to have come into force on the 30th day of August, 2017.
Definitions.
2. In this Act, unless the context otherwise requires, —
(a) “Authorised Officer” means the head of the Slum Redevelopment and Rehabilitation Committee or any officer authorized by the State Government, by order, to exercise powers as may be prescribed;
(b) “basic urban services” means services of drinking water supply, sanitation, drainage, sewerage, solid waste disposal and street lighting;
(c) "Collector" means the Collector of a district and includes Additional District Magistrate or any officer specially appointed by the State Government to perform the functions of a Collector under this Act;
(d) "Committee" means the Urban Area Slum Redevelopment and Rehabilitation Committee constituted under section 6;
(e) "EWS" means an economically weaker section beneficiary whose household income is up to such limit as notified by the State Government, from time to time;
(f) "family" means husband, wife, unmarried son, unmarried daughter or any other person related by blood and wholly dependent on the slum dweller;
(g) "Financial Institution" means any company possessing licence under the Banking Regulation Act, 1949 to carry on banking business and includes a Housing Finance Institution which has obtained certificate of registration under the National Housing Bank Act, 1987;
(h) "Government land" means any land owned or acquired by the State Government or its undertakings or the Municipal Council or the Notified Area Council, as the case may be;
(i) "in-situ redevelopment" means the process of redevelopment of existing slum areas by providing basic civic and infrastructural services to the slum dwellers, on the land on which the slum is based;
(j) "landless person" means a person who is a citizen of India and does not own either in his own name or in the name of any member of his family any house or land, or land rights granted or inherited under this Act, in the urban area;
(k) "land right" means right to land assigned to slum dwellers under section 3;
(l) "member" means a member of the Committee or sub-committee, as the case may be, and includes the Chairperson;
(m) "occupation" means occupation of a land by a slum dweller for residential purposes;
(n) "prescribed" means prescribed by rules made under this Act;
(o) "public interest" means land usage as prescribed under the city development plan or zonal development plans under the approved city development plan or the provision of basic urban services to public at large or prohibition of human habitation in environmentally hazardous sites or ecologically sensitive sites or heritage sites;
(p) "redevelopment" means improvement to the existing slum by providing basic urban services and facilitating improvement of housing conditions in accordance with the housing scheme framed by the State Government, from time to time;
(q) "rehabilitation" means relocation of slum dwellers to other location in accordance with the housing scheme framed by the State Government, from time to time;
(r) "slum" or "slum area" means a compact settlement of at least twenty households with a collection of poorly built tenements, mostly of temporary nature, crowded together usually with inadequate sanitary and drinking water facilities in unhygienic conditions, which may be on the State Government land in an urban area;
(s) "slum dweller" means any landless person in occupation within the limits of a slum area;
(l) "tenable settlements" means the settlement as decided by the Committee, where existence of human habitation does not entail undue risk to the safety or health or life of the residents or habitation or such sites are not considered contrary to public interest or the land is not required for any public or development purpose;
(u) "untenable settlements" means such areas where existence of human habitation entails undue risk to the safety or health or life of the inhabitants themselves or where habitation on such areas is considered by the Committee not to be in public interest;
(v) "urban area" means the area comprised within the limits of Municipal Council and Notified Area Council constituted under the Odisha Municipal Act, 1950;
(w) Words and expressions used herein but not defined shall have the same meaning as assigned to them under the Odisha Municipal Act, 1950.
CHAPTER II
LAND RIGHTS
3. (1) Notwithstanding anything contained in any other State law for the time being in force, and subject to provisions of sub-section (2), every landless person, occupying land in a slum in any urban area by such date as may be notified by the State Government, shall be entitled for settlement of land and certificate of land right shall be issued in accordance with the provisions of this Act.
(2) The land shall be settled in favour of a slum dweller to the extent specified hereinafter, namely:
(a) A slum dweller shall be entitled to a land as nearly as may be,
(i) where the slum is situated within the Municipal Council area, not exceeding forty-five square meter;
(ii) where the slum is situated within the Notified Area Council area, not exceeding sixty square meter;
Provided that where a slum dweller is not getting in-situ settlement, in such event the maximum limit of land in a relocation site shall not exceed thirty square meter.
Provided further that where the slum dweller is in occupation of land in any of the area mentioned in sub-clauses (i) or (ii), less than the maximum area mentioned therein, the land in actual occupation of such slum dweller shall be settled accordingly.
(b) Where the slum dweller belongs to EWS category,
(i) the land shall be settled free of cost; and
(ii) where settlement of land is made in excess of thirty square meter subject to maximum limit fixed in clause (a), the cost of such excess land shall be
calculated at such per centum of the benchmark value of land as may be determined by the State Government, from time to time.
(c) where the slum dweller belongs to any category other than EWS, the land shall be settled at such cost which shall be calculated at such per centum of the benchmark value of the land, as may be determined by the State Government, from time to time; and
(d) where a slum dweller occupies land beyond the maximum permissible limit provided under clause (a), he shall voluntarily vacate such excess land and the Authorised Officer shall take over the possession of such excess land before the issue of the certificate of land right.
(3) The land so settled as per sub-section (1) shall be heritable but not transferable by sub-lease, sale, gift, or any other manner whatsoever:
Provided that, the land so settled may be mortgaged for the purpose of raising finance in the form of housing loan from any financial institution.
(4) The certificate of land right shall be issued jointly in the name of both the spouses in case of married persons and in the name of single head in the case of a household headed by a single person.
(5) If the slum dweller, with whom the land has been settled or right has been accrued for allotment of any land under this Act, transfers such land except by way of mortgage under sub-section (3) or uses the said land for any purpose other than residential purpose, the following consequences shall follow, namely:—
(a) the certificate of land right issued under sub-section (1) shall stand automatically cancelled;
(b) such transfer shall be null and void;
(c) no right shall accrue to the transferee in respect of such land;
(d) the Authorised Officer shall dispossess the person who is in actual possession of such land;
(e) such slum dweller shall be debarred from getting any land in future under this Act; and
(f) such slum dweller shall be guilty of an offence under this Act.
(6) The slum dweller, with whom the land has been settled under this Act, shall not hold any certificate of land right in any other urban area of the State and if he holds any such certificate, he shall surrender all such certificates to the Authorised Officer in such manner as may be prescribed.
(7) If any slum dweller is found to have obtained more than one certificate of land rights by way of misrepresentation of facts, the Authorised Officer shall, after giving reasonable opportunity of being heard to the slum dweller, cancel all the certificate of land rights and, without prejudice to the penalty that may be imposed under this Act, dispossess the person from such land.
(8) The evidence for grant of certificate of land right under sub-section (1) in favour of slum dweller shall include —
(a) Government authorized documents such as Aadhaar Card, voter identity card, ration card under National Food Security Act, 2013, smart card under Rashtriya Swasthya Bima Yojana (RSBY) or passport; or
(b) Government records such as Census, survey maps, satellite imagery, plans, reports, reports of committees and commissions, Government orders, notifications, circulars, resolutions.
(9) The certificate of land right granted under sub-section (1) shall be acceptable as evidence for address proof of residence.
4. (1) Subject to the other provisions of this Act, the land rights conferred under sub-section (1) of section 3 shall, as far as practicable, be provided in-situ and on as-is where-is basis:
Provided that, where the State Government decides that the site has untenable settlements, in such circumstances the slum dwellers shall be rehabilitated elsewhere:
Provided further that,—
(a) where it is decided that the slum dweller shall be rehabilitated elsewhere, the said site shall be utilized for any other purpose as the State Government may decide;
(b) where, after providing land in the existing slum to slum dweller, any land remains surplus, the State Government may utilize such land for any purpose as it may decide.
(2) In the event of in-situ redevelopment, the applicable planning and building regulations shall be applied, and wherever any relaxation is felt necessary for implementation of the redevelopment plan, the same may be deemed to have been granted under permissible deviation under the said regulations.
(3) During redevelopment of the slum area, transit space shall be provided to the slum dwellers for such duration as may be necessary as provided under the housing scheme issued by the State Government, from time to time.
5. All proceedings relating to eviction of slum dwellers pending on the ground of unauthorized occupation before any authority or Court under any State law shall abate on issue of certificate of land right under this Act.
CHAPTER III
AUTHORITY AND PROCEDURE FOR SETTLEMENT OF LAND RIGHTS
6. (1) For the purpose of this Act, the State Government shall, by notification, constitute a Committee called the Urban Area Slum Redevelopment and Rehabilitation Committee for each urban area with the name of such urban area, as it deems necessary and such Committee shall have the authority to approve the list of persons on whom the land under this Act shall be settled and shall exercise jurisdiction over the areas and exercise such powers and functions as may be prescribed.
(2) Every Committee shall be headed by the Collector of the district and shall comprise of such other members as may be prescribed.
(3) Without prejudice to the generality of powers and functions under sub-section (1), the Committee shall,—
(a) undertake necessary surveys, undertake spatial mapping, fix the physical boundary of the slums, identify eligible slum dwellers with community participation, prepare and publish
the list of slum dwellers to whom certificate of land right has been issued, in such manner as may be prescribed; and
(b) for the purpose of facilitating the implementation of the provisions of this Act and rules framed thereunder, constitute such sub-committee for each slum area or cluster of slums, comprising of such number of members as may be specified by the Committee.
(4) For the purpose of efficient functioning of the Committee, the State Government shall provide such officers and employees as may be notified, from time to time.
7. The procedure and conduct of business and functions of the Committee shall be such as may be prescribed.
8. (1) Subject to such rules as may be made, an appeal from any decision or order made under this Act shall lie to such officer as may be appointed by the State Government.
(2) Every appeal, preferred under this section, shall be heard and disposed of in such manner as may be prescribed.
(3) Every order passed by the Appellate Authority under this section shall be final.
CHAPTER IV
OFFENCES AND PENALTIES
9. Whoever contravenes the provisions of sub-sections (5) and (6) of section 3 or fails to comply with any notice or order issued under this Act or rules made thereunder, shall be punished with imprisonment of either description for a term which may extend to one year or with fine, which may extend to twenty thousand rupees, or with both.
CHAPTER V
MISCELLANEOUS
10. (1) There shall be constituted a fund called the Urban Poor Welfare Fund at the level of each urban local body to which the moneys received from the slum dwellers under this Act shall be credited and in addition to the same, the following receipts may also be credited to the said fund, namely:—
(a) contributions from the State Government and Central Government, if any;
(b) contributions from organizations, philanthropists, individuals and Non-Government Organisations; and
(c) any other funding source as may be notified by the State Government.
(2) The constitution and administration of the fund shall be in such manner as may be prescribed.
Explanation. — For the purpose of this section, the expression ‘urban local body’ means the Municipal Council and Notified Area Council constituted under the Odisha Municipal Act, 1950.
11. No suit, prosecution or other legal proceedings shall lie against the State Government or any officer or other employee of the State Government or the Committee or any sub-committee constituted under this Act, for anything which is, in good faith, done or intended to be done under this Act.
12. The State Government or any officer or any Authority authorized by the State Government in this behalf shall be the Nodal Agency for the implementation of the provisions of this Act.
13. (1) If any difficulty arises in giving effect to the provisions of this Act, the State Government may, by order published in the Official Gazette, make such provisions, not inconsistent with the provisions of this Act, as may appear to it to be necessary or expedient for the removal of the difficulty.
(2) Every order made under this section shall, as soon as may be after it is made, be laid before the Odisha Legislative Assembly.
14. No civil court shall have jurisdiction to entertain any suit or proceeding in respect of any matter which the State Government or Committee constituted under this Act is empowered by or under this Act to determine and no injunction shall be granted by any court or other authority in respect of any action taken or to be taken in pursuance of any power conferred by or under this Act.
15. (1) No court inferior to that of a Judicial Magistrate of First Class shall try any offence punishable under this Act.
(2) No court shall take cognizance of any offence punishable under this Act, except upon a complaint in writing made by the Authorised Officer or any officer of the State Government or Committee, authorized by the State Government.
16. The State Government may, by notification in the Official Gazette, make rules to carry out all or any of the provisions of this Act.
17. The provisions of this Act or rules made thereunder shall have effect notwithstanding anything inconsistent therewith contained in any other State laws.
18. (1) The Odisha Land Right to Slum Dwellers Ordinance, 2017 is hereby repealed.
(2) Notwithstanding such repeal, anything done or any action taken under the Ordinance, so repealed shall be deemed to have been done or taken under the corresponding provisions of this Act.
By Order of the Governor
B.P. ROUTRAY
Principal Secretary to Government
NONVOTING COMMON STOCK: STATE CONSTITUTIONAL PROHIBITIONS
To meet the needs of the business community in the 1920's, a virtually new device, nonvoting common stock, was introduced into the realm of corporate finance. The principal utility of this innovation has been that, while providing for corporate expansion and financial flexibility, it also allows retention of voting control within a select class of stockholders.\(^1\) A few courts, however, have refused to recognize the validity of this device, despite enabling legislation, because of state constitutional provisions which have been construed so as to prohibit the issuance of nonvoting stock.\(^2\)
Illustrative of this approach is *State ex rel. Dewey Portland Cement Company v. O'Brien*.\(^3\) There, in order to permit an increase in its authorized capital, a cement company sought a charter amendment dividing a proposed new stock issue into two classes: class A common and class B common, with voting rights vested only in the latter. The Secretary of State of West Virginia declined to certify the desired modification, and the Supreme Court of Appeals subsequently refused to issue a writ of mandamus to compel the Secretary to act, concluding that in so far as the Constitution of West Virginia guaranteed to every stockholder
---
\(^1\) Three reasons have been suggested for the increased utilization of classified common stock: (1) investor-speculator's demand for a share in the bounteous profits being reaped by industry during this period; (2) desire of management to acquire additional capital while at the same time retaining full control of the corporation; (3) desire of bankers and promoters to have something new to offer to the public. *Dewing, Financial Policy of Corporations* 165 (4th ed. 1941); *Dewing, The Development of Class A and Class B Stocks*, 5 *Harv. Bus. Rev.* 332 (1927). See *General Investment Co. v. Bethlehem Steel Corp.*, 87 N.J. Eq. 234, 100 Atl. 347 (1917). *Cf.* *Warren v. Pim*, 66 N.J. Eq. 353, 59 Atl. 773 (1904).
\(^2\) In the absence of conflicting constitutional and statutory provisions, nonvoting stock was early recognized. See *In re Barrow Haematite Steel Co.*, 39 Ch. D. 582 (1888). Perhaps the first American case actually to deal with the legality of nonvoting stock was *Miller v. Ratterman*, 47 Ohio St. 141, 157, 24 N.E. 496, 500 (1889). The court there said: "The promise to the preferred stockholders was to award them the first net earnings, the holders of the common stock to share in such of the net earnings as they might, by good management, be able to make over and above the 8 percent."
\(^3\) 96 S.E.2d 171 (W. Va. 1957).
the right to vote in the election of directors, including the right to vote cumulatively, the issuance of stock stripped of voting rights could not be tolerated.
The language of the West Virginia Constitution,\(^4\) considered alone, would perhaps support the court's decision. Furthermore, in Illinois\(^5\) and Delaware,\(^6\) under identical or similar provisions, the courts have likewise held that every stockholder is guaranteed an unqualified right to vote all stock registered in his name. The Missouri Supreme Court, on the other hand, has interpreted an analogous constitutional provision simply to protect those stockholders who by corporate charter are entitled to vote in the election of directors against manipulations which might deprive a minority of a voice in the corporation's affairs.\(^7\) Indeed,
---
\(^4\) W. Va. Const. art. XI, § 4 (1872) provides as follows: "The Legislature shall provide by law that in all elections for directors or managers of incorporated companies, every stockholder shall have the right to vote, in person or by proxy, for the number of shares of stock owned by him, for as many persons as there are directors or managers to be elected, or to cumulate said shares, and give one candidate as many votes as the number of directors multiplied by the number of his shares of stock, shall equal, or to distribute them on the same principle among as many candidates as he shall think fit; and such directors or managers shall not be elected in any other manner." See Note, 40 W. Va. L. Rev. 97 (1934). Since its adoption in 1872, this section has been considered by the West Virginia Supreme Court of Appeals but twice prior to the instant case. Germer v. Triple-State Natural Gas and Oil Co., 60 W. Va. 143, 54 S.E. 509 (1906); Cross v. W. Virginia Cent. and Pa. R. Co., 35 W. Va. 174, 12 S.E. 1071 (1891). In both cases it was held that a corporation could not deprive a stockholder of the right to vote cumulatively in the election of directors.
\(^5\) ILL. CONST. art. XI, § 3 (1870). In People ex rel. Watseka Telephone Co. v. Emmerson, 302 Ill. 300, 134 N.E. 707 (1922), the court denied mandamus to enforce issuance of a corporate charter by the Secretary of State because the proposed preferred stock denied the stockholder the right to vote. It was held that the words declare the meaning of the constitution, and neither the courts nor the legislatures have the right to add to or take away from that meaning. It is perhaps significant that Illinois had no statute authorizing nonvoting stock, and legislative construction of a constitution carries considerable weight in interpreting its provisions. See Comment, Corporations—Stockholders—Voting Powers—Preferred Stock—Constitutional Law, 17 ILL. L. Rev. 138 (1923). Cf. Durkee v. People ex rel. Askren, 155 Ill. 334, 40 N.E. 626 (1895); Wright v. Central Calif. Water Co., 67 Cal. 532, 8 Pac. 70 (1885).
\(^6\) DEL. CONST. art. IX, § 6 (1897). Delaware had a statute analogous to that of West Virginia which permitted the issuance of nonvoting stock. DEL. LAWS c. 273, § 20 (1899). In Brooks v. State, 3 Boyce 1, 79 Atl. 790 (Del. 1911), the court, in construing the constitutional provision, held that the statute authorizing nonvoting stock was unconstitutional and all issues of stock thereunder invalid. In 1903, however, prior to this decision, Delaware had repealed the constitutional provision in question. See DEL. LAWS c. 254, § 2 (1903).
\(^7\) MO. CONST. art. 12, § 6 (1875). In State ex rel. Frank v. Swanger, 190 Mo. 561, 89 S.W. 872 (1905), the court stated: "We hold that the evident purpose of section 6, art. 12, of our Constitution was the guaranty to stockholders having the right to vote of cumulating their votes, and has no reference to the contractual right of the stockholders inter sese of providing that preferred stockholders shall or shall not have the right to vote such stock, and to hold that it has taken away this well-recognized common-law right would be to distort its obvious purpose." Although the Swanger case dealt with nonvoting preferred stock, presumably the above quoted language would also apply to nonvoting common stock should the question of its validity ever arise.
this was apparently the interpretation that the West Virginia legislature placed on its own governing constitutional provision when it enacted its statute\(^8\) sanctioning nonvoting stock.\(^9\)
At common law, it was well settled that, absent a charter provision
\(^8\) W. Va. Code Ann. §§ 3034, 3078 (1949). As early as 1864 West Virginia enacted a statute providing that stockholders of any corporation may provide for the issue of preferred stock, upon such terms and conditions, and with such stipulations and regulations respecting preferences as they may see fit to prescribe. W. Va. Acts c. 43, § 5 (1864). This was the same statute that was in force when art. XI, § 4 of the Constitution was adopted in 1872. In 1873, one year after the enactment of the Constitution, the legislature provided that in all elections of directors, every stockholder shall have the right to vote for the number of shares of stock owned by him for as many persons as there are directors to be elected, or to cumulate said shares. W. Va. Acts c. 181, § 44 (1872-73). This same provision was re-enacted without substantial change in 1881. W. Va. Acts c. 17, § 56 (1881). In 1882, the general corporation law was enacted which included the provisions of W. Va. Acts c. 43, § 1 (1864) and c. 17, § 56 (1881). W. Va. Acts c. 96, § 16 and § 44 (1882). The present W. Va. Code Ann. § 3034 (1949) follows substantially the language of W. Va. Acts c. 96, § 16 (1882), and W. Va. Code Ann. § 3078 (1949) follows substantially the language of W. Va. Acts c. 96, § 44 (1882).
\(^9\) 2 Report of the Code Revisors of West Virginia 10 (1931): “The weight of modern judicial opinion seems to hold such a provision unconstitutional, but these decisions, while well reasoned in many respects, seem to ignore the flexibility of a state constitution to meet changing public conditions, and for this reason do not seem to give as much weight as we think should be given to the real purpose of the provision, which was to secure the right of cumulative voting. . . . Attention is also called to the fact that the constitutional provision referred to relates only to voting for directors, and does not relate to the right to vote on other corporate acts.” But see Note, 40 W. Va. L. Rev. 97 (1933) wherein Malvin G. Sperry, Chairman of the Code Commission of 1921 states: “We studied with considerable misgiving the well-reasoned cases of the People v. Emmerson, [302 Ill. 300, 134 N.E. 707 (1922)] decided by the Supreme Court of Illinois in 1922; Brooks v. The State, [3 Boyce 1, 79 Atl. 790 (Del.: 1911)] decided by the Supreme Court of Delaware in 1911; and Randle v. Winona Coal Company, [206 Ala. 315, 89 So. 790 (1921)] decided by the Supreme Court of Alabama in 1921. [These courts] answered in the negative every proposition which was or apparently could be asserted in favor of the constitutionality of legislative action of the respective states similar to the West Virginia act of 1901, in the face of the constitutional inhibition in the state of Illinois identical, and in the state of Delaware almost identical, with the provision of West Virginia. The courts of those states were not influenced by an [sic] business consideration, or obligation to prevent confusion in business affairs, or by any question of public policy. Indeed, they held that in the matter of public policy the rule clearly favored the right to give to every shareholder of a corporation one vote for each share of stock held.”
to the contrary, each stockholder was entitled to only one vote, without regard to the number of shares he held.\textsuperscript{10} The obvious inequity of this method of voting was early recognized, however, and each stockholder was soon legislatively assured a specific number of votes based upon a decreasing proportional ratio to the number of shares he held in the corporation.\textsuperscript{11} But in most jurisdictions, these safeguards were subsequently eroded by statutory provisions which entitled each stockholder to one vote for each share, thus enabling a majority bloc to exclude entirely the minority from representation.\textsuperscript{12} A resulting fear of abuse led to the introduction of cumulative voting.\textsuperscript{13} Viewed in this
\textsuperscript{10} Commonwealth v. Conover, 10 Phila. 55 (1873); Taylor v. Griswold, 2 Green 222 (N.J. 1834); \textit{In re Horbury Bridge Coal, Iron and Wagon Co.}, 11 Ch. D. 109 (1879). In Luthy v. Ream, 270 Ill. 170, 181, 110 N.E. 373, 377 (1915) it was held that: " . . . the power to vote is inherently attached to and inseparable from the real ownership of each share. . . ." See FLETCHER, \textsc{Cyclopedia of Corporations}, §§ 2025, 2045 (perm. ed. 1952); Williston, \textit{History of the Law of Business Corporations Before 1800}, 2 \textsc{Harv. L. Rev.} 105, 156 (1888). Cf. Tracy v. Brentwood Village Corp. 30 Del. Ch. 296, 59 A.2d 708 (1948); McLain v. Lanova Corp., 28 Del. Ch. 176, 39 A.2d 209, (1944); \textit{In re Giant Portland Cement Co.}, 26 Del. Ch. 32, 21 A.2d 697 (1941).
\textsuperscript{11} The Virginia statutes which first set forth the ratio of votes to the number of shares held by the stockholder, varied the ratio and made it dependent upon the categorization of the corporation. For example, in manufacturing and mining corporations, the ratio was one vote for every share up to fifteen shares; one additional vote for every five shares from fifteen shares up to one hundred shares; and one additional vote for every twenty shares over one hundred shares. \textsc{Va. Acts c. 84, § 5} (1837). In 1849, title 18 of the Code of Virginia was enacted to cover all chartered corporations and, for the first time, standardize to a degree, the ratio of votes to the number of shares held by the stockholder. \textsc{Code of Va. tit. 18, c. 57, § 10} (1849).
\textsuperscript{12} Mo. \textsc{Laws, Corp.}, § 3 (1849) provided that: "All elections shall be by ballot, and each stockholder shall be entitled to as many votes as he owns shares of stock in the said company. . . ." Gregg v. Granby Mining and Smelting Co., 164 Mo. 616, 625, 65 S.W. 312, 313 (1901): "It was evidently the purpose of our legislature to settle this question [of a stockholder's voting rights] by a positive enactment. . . . Thus, the share is made the unit of election, and not the person who owns it, regardless of the number of his shares."
Del. \textsc{Laws}, c. 273, § 20 (1899) provided that: "A stockholder shall be entitled to one vote for each share of stock he may hold [in the corporation]."
Ill. \textsc{Laws, Corp.}, § 3 (1859) provided that: "At such meeting stockholders may vote, either in person or by proxy, one vote for each share of stock held and thus represented."
\textsuperscript{13} See FLETCHER, \textsc{Cyclopedia of Corporations}, § 2048 (perm. ed. 1952); BALLANTINE, \textsc{Corporations} 404, § 177 (rev. ed. 1946). For the mechanical aspects of cumulative voting see WILLIAMS, \textsc{Cumulative Voting for Directors} 40-46 (1951). For arguments for and against the cumulative method of voting see Young, \textit{The Case for Cumulative Voting}, Wis. L. Rev. 49 (1950); Axley, \textit{The Case Against Cumulative Voting}, Wis. L. Rev. 278 (1950).
perspective, the West Virginia constitutional provision was apparently designed solely to insure representation to minority voters.
Yet, it is arguably anomalous thus to assure representation to minorities, but to deny it to nonvoting stockholders who, in fact, may represent a majority of the corporate investors. In this connection, even those jurisdictions that recognize the validity of nonvoting stock, in the face of similar constitutional impediments, have limited their sanction to nonvoting preferred stock, reasoning that preferred stockholders are otherwise protected by law or agreement and, therefore, do not need representation as a protective measure.\textsuperscript{14}
Admittedly, nonvoting stock does not conduce to the promotion of intracorporate democracy, but in reality this shortcoming would seem to be largely academic. The growth of corporations and the widespread dispersion of stock ownership have combined to reduce true stockholder democracy to an unattainable ideal.\textsuperscript{15} Illustrative of this thesis is the proxy, which, although originally designed to extend representation to the absent stockholder, has become one of the principal instruments not by which corporate democracy is sustained, but by which the right to representation is usually delegated to representatives of the control group.\textsuperscript{16}
\textsuperscript{14} \textit{State ex rel. Frank v. Swanger}, 190 Mo. 561, 89 S.W. 872 (1905); \textit{American Railway-Frog Co. v. Haven}, 101 Mass. 398 (1869); \textit{State ex rel. Danforth v. Hunton}, 28 Vt. 594 (1856). \textit{Gutheim & Dougall, Corporate Financial Policy} 91 (2d ed. 1948); \textit{Dewing, Financial Policy of Corporations} 163 (4th ed. 1941).
\textsuperscript{15} \textit{Berle and Means, The Modern Corporation and Private Property} 138 (1932). For a discussion of the status of stockholder democracy today, see \textit{Emerson and Latcham, Shareholder Democracy, A Broader Outlook for Corporations} (1954).
\textsuperscript{16} \textit{Gratz v. Claughton}, 187 F.2d 46 (2d Cir. 1951), \textit{cert. denied}, 341 U.S. 920 (1951); \textit{Pacific Gas & Electric Co. v. SEC}, 127 F.2d 378 (9th Cir. 1942), 139 F.2d 298 (9th Cir. 1943), \textit{aff'd}, 324 U.S. 826 (1944) (17.71% of stock as controlling interest); \textit{American Gas & Electric Co. v. SEC}, 134 F.2d 633 (App. D.C. 1943), \textit{cert. denied}, 319 U.S. 763 (1943) (17.5% as controlling interest); \textit{Detroit Edison Co. v. SEC}, 119 F.2d 730 (6th Cir. 1941), \textit{cert. denied}, 314 U.S. 618 (1941) (19.2% as controlling interest); \textit{Rochester Tel. Co. v. United States}, 307 U.S. 125 (1939), 39 Colum. L. Rev. 295; \textit{Natural Gas Pipeline Co. v. Slattery}, 302 U.S. 300 (1937) (33.33% as controlling interest); \textit{Koppers United Co. v. SEC}, 138 F.2d 577 (App. D.C. 1943) (14.59% as controlling interest); \textit{Morgan Stanley & Co. v. SEC}, 126 F.2d 325 (2d Cir. 1942); \textit{Berle and Means, The Modern Corporation and Private Property} 82 (1932) (14.9% given as controlling interest in Standard Oil Company). See \textit{Loss, Securities Regulation} 7-13, 458 (1951); \textit{Timberg, Corporate Fictions: Logical, Social and International Implications}, 46 Colum. L. Rev. 533, 561 (1946). Cf. \textit{Comments, Interpretation of "Holding Company" and "Affiliate" Under the Public Utility Holding Company Act}, 51 Yale L.J. 1018 (1942); \textit{Holding Company Act—"Controlling Influence"}, 40 Mich. L. Rev. 274 (1941).
Is the right to vote in ordinary managerial matters, then, a right all stockholders must possess in order to insure adequate individual investor protection?\textsuperscript{17} Nonvoting stock vests management in a select class of shares, but it does not exclude the nonvoting shareholder from exercising all of the prerogatives of corporate ownership. For example, inuring to every stockholder are, by virtue of numerous statutes and decisions, the right to vote upon the propriety of a dissolution or of a sale of all the assets and a voice in matters affecting the ratio of voting to nonvoting stock, as well as in matters concerning the surrender of the corporate charter.\textsuperscript{18} Further, although judicial policy has tended to
17 There is substantial authority for answering this question in the affirmative. Since 1926, the New York Stock Exchange has refused to list nonvoting common stock, and since 1940, certain preferred stock, the voting rights of which have been substantially curtailed. Loss, Securities Regulation 488 (1951). Similarly, the SEC may not authorize the sale of a security of a registered utility unless such security is a common stock having at least equal voting rights with any outstanding security of the declarant. Public Utility Holding Company Act, 1935, 49 Stat. 815, 15 U.S.C. § 79g(c)(1) (1952). The Federal Bankruptcy Act requires that plans of reorganization must include provisions prohibiting the reorganized company from issuing nonvoting stock. Bankruptcy Act, 1898, 52 Stat. 895 (1938), 11 U.S.C. § 616(12)(a) (1952).
18 For example, N.Y. STOCK CORP. LAW, § 105. All stock, voting and non-voting shall be considered voting for purposes of dissolution unless there is an express provision to the contrary in the charter. But, when a sale of all the assets is proposed, only those stockholders entitled to vote may vote. N.Y. STOCK CORP. LAW, § 10. Accord, N.J. STAT. ANN., § 14:13-1 (1939). Mo. REV. STAT. ANN., § 351.090 (1949), provides, in case a charter amendment would adversely affect issued and outstanding nonvoting stock, then the vote of such nonvoting stock must be taken before the amendment can be made.
Some states, however, make no distinction between voting and nonvoting stock, the effect of which is to exclude nonvoting stock from voting on such matters. W. VA. CODE ANN., §§ 3076, 3093 (1949); ILL. STAT. ANN., §§ 32.074, 32.076 (Jones, 1934); DEL. CODE ANN., tit. 8, §§ 271, 275 (1953).
The leading decision is generally considered to be Abbott v. American Hard Rubber Co., 33 Barb. 578 (N.Y. 1861). The common law rule is that a corporation has no power to sell all its property and discontinue business against the dissent of a single stockholder. Luehrmann v. Lincoln Trust & Title Co., 192 S.W. 1026, 1032 (Mo. 1917). Jones v. Bank of Leadville, 10 Colo. 464, 476, 17 Pac. 272, 278 (1887).
See Fletcher, Cyclopedia of Corporations, § 2945 (perm. ed. 1950); Sprecher, The Right of Minority Stockholders to Prevent the Dissolution of a Profitable Enterprise, 33 Ky. L.J. 150 (1945); Lattin, Equitable Limitations on Statutory or Charter Powers Given to Majority Stockholders, 30 Mich. L. Rev. 645 (1932); Berle, Nonvoting Stock and "Bankers' Control," 39 Harv. L. Rev. 673 (1926); Comments, Sale of All or Substantially All of Corporate Assets—Effect of Modern Statutes, 45 Mich. L. Rev. 341 (1947); Limitations of the Statutory Power of Majority Stockholders to Dissolve a Corporation, 25 Harv. L. Rev. 677 (1912);
view with liberality the activities of management—except where directors' actions are ultra vires or fraudulent or oppressive to the minority—it is well settled that those entrusted with the management of corporate affairs must exercise the highest degree of trust and good faith.\textsuperscript{19} This would seem especially true where one or more classes of owners are not represented by the board of directors. Thus, when every stockholder enjoys the right to vote, courts will generally decline to substitute their judgment for that of the majority, since, presumably, the interests of the majority will best serve the interests of the corporation.\textsuperscript{20} On the other hand, when one or more classes of stockholders are not represented by the directors, the presumption of good faith would seem to be attenuated, and, at the instigation of a nonvoting stockholder, courts tend carefully to evaluate alleged misconduct by the directors.\textsuperscript{21} Accordingly, it would seem that even though nonvoting
\textsuperscript{19} Fielding v. Allen, 99 F. Supp. 137 (S.D.N.Y. 1951); Otis & Co. v. Pennsylvania R. Co., 61 F. Supp. 905 (E.D. Pa. 1945), aff'd, 155 F.2d 522 (3rd Cir. 1946), 31 Va. L. Rev. 695 (1945); Abrams v. Allen, 297 N.Y. 52, 74 N.E.2d 305, reh. denied, 297 N.Y. 604, 75 N.E.2d 274 (1947), 15 U. Chi. L. Rev. 423 (1948), 48 Colum. L. Rev. 290 (1948), 33 Cornell L.Q. 421 (1948), 61 Harv. L. Rev. 541 (1948), 31 Marq. L. Rev. 294 (1948), 46 Mich. L. Rev. 683 (1948), 23 N.Y.U.L.Q. Rev. 209 (1948), 48 Stan. Intrr. L. Rev. 147 (1948), 21 So. Calif. L. Rev. 403 (1948), 96 U. Pa. L. Rev. 418 (1948), 57 Yale L.J. 489 (1948); Bayer v. Beran, 49 N.Y.S.2d 2 (1944); Turner v. American Metal Co., 268 App. Div. 239, 259, 50 N.Y.S.2d 800, 810 (1944); Chelrob, Inc. v. Barrett, 293 N.Y. 442, 57 N.E.2d 285, reh. denied, 293 N.Y. 859, 59 N.E.2d 446 (1944); Shaw v. Davis, 28 Atl. 619, 621 (Md. 1894): "... whenever any action of either directors or stockholders is relied on ... for the purpose of invoking the interposition of a court of equity, if the act complained of be neither ultra vires, fraudulent, nor illegal, the court will refuse intervention because powerless to grant it, and will leave all such matters to be disposed of by the majority of the stockholders in such manner as their interest may dictate, and their action will be binding on all, whether approved by the minority or not." See Ballantine, Corporations 160, 161 (rev. ed. 1946); Uhlman, The Duty of Corporate Directors to Exercise Business Judgment, 20 B.U.L. Rev. 488 (1940); Latty, Partial Survey of Minority Shareholder Protection in American Corporation Law, 1 J. Bus. L. 110 (1957).
\textsuperscript{20} See note 19 supra.
\textsuperscript{21} Duty of directors: Kavanaugh v. Kavanaugh Knitting Co., 226 N.Y. 185, 123 N.E. 148 (1919). See Swope, Some Aspects of Corporate Management, 23 Harv. Bus. Rev. 314 (1943); Douglas, Directors Who Do Not Direct, 47 Harv. L. Rev. 1305 (1934); Rhodes, Personal Liability of Directors for Corporate Mismanagement,
stockholders are denied direct managerial participation, they are, nevertheless, protected by statute or decision in proprietary matters.\textsuperscript{22}
It would appear, therefore, that the decision of the West Virginia court in the instant case effectively places a premium upon evasive techniques\textsuperscript{23} to vest control in a minority group.\textsuperscript{24} The court remarked that
Duty of directors and majority stockholders to nonvoting stockholders: Bates Street Shirt Co. v. Waite, 130 Me. 352, 156 Atl. 293 (1931); Kidd v. New Hampshire Traction Co., 74 N.H. 170, 66 Atl. 127 (1907). See BALLANTINE, CORPORATIONS 156 (rev. ed. 1946); Berle, Non-voting Stock and "Bankers' Control," 39 HARV. L. REV. 673 (1926).
Duty of majority stockholders to minority stockholders: Geddes v. Anaconda Copper Mining Co., 254 U.S. 599 (1921); Zahn v. Transamerica Corp., 162 F.2d 36 (3rd Cir. 1947), 36 CALIF. L. REV. 325 (1948), 33 CORNELL L.Q. 414 (1948), 61 HARV. L. REV. 359 (1948), 41 ILL. L. REV. 122 (1946) (see 63 F. Supp. 243 (D.C. Del. 1945)), 46 MICH. L. REV. 1061 (1948), 96 U. PA. L. REV. 276 (1947); Lebold v. Inland S.S. Co., 82 F.2d 351 (7th Cir. 1936); Nave-McCord Mercantile Co. v. Ranney, 29 F.2d 383 (8th Cir. 1928); Outwater v. Public Service Corp. of New Jersey, 103 N.J. Eq. 461, 143 Atl. 729 (1928), aff'd, 104 N.J. Eq. 490, 146 Atl. 916 (1929); Allied Chemical & Dye Corp. v. Steel & Tube Co. of America, 14 Del. Ch. 1, 120 Atl. 486 (1924); Kavanaugh v. Kavanaugh Knitting Co., supra; Theis v. Spokane Falls Gas Light Co., 34 Wash. 23, 74 Pac. 1004 (1904). See Lattin, Equitable Limitations on Statutory or Charter Powers Given to Majority Stockholders, 30 MICH. L. REV. 645 (1932); Berle, Corporate Powers as Powers in Trust, 44 HARV. L. REV. 1049 (1931). Note, 33 YALE L.J. 436 (1924).
Duty of directors to the corporation: See Uhlman, Legal Status of Corporate Directors, 19 B.U.L. REV. 12 (1939); Dodd, Is Enforcement of Fiduciary Duties of Corporate Managers Practicable? 2 U. CHI. L. REV. 194 (1935). Notes, 35 COLUM. L. REV. 219 (1935); 44 YALE L.J. 547 (1935); 83 U. PA. L. REV. 56 (1934); 8 WIS. L. REV. 342 (1933); 45 HARV. L. REV. 1388 (1932); 29 COLUM. L. REV. 338 (1929).
\textsuperscript{22}See note 18 supra.
\textsuperscript{23} For example: Where the issuance of nonvoting stock is not permitted, corporations may accomplish the same end result by dividing the class of stock that is to be given control into small denominations, thereby increasing its voting strength. Thus, in a corporation with $100,000 capital stock, $75,000 class A shares could be given a par value of $100 each and thereby 750 votes; the remaining $25,000 class B stock could then be divided into shares of $25 par value and thereby possess 1000 votes. Another device for achieving the same objective is "vote laden" stock, i.e., control by voting strength disproportionate to investment. Thus, the class A stock could be given one vote per share, while the class B stock has 2 votes per share. The plan most commonly used to frustrate the effectiveness of minority representation is to classify the board of directors and stagger the election of each class. A board comprised of six members whose term of office is three years may be classified into three groups whereby two directors are elected each year. This latter device, however, would not achieve the
nothing in the West Virginia Constitution prohibits stockholders of a private corporation from waiving their right to vote or from entering into an express agreement with other stockholders affecting the manner in which they exercise this right.\(^{25}\) The weight of authority supports this conclusion, even though the obvious purpose of voting control agreements is to secure or retain control of the corporation in a select group of stockholders,\(^{26}\) and even though the consideration may
\textsuperscript{26} "Minority stockholder" is used in contradistinction to controlling stockholder, though in fact, the minority may well represent a majority of the stockholders. Schmid v. Ballard, 175 Minn. 138, 220 N.W. 423 (1928); Cases cited note 18 \textit{supra}. \textit{Cf.} Alster v. British Type Investors, 83 F. Supp. 949 (S.D.N.Y. 1949); Note, 8 U. Chi. L. Rev. 335 (1941).
\textsuperscript{28} 56 S.E.2d 171, 180 (W. Va. 1957).
\textsuperscript{29} E.K. Buck Retail Stores v. Harket, 157 Neb. 867, 62 N.W.2d 288 (1954), 33 Neb. L. Rev. 636; Ringling Bros. B. & B. Combined Shows, Inc. v. Ringling, 29 Del. Ch. 610, 53 A.2d 441 (1947), 36 Calif. L. Rev. 28: (1948), 60 Harv. L. Rev. 651 (1947), 46 Mich. L. Rev. 70 (1947), 15 U. Pa. L. Rev. 738 (1948), 96 U. Pa. L. Rev. 121 (1947); Gumbiner v. Alden Inn, 389 Ill. 273, 59 N.E.2d 648 (1945). It is particularly interesting that Illinois has long recognized the validity of such agreements, while expressly prohibiting nonvoting stock. Clark v. Dodge, 269 N.Y. 410, 199 N.E. 641, 5 Brooklyn L. Rev. 336, 36 Colum. L. Rev. 836, 21 Minn. L. Rev. 103, 13 N.Y.U.L.Q. Rev. 585, 11 St. Johns L. Rev. 117, Fitzgerald v. Christy, 242 Ill. App. 343 (1926); Horn v. J.O. Nessen Lumber Co., 236 Ill. App. 187 (1925); Thompson v. J.D. Thompson Carnation Co., 279 Ill. 54, 116 N.E. 648 (1917); Luthy v. Ream, 270 Ill. 170, 110 N.E. 373 (1915); Venner v. Chicago City R. Co., 258 Ill. 523, 101 N.E. 949 (1913); Kantzler v. Bensingar, 214 Ill. 589, 73 N.E. 874 (1905); Higgins v. Lansingh, 154 Ill. 301, 40 N.E. 362 (1895); Faulds v. Yates, 57 Ill. 416 (1870). See Fletcher, \textit{Cyclopedia of Corporations}, § 2064 (perm. ed. 1923); Ballantine, \textit{Corporations} 442, § 189 (rev. ed. 1946); Delaney, \textit{The Corporate Director: Can His Hands Be Tied in Advance}, 50 Colum. L. Rev. 52 (1950); Comment, \textit{Stockholders' Control by Agreement}, 17 Fordham L. Rev. 95 (1948); Annot., 71 A.L.R. 1289 (1930). \textit{Cf.} Benintendi v. Kenton Hotel, 294 N.Y. 112, 60 N.E.2d 829 (1945) (dictum). Also, \textit{cf.} Durkee v. People \textit{ex rel.} Asken 155 Ill. 354, 40 N.E. 626 (1895). An agreement which violates express provisions of the constitution and statutes relative to the right to vote is void. Some courts, however, apparently take the view that such agreements are per se invalid as against public policy. Clark v. First Nat. Bk. of Ottumwa, 219 Iowa 637, 259 N.W. 211 (1935); Stott v. Stott, 258 Mich. 547, 242 N.W. 747 (1932), 18 Iowa L. Rev. 89; Bridges v. Staton, 150 N.C. 216, 63 S.E. 892 (1909).
Assuming a valid agreement, courts may award damages for breach of the agreement, E. K. Buck Retail Stores v. Harket, 157 Neb. 867, 62 N.W.2d 288 (1954), or grant an injunction to prevent conduct not in conformity with, and decree specific performance of, the agreement. Katcher v. Ohman, 26 N.J. Super. 28, 97 A.2d 180 (1953) (agreement enforceable by specific performance); Kronenberg v. Sullivan
consist of the purchase of stock on the faith of a promise of others similarly to purchase under the terms of the agreement.\textsuperscript{27} Perhaps the only limitation is that the agreement must not work a hardship on the corporation or in any way oppress creditors or other stockholders not parties to it.\textsuperscript{28}
Stockholders, however, generally may not irrevocably sever by contract the voting rights from stock ownership.\textsuperscript{29} Thus, even though a
County Steam Laundry Co., 91 N.Y.S.2d 144, \textit{aff'd without op.}, 277 App. Div. 916, 98 N.Y.S.2d 658, \textit{motion to reargue denied}, 278 App. Div. 726, 103 N.Y.S.2d 660 (1949) (violation enjoined); Ringling Bros. B. & B. Combined Shows, Inc. v. Ringling, 29 Del. Ch. 610, 53 A.2d 441 (1947) (votes of stockholder who breached agreement treated as of no effect); Martucci v. Martucci, 42 N.Y.S.2d 223, \textit{aff'd without op.}, 266 App. Div. 840, 43 N.Y.S.2d 516, \textit{app. denied}, 266 App. Div. 917, 43 N.Y.S.2d 517 (1943) (agreement enforceable by specific performance); Harris v. Magrelli, 131 Misc. 380, 226 N.Y.S. 621 (1928) (violation of agreement enjoined); Clark v. Dodge, 269 N.Y. 410, 199 N.E. 641 (1936) (agreement enforceable by specific performance); Fitzgerald v. Christy, 242 Ill. App. 343 (1926) (violation of agreement enjoined).
\textit{Contra}, Haldeman v. Haldeman, 176 Ky. 635, 197 S.W. 376 (1917). Although holding the particular agreement was invalid as against public policy, the court stated that even if it was assumed that the contract was valid, a court of equity would not grant specific performance. The court said to do so would be in effect to have the court elect the directors, a matter which has historically been within the province of the stockholders. Gage v. Fisher, 5 N.D. 297, 65 N.W. 809 (1895). See Annot., 71 A.L.R. 1289 (1930).
\textsuperscript{27} Asher v. Ruppa, 173 F.2d 10 (7th Cir. 1949); Gray v. Bloomington & Normal Ry., 120 Ill. App. 159 (1905); Smith v. San Francisco & N.P. Ry. Co., 115 Cal. 584, 47 Pac. 582 (1897). \textit{Contra}, Johnson v. Spartanburg County Fair Ass'n, 210 S.C. 56, 41 S.E.2d 599 (1947).
\textsuperscript{28} Ford v. Magee, 160 F.2d 457 (2d Cir. 1947), \textit{cert. denied}, 332 U.S. 759 (1947); Feich v. Kaufman, 174 Ill. App. 306 (1912); McQuade v. Stoneham, 263 N.Y. 323, 189 N.E. 234, \textit{reh. denied}, 264 N.Y. 460, 191 N.E. 514 (1934). An agreement among a minority in number for the purpose of obtaining control of the corporation by election is not illegal since stockholders have the right to combine their interests and voting power to secure control of the corporation. It is only when such agreements contravene express charter or statutory provisions or contemplate any fraud, oppression, or wrong against other stockholders or an illegal object, that they are invalid and not binding upon the parties thereto. Manson v. Curtis, 223 N.Y. 313, 119 N.E. 559 (1918).
\textsuperscript{29} Luthy v. Ream, 270 Ill. 170, 110 N.E. 373 (1915); Gage v. Fisher, 5 N.D. 297, 65 N.W. 809 (1895). See Fletcher, \textit{Cyclopedia of Corporations}, § 2065 (perm. ed. 1952); Comment, \textit{Separation of the Voting Power from Legal and Beneficial Ownership of Corporate Stock}, 47 Mich. L. Rev. 547 (1949); Note, 61 Harv. L. Rev. 1062 (1948). \textit{Contra}, White v. Snell, 35 Utah 434, 100 Pac. 927 (1909); Smith v. San Francisco & N.P. Ry. Co., 115 Cal. 584, 47 Pac. 582 (1897) (neither is it illegal nor against public policy to separate the voting power of the stock from its ownership). Cf. Winsor v. Commonwealth Coal Co., 63 Wash. 62, 114 Pac. 908 (1911) (statute specifically provided for the separation).
stockholder may enter into a valid contract delegating authority to vote his stock, he, nevertheless, retains the power to abrogate the agreement when the delegatee threatens to exercise the vote in a manner inimical to the stockholder's best interests.\textsuperscript{30} Furthermore, it is arguable that a stockholder who is able contractually to delegate his voting rights in an arm's-length voting control agreement enjoys a somewhat more favorable bargaining position than one whose right to vote is denied by his purchase of stock with predetermined rights and privileges.
Voting-control agreements, nevertheless, do not provide the flexibility in corporate finance or the complete close stockholder control that non-voting stock accomplishes. Furthermore, the practical difficulties inherent in any attempt to secure such flexibility and control by means of voting-control agreements tend to render their use ineffectual in large and widely-held corporations. Consequently, the Supreme Court of Appeals of West Virginia has foreclosed the most effective means of obtaining corporate financial flexibility and close stockholder control that was formerly thought to exist.
If, then, complete stockholder democracy is a practical impossibility, and if the West Virginia Constitution is sufficiently ambiguous to warrant the construction that the vigorous dissent in the instant case espoused and which the Missouri court, in interpreting its analogous constitutional provision, adopted, the result here is unfortunate.\textsuperscript{31} Unless a court is bound by clear and unequivocal language, it should proceed with caution in nullifying a statute upon which extensive business practice and expectations have been built.\textsuperscript{32}
\textsuperscript{30} Shepaug Voting Trust Cases, 60 Conn. 553, 24 Atl. 32 (1890); Warren v. Pim, 66 N.J. Eq. 353, 59 Atl. 773 (1904); Morel v. Hoge, 130 Ga. 625, 61 S.E. 487 (1908); Bridges v. First Nat. Bank, 152 N.C. 293, 67 S.E. 770 (1910).
\textsuperscript{31} See note 3 supra. The dissenting judge argued, first, that the constitutional provision appeared under the subhead "Rights of Stockholders," and a stockholder has no rights except those acquired by contract; second, that great deference should be given to past administrative and legislative interpretation of the provision; and finally, that even assuming the provision was intended to guarantee the right to vote to every stockholder, there was no valid reason why the stockholders could not waive that right.
\textsuperscript{32} The West Virginia Legislature has proposed an amendment to article XI, section 4 of the constitution which will provide in part as follows: "The Legislature shall provide by law that every corporation, . . . shall have power to issue one or more classes and series within classes of stock, with or without par value, with full, limited or no voting powers. . ." Senate Bill No. 251.
A Test of Neutrality Based on Interlocus Associations
John K. Kelly\(^1\)
*Department of Ecology and Evolution, University of Chicago, Chicago, Illinois 60637*
Manuscript received October 21, 1996
Accepted for publication April 18, 1997
ABSTRACT
The evolutionary processes governing variability within genomic regions of low recombination have been the focus of many studies. Here, I investigate the statistical properties of a measure of interlocus genetic associations under the assumption that mutations are selectively neutral and sites are completely linked. This measure, denoted \(Z_{ns}\), is based on the squared correlation of allelic identity at pairs of polymorphic sites. Upper bounds for \(Z_{ns}\) are determined by simulations. Various deviations from the neutral model, including several different forms of natural selection, will inflate the value of \(Z_{ns}\) relative to its neutral theory expectations. Larger than expected values of \(Z_{ns}\) are observed in genetic samples from the *yellow-achaete-scute* and *Adh* regions of *Drosophila melanogaster*.
THERE is now a substantial body of statistical theory to test hypotheses regarding gene sequence evolution (Ewens 1990; Hudson 1990; Kreitman 1990). Much of this theory predicts the patterns of genetic variation that are likely to arise in the absence of natural selection (Kimura 1983). These neutral models provide a null hypothesis that can be tested against genetic data. They also provide a basis for comparison when models involving natural selection are considered.
The simplest tests of neutrality concern gene sequence variation within samples from a single population at a single genetic locus (a nonrecombining sequence). The current set of test statistics in this category measures the frequency distribution of mutant alleles within the sample (Watterson 1978; Tajima 1989a; Fu and Li 1993; Braverman *et al.* 1995; Simonsen *et al.* 1995). A second important characteristic of gene sequence variation, not directly measured by these tests, is the pattern of associations among mutant alleles at different polymorphic sites.
The most commonly used measure of interlocus genetic associations is the linkage disequilibrium. Consider a population of sequences that is polymorphic for two alternative alleles at a series of nucleotide sites. Let \(p_i\) and \(p_j\) denote the frequency of the mutant allele at the \(i\)th and \(j\)th loci, respectively. The linkage disequilibrium between loci \(i\) and \(j\), denoted \(D_{ij}\), is
\[
D_{ij} = p_{ij} - p_i p_j,
\]
where \(p_{ij}\) is the frequency of sequences that have mutant alleles at both sites.
The magnitude of the linkage disequilibrium depends on both the strength of association between the two loci and the frequency of mutant alleles at each locus. A standardized measure of linkage disequilibrium (ranging from 0 to 1) is \(\delta_{ij}\), the squared correlation of allelic identity between loci \(i\) and \(j\) (Hartl and Clark 1989, pp. 53–54):
\[
\delta_{ij} = \frac{D_{ij}^2}{p_i (1 - p_i) p_j (1 - p_j)}.
\]
It is noteworthy that \(\delta_{ij}\) yields the same value regardless of which alternative allele at each locus is considered the mutant. Thus, it can be calculated without information about the ancestral allele at each site.
In general usage, the linkage disequilibrium (and any statistic derived from it) is defined by the haplotype frequencies within the entire population. Here, we will be concerned with the values of \(D_{ij}\) and \(\delta_{ij}\) within a sample of gene sequences (calculated from the haplotypic frequencies within the sample). I investigate the properties of a sample statistic, \(Z_{ns}\), that averages \(\delta_{ij}\) over all pairwise comparisons of \(S\) polymorphic sites in a sample of \(n\) sequences:
\[
Z_{ns} = \frac{2}{S(S-1)} \sum_{i=1}^{S-1} \sum_{j=i+1}^{S} \delta_{ij}.
\]
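For readers who want to compute $Z_{ns}$ from data, the following minimal sketch (not code from the paper) assumes the sample is coded as an $n \times S$ 0/1 haplotype matrix with one column per polymorphic site; because $\delta_{ij}$ is invariant to which allele is coded as 1, the arbitrary coding does not affect the result.

```python
import numpy as np

def zns(haplotypes):
    """Average squared allelic correlation (Z_ns) over all pairs of sites.
    `haplotypes` is an (n, S) 0/1 array; every column must be polymorphic
    (0 < p_i < 1), since delta_ij is otherwise undefined."""
    n, S = haplotypes.shape
    p = haplotypes.mean(axis=0)  # mutant-allele frequency at each site
    total = 0.0
    for i in range(S - 1):
        for j in range(i + 1, S):
            p_ij = np.mean(haplotypes[:, i] * haplotypes[:, j])  # joint frequency
            D = p_ij - p[i] * p[j]                               # disequilibrium D_ij
            total += D**2 / (p[i] * (1 - p[i]) * p[j] * (1 - p[j]))  # delta_ij
    return 2.0 * total / (S * (S - 1))
```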
The probability distribution of \(Z_{ns}\), conditional on \(n\) and \(S\), is investigated via computer simulations. The simulations are limited to neutral evolution within sequences of completely linked sites. The consequences of various deviations from the neutral model on \(Z_{ns}\) are discussed with specific attention to the effects of selection at linked sites. Finally, \(Z_{ns}\) is estimated from previously published genetic data of *Drosophila melanogaster* from the *yellow-achaete-scute* (*y-ac-sc*) region (Martin-Campos *et al.* 1992) and the *Adh* locus (Kreitman 1983; Laurie *et al.* 1991).
SAMPLE GENEALOGIES
The expected patterns of genetic variation within a set of gene sequences from a natural population can be understood by considering the genealogy of the sample. Proceeding backward in time, sequence lineages will “coalesce” at points of shared ancestry. Eventually, all sequence lineages will converge on a single common ancestor (Kingman 1982).
This sample genealogy implies certain relationships between sequences and constrains the patterns of genetic variation that may be observed in that sample. There is an extensive mathematical theory describing the stochastic properties of sample genealogies for sequences evolving via neutral mutations (Watterson 1975; Kingman 1982; Tavare 1984). I will mention two important features of this theory for the special case of a population governed by a Wright-Fisher demographic model (random mating, diploidy, population size constancy, discrete non-overlapping generations, and binomially distributed individual reproductive success) and an infinite sites mutation model (all mutations occur at previously monomorphic sites and the number of mutations introduced during production of a progeny sequence is Poisson distributed).
Let $T_k$ denote the total time (measured in units of $2N$ generations) during which the genealogy has exactly $k$ lineages, where $k$ ranges from 2 to $n$. Under the neutral model described above, the $T_k$ are exponentially distributed and stochastically independent. The total branch length of the genealogy, denoted $T_{cum}$, is equal to
$$T_{cum} = \sum_{k=2}^{n} k \, T_k.$$
Second, the mutational process is stochastically independent of the genealogical process. In figurative terms, mutations are randomly “sprinkled” onto the sample genealogy and each mutation is present in all descendants of the sequence onto which it is dropped (Hudson 1990). The total number of mutations on the genealogy, which equals the total number of polymorphic sites in the sample, is a Poisson random variable with the mean equal to the product of the mutation rate and $T_{cum}$.
**Effect of population genealogy on $\delta_{ij}$:** In the absence of recombination, $\delta_{ij}$ is a measure of allele frequency equivalency across loci and will equal 1 only if two of the four possible two-locus haplotypes are present in a sample ($p_i = p_j = p_{ij}$). It is less than 1 if three haplotypes are present. Imagine that mutations occur on the genealogy in Figure 1A at the points denoted a, b, c, d, e, and f. Fifteen contrasts between polymorphic sites are averaged to determine $Z_{ns}$. Two of these contrasts, $\delta_{ab}$ and $\delta_{ef}$, are equal to 1. The remaining 13 contrasts yield $\delta_{ij}$ values that are less than 1.
All contrasts between polymorphic sites occurring on lineages that go back in time, unbifurcated, directly to the common ancestor of the entire sample, give $\delta_{ij}$ values of 1. This “critical region” of the genealogy includes both lineages during the $T_2$ time interval and some subsequent segments. For this reason, we expect larger values of $Z_{ns}$ for genealogies with a long period of history with only two ancestors (if $T_2$ is large, the
critical region should constitute a high proportion of the genealogy). The time $T_2$ has the largest expectation and the largest variance of all time intervals under the coalescent model (Hudson 1990). Thus, we expect random variation in $T_2$ to generate much of the variation in $Z_{ns}$.
Deviations from the neutral genealogy can arise from a number of factors and theoretical studies have explored how different evolutionary processes, including selection at linked sites, affect sample genealogies (Hudson and Kaplan 1988; Kaplan et al. 1988, 1989; Takahata 1988; Slatkin 1989; Tajima 1989b). Figure 1B illustrates a sample genealogy for a neutral sequence if it is completely linked to a site where an advantageous mutation has recently swept to fixation (Maynard Smith and Haigh 1974; Kaplan et al. 1989; Braverman et al. 1995). The same type of genealogy may result if the population has recently experienced a severe bottleneck in size (Tajima 1989b).
These circumstances are likely to lead to a "star genealogy" with all sequences separated by an approximately equal amount of evolutionary time. In a star genealogy, $\delta_{ij} = 1$ if and only if the two mutations occur on the same branch. This quantity equals $1/(n-1)^2$ if the two mutations occur on different branches. Weighting these two values by their relative likelihoods (a given pair of mutations falls on the same branch with probability $1/n$), the expected value of $\delta_{ij}$, and also of $Z_{ns}$, is $1/(n-1)$. If $n > 2$, the probability that $Z_{ns}$ equals 1 is $(1/n)^{S-1}$. Although each lineage of a star genealogy goes back in time, unbifurcated, directly to the common ancestor of the entire sample, there is no critical region. Thus, we expect smaller values of $Z_{ns}$ under a star genealogy than under a neutral genealogy.
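The $1/(n-1)$ expectation for a star genealogy is easy to verify numerically; the sketch below (an illustration, not part of the original analysis) drops each of $S$ mutations uniformly onto one of $n$ equal branches and reuses the `zns` helper defined earlier.

```python
rng = np.random.default_rng(0)

def star_zns(n, S):
    """Z_ns when S mutations land uniformly on the n branches of a star genealogy."""
    hap = np.zeros((n, S), dtype=int)
    hap[rng.integers(0, n, size=S), np.arange(S)] = 1  # one carrier per mutation
    return zns(hap)

# The mean approaches 1/(n - 1), e.g., ~0.111 for n = 10:
print(np.mean([star_zns(10, 6) for _ in range(20000)]))
```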
Selection at linked sites will affect the genealogy of a neutral sequence in a different way if recombination occurs between the selected site and the neutral sequence. If recombination is infrequent, we expect that most sequences within a sample will have a recent common ancestor in the sequence that was initially linked to the selectively favored mutation. However, if a recombination event occurs between the selected site and the neutral sequence and this recombination occurred during the sojourn of the beneficial mutation, the most recent common ancestor between this recombinant sequence and the rest of the sample may be far more ancient (Figure 1C; see also Figure 2 in Braverman et al. 1995). Most sequences in the sample of Figure 1C are closely related to each other (linked by a star genealogy). However, the sample also contains one (or a few) sequences that are distantly related to the entire set of sequences in the star genealogy. This will significantly inflate $Z_{ns}$ because all mutations over a high proportion of the genealogy will yield $\delta_{ij} = 1$ (there is a large critical region of the genealogy). Thus, while selective sweeps with no recombination (Figure 1B) should reduce $Z_{ns}$, sweeps with recombination may give $Z_{ns}$ values that exceed the neutral expectation.
Balancing selection of two alternative alleles at a linked locus (Figure 1D) will produce a genealogy that is qualitatively different from the others in Figure 1. Sequences within an "allelic class," where allele refers to the alternatives at the selected locus, are likely to have a recent common ancestor (the top or bottom set of sequences in Figure 1D). However, recombination between the neutral sequence and the selected site is necessary for coalescence of sequences from distinct allelic classes. This is likely to take much longer. Again, because the topology is dominated by the period where only two sequences are present, $Z_{ns}$ is likely to be high.
**THEORY**
The cumulative distribution function of $Z_{ns}$ can be investigated by conditioning on the genealogy of the sample
$$\text{Prob}[Z_{ns} < c] = \sum_G \text{Prob}[Z_{ns} < c | G] \text{ Prob}[G], \quad (5)$$
where $c$ ranges from 0 to 1 and $G$ denotes the characteristics of the sample genealogy including branch lengths and topology. The sum is taken over all possible genealogies and $\text{Prob}[G]$ denotes the likelihood of any specific genealogy. $\text{Prob}[Z_{ns} < c | G]$ is the probability that $Z_{ns} < c$ given the genealogy $G$.
This probability is approximated by averaging over a large number of simulations:
$$\text{Prob}[Z_{ns} < c] \approx \frac{1}{W} \sum_{i=1}^{W} \text{Ind}[Z_{ns}(i) < c], \quad (6)$$
where $Z_{ns}(i)$ is the calculated value of $Z_{ns}$ from the $i$th simulation and $W$ is the number of simulations. $\text{Ind}[Z_{ns}(i) < c]$ is a function that equals 1 if $Z_{ns}(i) < c$ and equals 0 if not.
Each simulation of evolution was obtained from the following algorithm based on the general methodology described by Hudson (1990, 1993): (1) establish a random genealogy via simulation (see below), (2) randomly drop $S$ mutations onto the genealogy, (3) determine the mutations present on each sample sequence given the position of mutations on the genealogy, (4) calculate $Z_{ns}$ and store the result. A random genealogy was established by first simulating the branch lengths, the $T_k$ in Equation 4, by drawing exponential random numbers with the appropriate parameter. The topology was established by randomly coalescing lineages until the common ancestor of the entire sample was obtained. The topological information was stored in an array that noted, for each branch in the genealogy, whether or not it is ancestral to each of the $n$ sample sequences (Hudson 1990). Mutations occurring on a branch will be present on each sample sequence to which it is ancestral.
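A compact version of this algorithm is sketched below (assuming the `zns` helper above; the branch bookkeeping is simplified relative to the array representation described by Hudson 1990, and this is not the author's original program). Each closed branch records the set of sample sequences it is ancestral to, and mutations are assigned to branches with probability proportional to branch length.

```python
def simulate_zns(n, S):
    """One neutral draw of Z_ns conditional on n and S: simulate a coalescent
    genealogy, then drop S mutations in proportion to branch length."""
    active = [[frozenset([i]), 0.0] for i in range(n)]  # [descendant set, branch length]
    closed = []
    k = n
    while k > 1:
        t = rng.exponential(2.0 / (k * (k - 1)))  # T_k: exponential with rate k(k-1)/2
        for lin in active:
            lin[1] += t                           # every active branch lengthens by t
        i, j = rng.choice(k, size=2, replace=False)
        closed += [active[i], active[j]]          # both coalescing branches end here
        merged = [active[i][0] | active[j][0], 0.0]
        active = [lin for m, lin in enumerate(active) if m not in (i, j)]
        active.append(merged)
        k -= 1
    sets = [s for s, _ in closed]                 # the root branch is never closed
    lens = np.array([l for _, l in closed])
    hap = np.zeros((n, S), dtype=int)
    picks = rng.choice(len(closed), size=S, p=lens / lens.sum())
    for col, b in enumerate(picks):
        hap[list(sets[b]), col] = 1               # mutation inherited by all descendants
    return zns(hap)
```

Repeated draws give the Monte Carlo approximation in Equation 6; for example, `np.percentile([simulate_zns(25, 7) for _ in range(10000)], 95)` estimates $F_{95}$ for $n = 25$ and $S = 7$.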
### TABLE 1

**Upper bounds for $Z_{ns}$**

[Table 1 tabulates, for each combination of sample size $n$ (columns: 3–80) and number of polymorphic sites $S$ (rows: 2–50), the critical values $F_{95}$ (top number in each position; ~5% of simulated $Z_{ns}$ values exceed it) and $F_{97.5}$ (bottom number). The numerical entries are not legible in this copy.]
This distribution of $Z_{ns}$ is characterized by $F_{95}$ and $F_{97.5}$, where ~5 and 2.5% of the simulated values of $Z_{ns}$ are above these bounds, respectively (Table 1). It is probably most appropriate to consider $Z_{ns}$ a “one-tailed” test of the neutral model [higher than expected values of $Z_{ns}$ suggest some form of selection at linked sites (Figure 1), whereas lower than expected values may be caused by nothing more than intragenic recombination]. In a one-tailed test, the 95th and 97.5th percentiles denote $P$ values of 0.05 and 0.025, respectively. Table 1 gives these percentiles for $n$ ranging from 3 to 80 and $S$ ranging from 2 to 50. A library of the full cumulative distribution functions for each case (from which $P$ values can be assigned to any specific value of $Z_{ns}$) has been compiled on computer files and is available from the author upon request.
The distribution of $Z_{ns}$ is typically skewed with its most likely values less than the mean (Figure 2). The distribution is not smooth and often exhibits “spikes” at specific values, especially when $S$ is small (compare Figure 2, A–C). These small scale changes in probability are not caused by sampling error associated with the limited number of simulated genealogies (they emerge at the same points in distinct sets of simulations with the same values for $n$ and $S$).
### TABLE 2

**The percentage of type I errors in 20,000 simulations of evolution for given values of $n$ and $\theta$**
| $\theta$ | $n = 5$ | $n = 10$ | $n = 25$ | $n = 50$ |
|----------|---------|----------|----------|----------|
| $5/f_n$  | 1.9 | 4.4 | 5.1 | 5.3 |
| $10/f_n$ | 3.7 | 5.4 | 5.2 | 5.5 |
| $20/f_n$ | 4.7 | 5.1 | 5.3 | 5.5 |
Here $f_n = \sum_{i=1}^{n-1} 1/i$.
The position of these discontinuities can be predicted from the number of mutations that fall within the critical region of the genealogy. The distribution with $n = 30$ and $S = 5$ has spikes at each of the points predicted by this extreme model (Figure 2A). With higher numbers of polymorphic sites (e.g., Figure 2, B and C), the spikes corresponding to high numbers of mutations in the critical region are discernible, but spikes at lesser values are smoothed away.
**Simulation testing of upper bounds:** The objective of the prior simulations was to determine the distribution of $Z_{ns}$ conditional on $n$ and $S$. The first step of the simulation algorithm was to generate a genealogy by the standard coalescent method given a sample of size $n$ (Kingman 1982). The second step, where $S$ mutations are randomly sprinkled onto the sample genealogy, is the means by which the distribution is conditioned on $S$. This procedure is based on the idea that $S$ alone provides no information about the genealogy (Hudson 1993).
To determine whether $F_{95}$, as determined in this way, represents a valid test statistic, I conducted a second set of simulations. Specifically, we need to determine if a population of a given size and mutation rate that is undergoing neutral evolution produces data that are inconsistent with the neutral model only 5% of the time. In this set of simulations, the sample size and scaled mutation rate, $\theta$, were specified in advance (not the number of polymorphic sites). Given these two quantities, standard coalescent simulations were performed by the following algorithm: (1) establish a random genealogy via simulation, (2) simulate the number of mutations on the genealogy given $\theta$ and the total branch length of the simulated genealogy ($T_{cum}$), (3) randomly drop mutations onto the genealogy, (4) determine the mutations present on each sample sequence given the position of mutations on the genealogy, (5) calculate $Z_{ns}$ and store it with the $S$ value. In contrast to the previous procedure, each simulation may differ from others in $S$ as well as $Z_{ns}$.
I performed 20,000 simulations for each of a range of values of $n$ and $\theta$ (Table 2). All simulations with fewer than two polymorphic sites were discarded because $Z_{ns}$ is undefined for these cases (and would not be used on such data). The set of simulations for each value of $n$ and $\theta$ was subdivided by $S$ and, for each $S$, the fraction of “false positives” ($Z_{ns}$ values above $F_{95}$) was calculated. The overall false positive percentage is an average over all values of $S$ weighted by the fraction of simulations resulting in $S$ polymorphic sites.
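This check can be sketched as follows (again reusing the helpers above; `branch_sets_and_lengths(n)` stands for the genealogy-construction loop of `simulate_zns` factored out, and `f95` for a hypothetical lookup of the Table 1 bounds — both names are illustrative assumptions). The Poisson mean follows from measuring time in units of $2N$ generations, so that each lineage accumulates mutations at rate $\theta/2$ and $E[S] = \theta f_n$.

```python
def false_positive_rate(n, theta, f95, reps=20000):
    """Fraction of neutral simulations, conditioned on theta rather than S,
    whose Z_ns exceeds the tabulated bound f95[S] (sketch of the Table 2 check)."""
    hits, trials = 0, 0
    for _ in range(reps):
        sets, lens = branch_sets_and_lengths(n)    # hypothetical refactor (see text)
        S = rng.poisson(theta * lens.sum() / 2.0)  # mutation count given theta and T_cum
        if S < 2:
            continue                               # Z_ns undefined; discard
        hap = np.zeros((n, S), dtype=int)
        for col, b in enumerate(rng.choice(len(lens), size=S, p=lens / lens.sum())):
            hap[list(sets[b]), col] = 1
        trials += 1
        hits += zns(hap) > f95[S]
    return 100.0 * hits / trials                   # percentage, as in Table 2
```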
In most cases, the false positive rate was $\sim 5\%$ (Table 2). However, for the lowest values of $n$ and $\theta$, the percentage was substantially less (the test becomes too conservative). The low rate of false positives is due to the fact that, with low values of $n$ and $S$, no result is inconsistent with the neutral model (Table 1). For example, with $n = 5$ and $\theta = 5/f_n = 2.40$, 16,447 of 20,000 simulations yielded two to eight polymorphic sites for which even $Z_{ns} = 1$ is not inconsistent with the neutral model.
This second set of simulations generally supports $F_{95}$ as an upper bound for testing of the neutral model. This provides some justification for the general method suggested by HUDSON (1993) for generating distributions conditional on $S$: simulate a random genealogy and then sprinkle on $S$ mutations. I have also investigated two alternative methods for estimating Prob[$Z_{ns} < c$]. In these methods, probabilities were determined by conditioning on $\theta$ as well as on $S$: a standard coalescent simulation was performed with a random number of mutations, and all simulations that did not result in exactly $S$ mutations were discarded (which is the means by which conditioning affects the distribution). I found that, for a given value of $S$, the expected value of $Z_{ns}$ tends to decrease as $\theta$ increases. The first method, described by BERGER and BOOS (1994) and employed in coalescent models by SIMONSEN et al. (1995), is to perform the simulations with the lowest value of $\theta$ that is consistent with $S$. A second method, which can be justified from Bayes' theorem (J. K. KELLY, unpublished results), involves simulating evolution over a range of values of $\theta$. The distribution of $Z_{ns}$ was then obtained by averaging the conditional probabilities, Prob[$Z_{ns} < c | S, \theta = x$], weighted by the relative likelihood of observing $S$ polymorphic sites given that $\theta = x$. However, I found that neither of these alternative methods performed as well as the simpler procedure described above. The BERGER and BOOS (1994) method yields $F_{95}$ values that are too conservative. The Bayesian method typically yielded false positive rates that exceeded 5%.
### APPLICATIONS TO GENETIC DATA FROM *D. melanogaster*
**Yellow-achaete-scute region:** MARTIN-CAMPOS et al. (1992) surveyed 10 *D. melanogaster* populations (seven in Europe, two in the USA, and one in Japan) for restriction site variation in a 23.1-kb region on the X chromosome and identified 14 polymorphic sites. This area, the *y-ac-sc* region, is close to the telomere and is known to have a very low rate of recombination (DUBININ et al. 1937; BEECH and LEIGH-BROWN 1989). Four of the seven European populations had sufficient variation ($S > 1$) to calculate $Z_{ns}$, and three of the four have significantly higher values of $Z_{ns}$ than expected under the neutral model (Table 3). Both American populations have $Z_{ns}$ values that are slightly higher than expected but well within the range consistent with the neutral model (Table 3). Finally, the Japanese population has the highest possible value for the test statistic, $Z_{ns} = 1$. However, given the small sample size ($n = 8$) and number of polymorphic sites ($S = 3$), we expect this result under the neutral model ~14% of the time.
**Adh locus:** A balanced polymorphism of alternative electrophoretic alleles in the Alcohol Dehydrogenase gene (*Adh*) of *D. melanogaster* has been suggested by geographical (OAKESHOTT et al. 1982), population genetic (HUDSON et al. 1987; KREITMAN and HUDSON 1991), and biochemical analyses (AQUADRO et al. 1986; LAURIE et al. 1991; LAURIE and STAMS 1994). A collection of *Adh* sequences is presented in Figure 3. Sequences numbered 1–6 and 10–14 were obtained by KREITMAN (1983). Sequences numbered 7–9 and 15 were obtained by LAURIE et al. (1991). Finally, the alleles numbered 16–18 are previously unpublished sequences generously provided by MARTIN KREITMAN.
Figure 3 lists all polymorphic sites in this sample within the third intron and the translated region of the fourth exon (which includes the allozyme polymorphism) of the *Adh* gene. The position numbers in Figure 3 follow the scheme in Figure 2 of LAURIE et al. (1991) and increase from the 5' to 3' direction. The consensus sequence is derived from KREITMAN (1983). The allozyme polymorphism is located at site 1490 where the consensus nucleotide A denotes the "slow" allele and the C nucleotide denotes the "fast" allele.
The *Adh* gene is on the second chromosome of *D. melanogaster* and there is a higher rate of recombination in this area than in the *y-ac-sc* region. Thus, we expect that linkage disequilibrium will decay more rapidly with distance in *Adh*. For this reason, an "expanding window" analysis was applied to the sequence data. The statistic $Z_{ns}$ was calculated by contrasting sequence variation within a window of sites around the fast/slow polymorphism (e.g., HUDSON and KAPLAN 1988; KREITMAN and HUDSON 1991), where a window of width $2k$
### TABLE 3
**Summary of restriction site data**
| Population | $n$ | $S$ | $Z_{ns}$ | $P$ |
|-----------------------------|-----|-----|----------|-----|
| 1. Groningen, Holland | 25 | 7 | 0.33 | 0.21|
| 2. Canary Islands | 25 | 7 | 0.70 | 0.029|
| 3. Barcelona, Spain | 50 | 7 | 0.52 | 0.041|
| 4. Huelva, Spain | 23 | 5 | 1.00 | 0.015|
| 5. Texas, United States | 27 | 8 | 0.27 | 0.28 |
| 6. North Carolina, United States | 20 | 8 | 0.34 | 0.24 |
| 7. Fukuoka, Japan | 8 | 3 | 1.00 | 0.14 |
From MARTIN-CAMPOS et al. (1992). $P$ denotes the fraction of simulations equal to or greater than observed value of $Z_{ns}$. The populations from Requena, Oviedo, and Leon did not have $S > 1$ and are not included here.
includes all polymorphisms within $k$ sites of the fast/slow polymorphism. The width $k$ was progressively expanded from 0 to 140 sites and the statistics recalculated whenever a new polymorphism was included.
Each point in Figure 4 denotes the value of $Z_{ns}$ in a window of the width given by the x-axis. High values of $Z_{ns}$ are observed for small windows around the fast/slow polymorphism, but $Z_{ns}$ decays rapidly with window size. The windows including four and five polymorphisms ($Z_{ns} = 0.82$ and $Z_{ns} = 0.78$, respectively; denoted by two asterisks in Figure 4) are significantly higher than expected (the fractions of simulations equal to or greater than the observed values were 0.04 and 0.03, respectively). The windows including three or six polymorphisms ($Z_{ns} = 0.76$, $Z_{ns} = 0.57$; denoted by one asterisk in Figure 4) are borderline significant ($P = 0.08$ for each).
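A sketch of this expanding-window procedure (a hypothetical helper, assuming `zns` from earlier, a haplotype matrix `hap`, the vector of site coordinates `positions`, and the coordinate `focal` of the fast/slow site):

```python
def expanding_window_zns(hap, positions, focal, max_width=140):
    """Recompute Z_ns each time widening the +/- k window around `focal`
    pulls in a new polymorphism (window width 2k, as in the text)."""
    results, n_last = [], 0
    for k in range(max_width + 1):
        cols = [c for c, x in enumerate(positions) if abs(x - focal) <= k]
        if len(cols) >= 2 and len(cols) != n_last:
            results.append((k, len(cols), zns(hap[:, cols])))
            n_last = len(cols)
    return results
```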
**DISCUSSION**
There is considerable interest in the patterns of genetic variability within genomic regions of low recombination (Aguade et al. 1989; Stephan and Langley 1989; Begun and Aquadro 1992; Charlesworth et al. 1993; Charlesworth 1994). The distribution of $Z_{ns}$ obtained here is based on the assumption of no recombination between sites. This assumption is also essential to other single population/single locus tests of neutrality (Watterson 1978; Tajima 1989a; Fu and Li 1993; Braverman et al. 1995; Simonsen et al. 1995). The expected evolutionary patterns under the extreme condition of no recombination provide a basis for comparison when more complicated and realistic models that include recombination are considered.
In the absence of recombination, $Z_{ns}$ is a measure of allele frequency equivalency across polymorphic sites. This measure declines in value as asymmetry among loci increases. The neutral model predicts a certain level of allele frequency asymmetry among polymorphic sites. However, when natural selection acts on a polymorphism that is closely linked to neutral sites, allele frequency asymmetries may be reduced. For this reason, higher than expected values of $Z_{ns}$ may represent a molecular signature of natural selection.
Unfortunately, relatively high values of $Z_{ns}$ are necessary to reject the neutral model (Table 1). The high variance of $Z_{ns}$ is a consequence of the stochastic nature of sample genealogies under neutrality. This also limits the power of other single population/single locus tests. While such tests may not be very powerful alone, their combined application may prove quite useful, especially if different tests are sensitive to distinct types of deviation from the neutral model (see next section). Direct studies of specific evolutionary models (e.g., Braverman et al. 1995; Simonsen et al. 1995) are required to assess the statistical power of $Z_{ns}$ relative to other tests (e.g., Tajima 1989a; Fu and Li 1993) in different biological circumstances.
**Applications:** There have been numerous surveys of restriction site variation in the yellow-achaete-scute region of *D. melanogaster* (Aguade et al. 1989; Beech and Leigh-Brown 1989; Eanes et al. 1989; Macpherson et al. 1990). Three general features of these data are (1) reduced levels of nucleotide variation relative to other genomic regions in *D. melanogaster*, (2) an excess of rare alleles at polymorphic sites, and (3) linkage disequilibrium among polymorphic sites.
*Begun* and *Aquadro* (1991) and *Martin-Campos et al.* (1992) suggest that selectively advantageous mutations have recently occurred in the *y-ac-sc* region with hitch-hiking (*sensu Maynard Smith* and *Haigh* 1974) affecting the patterns of genetic variation at linked sites. This contention is supported by two aspects of the *Martin-Campos et al.* (1992) data. The first is based on the observed allele frequencies at polymorphic sites. In the four European populations with more than one polymorphic site, there is an excess of rare alleles relative to the neutral expectation (Tajima’s $D$ is negative in each case, with statistically significant values for the Barcelona and Huelva populations).
Second, hitch-hiking reduces the level of intraspecific neutral variation by reducing $T_{cum}$, the total length of the sample genealogy. It should not affect the expected divergence among species, however (*Kimura* 1983; *Hudson et al.* 1987). The level of nucleotide variation within the *y-ac-sc* region is lower than expected under neutrality given the level of interspecific divergence between *D. melanogaster* and either *D. simulans* or *D. sechellia* (*Begun* and *Aquadro* 1991; *Martin-Campos et al.* 1992). This observation supports a hitch-hiking model.
The present study extends these analyses by considering whether or not the associations between polymorphic sites observed in the data of *Martin-Campos et al.* (1992) can be explained by mutation-drift balance among linked sites. Higher than expected values for $Z_{ns}$ were observed in the European samples from Barcelona, Huelva, and the Canary Islands (Table 3). Thus, the pattern of associations among polymorphic loci is also inconsistent with a neutral model.
These calculations of $Z_{ns}$ complement the previous application of Tajima’s test to these data [table 7 in *Martin-Campos et al.* (1992)]. For example, the rarer allele at each of the five polymorphic sites in the Huelva sample appears in only one of the 23 sequences. This yields the most negative value possible for Tajima’s $D$ (given five polymorphic sites) and suggests a sample genealogy like Figure 1, B or C. However, it is notable that all of the sequence variation in the Huelva sample is concentrated on a single sample allele [haplotype 38 in Table 4 of *Martin-Campos et al.* (1992)]. This pattern of interlocus association among polymorphisms is indicated by the significantly high value of $Z_{ns}$ for this sample and suggests that the genealogy in Figure 1B is much less likely than the alternative model allowing recombination between selected and neutral loci (Figure 1C).
Selective neutrality has also been rejected in previous analyses of the *Adh* gene of *D. melanogaster* (*Hudson et al.* 1987; *Kreitman* and *Hudson* 1991). These analyses have contrasted the amount of polymorphism within *D. melanogaster* with the amount of divergence between *D. melanogaster* and closely related species in the *Adh* region. The amount of polymorphism around site 1490 is greater than expected, which suggests a balanced polymorphism.
Linkage disequilibrium is expected among selectively neutral polymorphic sites closely linked to a balanced polymorphism and strong linkage disequilibrium is observed close to the fast/slow polymorphism in *Adh* (Kreitman 1983). However, tests demonstrating significant linkage disequilibrium between sites generally do not indicate the cause of interlocus associations (Lewontin 1995). Linkage disequilibrium is expected among closely linked sites as a result of mutation/drift balance without any contribution from selection (Ohta and Kimura 1969, 1971; Weir and Cockerham 1974; Griffiths 1981).
The substantial pattern of allele frequency equivalency across polymorphic sites near the fast/slow site in *Adh* is perhaps more informative. For example, the mutant allele has the same frequency at the sites numbered 9–13 and 15 in Figure 3. This equivalency across sites yields values of $Z_{ns}$ within small windows around site 1490 that are too large to be consistent with neutrality (Figure 4).
The rapid decay of $Z_{ns}$ values with increasing window size in Figure 4 indicates the sensitivity of this measure to recombination. This will limit the utility of $Z_{ns}$ as a test unless there is a very low rate of recombination (as in mitochondrial DNA or telomeric regions of the nuclear genome) or a large number of polymorphic sites in close proximity to each other (as in *Adh*).
With intragenic recombination, $Z_{ns}$ will be determined only partially by the sample genealogy. For this reason, it may prove useful to modify the statistic to hedge against the effects of recombination. One potential method involves contrasting $Z_{ns}$, which is sensitive to both recombination and genealogy, with another statistic that only depends on recombination. A standardized measure of linkage disequilibrium that is relatively independent of allele frequency was proposed by Lewontin (1964):
$$D'_{ij} = \frac{D_{ij}}{\text{Min} \left[ p_i(1 - p_j),\; p_j(1 - p_i) \right]}, \quad \text{if } D_{ij} > 0$$

$$D'_{ij} = \frac{D_{ij}}{\text{Min} \left[ p_i p_j,\; (1 - p_i)(1 - p_j) \right]}, \quad \text{if } D_{ij} < 0,$$
where $\text{Min} \left[ a, b \right]$ denotes the smaller of $a$ and $b$. Lewontin’s $D'$ ranges from −1 to 1 and assumes intermediate values only when all four two-locus haplotypes {00, 01, 10, 11} are present in a sample. When the two polymorphic sites are completely linked and derived from a single ancestor, there can be at most three of these haplotypes present. Thus, in the absence of recombination, $(D'_{ij})^2$ equals 1 for all comparisons between mutations and is insensitive to genealogy.
Two potential measures of interlocus association that compare the squared correlation among sites and $(D'_{ij})^2$ are as follows:

$$Z^*_{ns} = Z_{ns} + 1 - D^*_{ns}$$

and

$$Z^{**}_{ns} = \frac{Z_{ns}}{D^*_{ns}},$$

where $D^*_{ns}$ is the averaged squared value of Lewontin’s $D'$:

$$D^*_{ns} = \frac{2}{S(S - 1)} \sum_{i=1}^{S-1} \sum_{j=i+1}^{S} (D'_{ij})^2.$$
The distributions of $Z^*_{ns}$ and $Z^{**}_{ns}$ are equal to the distribution of $Z_{ns}$ when sites within the neutral sequence are completely linked because then $D^*_{ns}$ must equal 1. When intragenic recombination occurs, we expect that $Z^*_{ns}$ and $Z^{**}_{ns}$ will be greater than $Z_{ns}$. These modified statistics will be useful if the genealogical information provided by $Z_{ns}$ when there is no intragenic recombination is preserved in $Z^*_{ns}$ and $Z^{**}_{ns}$ when intragenic recombination does occur.
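These quantities are straightforward to compute alongside $Z_{ns}$; a minimal sketch, using the conventions of the earlier helpers:

```python
def d_prime_sq(x, y):
    """Squared Lewontin D' for two 0/1 site columns (equations above)."""
    pi, pj = x.mean(), y.mean()
    D = np.mean(x * y) - pi * pj
    if D > 0:
        dmax = min(pi * (1 - pj), pj * (1 - pi))
    elif D < 0:
        dmax = min(pi * pj, (1 - pi) * (1 - pj))
    else:
        return 0.0  # D = 0 implies D' = 0
    return (D / dmax) ** 2

def modified_zns(hap):
    """Z*_ns and Z**_ns: Z_ns set against the mean squared D' (D*_ns)."""
    S = hap.shape[1]
    d_star = 2.0 / (S * (S - 1)) * sum(
        d_prime_sq(hap[:, i], hap[:, j])
        for i in range(S - 1) for j in range(i + 1, S))
    z = zns(hap)
    return z + 1.0 - d_star, z / d_star
```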
Patterns of linkage disequilibria are now being used to isolate disease loci in humans (Feder et al. 1996; Little 1996). Present analyses are largely nonstatistical and do not distinguish between the possible causes for linkage disequilibria. The “sliding window” methodology developed by Hudson and Kaplan (1988) provides a statistical means to isolate selected loci based on the amount of variation within a genomic region. The present study suggests that a similar method based on the patterns of interlocus associations within a region may also prove a useful tool of genetic analysis.
This article benefitted from comments by C. Bergman, M. Wade, D. Kelly, K. F. Malloy, R. Hudson, and an anonymous reviewer, and from discussions with J. Thorne, M. Kreitman, E. Stahl, B. Charlesworth, D. Charlesworth, and M. Nordborg. The author received financial support from a National Institutes of Health (NIH) genetics training grant, a NIH National Research Service Award postdoctoral grant, and a Hughes Foundation postdoctoral award.
**LITERATURE CITED**
Aguade, M., N. Miyashita and C. H. Langley, 1989 Reduced variation in the yellow-achaete-scute region in natural populations of *Drosophila melanogaster*. Genetics 122: 607–615.
Aquadro, C. F., S. F. Desse, M. Bland, C. H. Langley and C. C. Laurie-Ahlberg, 1986 Molecular population genetics of the alcohol dehydrogenase gene region of *Drosophila melanogaster*. Genetics 114: 1165–1190.
Beech, R. N., and A. J. Leigh-Brown, 1989 Insertion-deletion variation at the yellow, achaete-scute region in two natural populations of *Drosophila melanogaster*. Genet. Res. 53: 7–15.
Begun, D. J., and C. F. Aquadro, 1991 Molecular population genetics of the distal portion of the X chromosome in Drosophila: evidence for genetic hitchhiking of the yellow-achate region. Genetics 129: 1147–1158.
Begun, D. J., and C. F. Aquadro, 1992 Levels of naturally occurring DNA polymorphism correlate with recombination rates in *D. melanogaster*. Nature 356: 519–520.
Berger, R. L., and D. D. Boos, 1994 P values maximized over a confidence set for the nuisance parameter. J. Am. Stat. Assoc. 89: 1012–1016.
Braverman, J. M., R. R. Hudson, N. L. Kaplan, C. H. Langley and W. Stephan, 1995 The hitchhiking effect on the site frequency spectrum of DNA polymorphisms. Genetics 140: 783–796.
Charlesworth, B., 1994 The effect of background selection against deleterious mutations on weakly-selected, linked variants. Genet. Res. 63: 213–227.
Charlesworth, B., M. T. Morgan and D. Charlesworth, 1993 The effect of deleterious mutations on neutral molecular variation. Genetics 134: 1289–1303.
Dubinin, N. P., N. N. Sokolov and G. G. Tinjakov, 1937 Crossover
between the genes "yellow," "achaete," and "scute." Dros. Inf. Serv. 8: 76.
EANES, W. F., J. LABATE and J. W. AJIOKA, 1989 Restriction-map variation within the yellow-achaete-scute region in five populations of *Drosophila melanogaster*. Mol. Biol. Evol. 6: 492–502.
EWENS, W. J., 1990 Population genetics theory—the past and the future, pp. 177–227 in *Mathematical and Statistical Developments of Evolutionary Theory*, edited by S. LESSARD. Kluwer Academic Publishers, Dordrecht, The Netherlands.
FEDER, J. N., A. GNIRKE, W. THOMAS, Z. TSUCHIHASHI et al., 1996 A novel MHC class I-like gene is mutated in patients with hereditary haemochromatosis. Nat. Genet. 13: 399–408.
FU, Y.-X., and W.-H. LI, 1993 Statistical tests of neutrality of mutations. Genetics 133: 693–709.
GRIFFITHS, R. C., 1981 Neutral two-locus multiple allele models with recombination. Theor. Popul. Biol. 19: 169–186.
HARTL, D. L., and A. G. CLARK, 1989 *Principles of Population Genetics*. Sinauer Associates, Sunderland, MA.
HUDSON, R. R., 1990 Gene genealogies and the coalescent process. Oxford Surv. Evol. Biol. 7: 1–44.
HUDSON, R. R., 1993 The how and why of generating gene genealogies, pp. 23–36 in *Mechanisms of Molecular Evolution*, edited by N. TAKAHATA and A. G. CLARK. Sinauer Associates, Sunderland, MA.
HUDSON, R. R., and N. L. KAPLAN, 1988 The coalescent process in models with selection and recombination. Genetics 120: 831–840.
HUDSON, R. R., M. KREITMAN and M. AGUADE, 1987 A test of neutral molecular evolution based on nucleotide data. Genetics 116: 155–159.
KAPLAN, N. L., T. DARDEN and R. R. HUDSON, 1988 The coalescent process in models with selection. Genetics 120: 819–829.
KAPLAN, N. L., R. R. HUDSON and C. H. LANGLEY, 1989 The hitchhiking effect revisited. Genetics 123: 887–899.
KIMURA, M., 1983 *The Neutral Theory of Molecular Evolution*. Cambridge University Press, Cambridge.
KINGMAN, J. F. C., 1982 The coalescent. Stochast. Proc. Appl. 13: 235–248.
KREITMAN, M., 1983 Nucleotide polymorphism at the alcohol dehydrogenase locus of *Drosophila melanogaster*. Nature 304: 412–417.
KREITMAN, M., 1990 Detecting selection at the level of DNA, pp. 204–221 in *Evolution at the Molecular Level*, edited by R. K. SELANDER, A. G. CLARK and T. S. WHITTAM. Sinauer Associates, Sunderland, MA.
KREITMAN, M., and R. R. HUDSON, 1991 Inferring the evolutionary histories of the *Adh* and *Adh-dup* loci in *Drosophila melanogaster* from patterns of polymorphism and divergence. Genetics 127: 565–582.
LAURIE, C. C., and L. F. STAM, 1994 The effect of an intronic polymorphism on alcohol dehydrogenase expression in *Drosophila melanogaster*. Genetics 138: 379–385.
LAURIE, C. C., J. T. BRIDGHAM and M. CHOUDHARY, 1991 Associations between DNA sequence variation and variation in expression of the *Adh* gene in natural populations of *Drosophila melanogaster*. Genetics 129: 489–499.
LEWONTIN, R. C., 1964 The interaction of selection and linkage. I. General considerations; heterotic models. Genetics 49: 49–67.
LEWONTIN, R. C., 1995 The detection of linkage disequilibrium in molecular sequence data. Genetics 140: 377.
LITTLE, P., 1996 Woman's meat, a man's poison. Nature 382: 494–495.
MACPHERSON, J. N., B. S. WEIR and A. J. LEIGH-BROWN, 1990 Extensive linkage disequilibrium in the *achaete-scute* complex of *Drosophila melanogaster*. Genetics 126: 121–129.
MARTIN-CAMPOS, J. M., J. M. COMERON, N. MIYASHITA and M. AGUADE, 1992 Intraspecific and interspecific variation in the *y-ac-sc* region of *Drosophila simulans* and *Drosophila melanogaster*. Genetics 130: 805–816.
MAYNARD SMITH, J., and J. HAIGH, 1974 The hitch-hiking effect of a favorable gene. Genet. Res. 23: 23–35.
OAKESHOTT, J. G., J. B. GIBSON, P. R. ANDERSON, W. R. KNIBB, D. G. ANDERSON et al., 1982 Alcohol dehydrogenase and glycerol-3-phosphate dehydrogenase clines in *Drosophila melanogaster* on different continents. Evolution 36: 86–96.
OHTA, T., and M. KIMURA, 1969 Linkage disequilibrium due to random genetic drift. Genet. Res. 13: 47–55.
OHTA, T., and M. KIMURA, 1971 Linkage disequilibrium between two segregating nucleotide sites under the steady flux of mutations in a finite population. Genetics 68: 571–580.
SIMONSEN, K. L., G. A. CHURCHILL and C. F. AQUADRO, 1995 Properties of statistical tests of neutrality for DNA polymorphism data. Genetics 141: 413–429.
SLATKIN, M., 1989 Detecting small amounts of gene flow from phylogenies of alleles. Genetics 121: 609–612.
STEPHAN, W., and C. H. LANGLEY, 1989 Molecular variation in the centromeric region of the X chromosome in three *Drosophila ananassae* populations. I. Contrasts between the vermilion and forked loci. Genetics 121: 89–99.
TAJIMA, F., 1989a Statistical method for testing the neutral mutation hypothesis by DNA polymorphism. Genetics 123: 585–595.
TAJIMA, F., 1989b The effect of change in population size on DNA polymorphism. Genetics 123: 597–601.
TAKAHATA, N., 1988 The n coalescent in two partially isolated diffusion populations. Genet. Res. 52: 213–222.
TAVARE, S., 1984 Line-of-descent and genealogical processes and their applications in population genetics models. Theor. Popul. Biol. 26: 119–164.
WATTERSON, G. A., 1975 On the number of segregating sites in genetical models without recombination. Theor. Popul. Biol. 7: 256–276.
WATTERSON, G. A., 1978 The homozygosity test of neutrality. Genetics 88: 405–417.
WEIR, B. S. and C. C. COCKERHAM, 1974 Behavior of pairs of loci in finite monoecious populations. Theor. Popul. Biol. 6: 323–354.
Communicating editor: R. R. HUDSON
Molecular Recognition of Bridged Bis(β-cyclodextrin)s Linked by Phenylenediseleno Tether on the Primary or Secondary Side with Fluorescent Dyes†
LI, Li (李莉) HE, Song (何松) LIU, Yu* (刘育)
Department of Chemistry, State Key Laboratory of Elemento-Organic Chemistry, Nankai University, Tianjin 300071, China
A novel β-cyclodextrin dimer, 2,2′-o-phenylenediseleno-bridged bis(β-cyclodextrin) (2), has been synthesized by the reaction of mono-[2-O-(p-tolylsulfonyl)]-β-cyclodextrin with poly(o-phenylenediselenide). The complexation stability constants ($K_s$) and Gibbs free energy changes ($-\Delta G^\circ$) of dimer 2 with four fluorescent dyes, that is, ammonium 8-anilino-1-naphthalenesulfonate (ANS), sodium 6-(p-toluidino)-2-naphthalenesulfonate (TNS), Acridine Red (AR) and Rhodamine B (RhB), have been determined in aqueous phosphate buffer solution (pH = 7.2, 0.1 mol·L\(^{-1}\)) at 25 °C by means of fluorescence spectroscopy. Using the present results and the previously reported corresponding data for β-cyclodextrin (1) and 6,6′-o-phenylenediseleno-bridged bis(β-cyclodextrin) (3), binding ability and molecular selectivity are compared. The comparison indicates that the bis(β-cyclodextrin)s 2 and 3 possess much higher binding abilities toward these dye molecules than the parent β-cyclodextrin 1, but that the complex stability constants for 3, linked from the primary side, are larger than those of 2, linked from the secondary side, which is attributed to the more effective cooperative binding of the two hydrophobic cavities of host 3 and the size/shape-fit relationship between host and guest. The binding constant ($K_s$) upon inclusion complexation of host 3 with AR is enhanced by a factor of 27.3 as compared with that of 1. The 2D \(^1\)H NOESY spectrum of host 2 with RhB was recorded to confirm the binding mode and to explain the relatively weak binding ability of 2.
Keywords organoselenium-bridged bis(β-cyclodextrin), molecular recognition, inclusion complexation, binding mode
Introduction
As typical molecular receptors, cyclodextrins and their derivatives can bind a series of substrates (guests) to form host-guest complexes in aqueous solution.\(^{1-3}\) Therefore, they have been extensively used in many fields of science and technology, serving as drug carriers,\(^{4,5}\) artificial enzymes\(^{6,7}\) and chemical sensors,\(^{8,9}\) etc. To extend the binding ability of mono-modified cyclodextrins, the design and synthesis of novel bridged bis(β-cyclodextrin)s have been undertaken to achieve the cooperative binding of the dual hydrophobic cavities to one guest molecule.\(^{10-12}\)
In previous works,\(^{13,14}\) we prepared a series of organoselenium-bridged bis(β-cyclodextrin)s and examined their molecular recognition behaviors with organic dyes. The results indicated that the bridged bis(β-cyclodextrin)s, possessing dual hydrophobic cavities in close vicinity, significantly enhance the original molecular binding ability of the parent β-cyclodextrin through the cooperative binding of one guest molecule in the two closely located cyclodextrin cavities. At the same time, the comparison of the complexation behavior of the 6,6′- and 2,2′-bridged bis(β-cyclodextrin)s linked by a trimethylenediseleno tether indicated that the molecular binding ability of the dimer tethered from the secondary side is stronger than that of the dimer tethered from the primary side.
In the present work, we have prepared a novel 2,2′-o-phenylenediseleno-bridged bis(β-cyclodextrin) (2) tethered by a phenylenediseleno moiety and investigated the inclusion complexation behavior of 2 and 6,6′-o-phenylenediseleno-bridged bis(β-cyclodextrin) (3) with some organic dye guests (Chart 1) by using spectrofluorometric titrations at 25 °C in aqueous phosphate buffer solution (pH = 7.2). It is of special interest to us to examine and compare the molecular binding abilities of bridged bis(β-cyclodextrin)s linked from the primary and secondary sides, respectively. The results indicate that, differing from the trimethylenediseleno-bridged bis(β-cyclodextrin)s reported previously by us,\(^{14}\) the 6,6′-bridged dimer 3 linked from the primary side by an o-phenylenediseleno tether forms more stable complexes with organic dyes than the 2,2′-bridged dimer 2 linked from the secondary side.
†Dedicated to Professor ZHOU Wei-Shan on the occasion of his 80th birthday.
• E-mail: email@example.com; Tel.: + 86-022-23503625; Fax: + 86-022-23504853
Received January 26, 2003; revised April 18, 2003; accepted May 5, 2003.
Project supported by the National Natural Science Foundation of China (Nos. 29992590-8 and 20272028), the Natural Science Fund of Tianjin (No. 013613511) and Special Fund for Doctoral Program from the Ministry of Education of China (No. 20010055001).
**Experimental**
**General procedure**
Elemental analyses were performed on a Perkin-Elmer 2400C instrument. $^1$H NMR spectra were recorded on a Varian INOVA 300 instrument at 300 MHz in D$_2$O. FT-IR and UV spectra were recorded on a Nicolet FT-IR SDX and a Shimadzu UV-2401 spectrometer, respectively. Fluorescence spectra were measured in a conventional quartz cell (10 mm × 10 mm × 45 mm) at 25 °C on a JASCO FP-750 spectrometer equipped with a temperature controller and with excitation and emission slits of 5 nm width.
**Materials**
All guest dyes, *i.e.*, ammonium 8-anilino-1-naphthalenesulfonate (ANS), sodium 6-(p-toluidinyl)-2-naphthalenesulfonate (TNS), Acridine Red (AR), and Rhodamine B (RhB), were commercially available and used without further purification. $\beta$-Cyclodextrin of reagent grade (Shanghai Reagent Works) was recrystallized twice from water and dried *in vacuo* at 95 °C for 24 h prior to use. $N,N$-Dimethylformamide (DMF) was dried over calcium hydride for 2 d and then distilled under reduced pressure prior to use. All other chemicals were of reagent grade and were used without further purification. 6,6'-o-Phenylenediseleno-bridged bis($\beta$-cyclodextrin) (3) was synthesized according to the reported procedure.$^{13}$ Disodium hydrogen phosphate and sodium dihydrogen phosphate were dissolved in distilled, deionized water to make a 0.1 mol·L$^{-1}$ phosphate buffer solution (pH = 7.2) for spectral titration.
**Synthesis**
2,2'-o-Phenylenediseleno-bridged bis($\beta$-cyclodextrin) (2) was prepared from mono-[2-O-(p-tolylsulfonyl)]-$\beta$-cyclodextrin (2-OTs-$\beta$-CD)$^{15}$ and poly($o$-phenylenediselenide)$^{16}$ according to the following procedure. A solution of poly($o$-phenylenediselenide) (0.117 g, 0.5 mmol), NaOH (0.06 g, 1.5 mmol) and NaBH$_4$ (0.06 g, 1.5 mmol) in dry ethanol (25 mL) was stirred under nitrogen at 85 °C for 15 min. When the color of the mixture disappeared, a solution of 2-OTs-$\beta$-CD (1.32 g, 1 mmol) in dry DMF (40 mL) was added dropwise to the clear solution over 1 h with magnetic stirring under N$_2$. The solution was heated to reflux for 36 h, and then the resultant mixture was evaporated under reduced pressure, leaving a yellow solid. The residue was dissolved in water, and acetone was then added to the solution to give a yellow precipitate. The crude product was purified by column chromatography over Sephadex G-25 with distilled, deionized water to give a pure sample (130 mg, 10% yield). $^1$H NMR (D$_2$O, TMS) $\delta$: 3.0—4.0 (C$_2$—C$_6$H, 84H), 4.7—4.9 (C$_1$H, 14H), 7.0—8.0 (ArH, 4H); $^{13}$C NMR (DMSO-d$_6$, TMS) $\delta$: 127.4, 101.8, 81.4, 72.9, 72.3, 72.0,
69.7, 59.8, 48.4; FT-IR (KBr) ν: 3315, 2920, 1641, 1594, 1511, 1435, 1362, 1245, 1152, 1029, 931, 848, 755, 707, 579 cm⁻¹. Anal. calcd for C₉₀H₁₄₂O₆₈Se₂·10H₂O: C 40.79, H 6.16; found C 40.60, H 6.15.
Results and discussion
Synthesis
2,2'-o-Phenylenediseleno-bridged bis(β-cyclodextrin) (2) was prepared according to Scheme 1.
Scheme 1
Spectral titrations
In the titration experiments using fluorescence spectrometry, the fluorescence intensity of the dyes (5—10 μmol·L⁻¹) gradually increased upon the addition of varying concentrations of hosts 2 and 3, while the emission peak progressively shifted to the blue. The emission peak changes of the organic dyes upon the addition of host cyclodextrins are listed in Table 1. The pronounced hypsochromic shift of the original fluorescence maximum of the guests in the presence of added 2 can be seen clearly in Table 1, i.e., from 524 nm to 490 nm for ANS, which may be attributed to the cooperative binding of the two hydrophobic cavities of 2, since the hypsochromic shift observed upon the addition of natural β-cyclodextrin is smaller (524→510 nm for ANS). As shown in Fig. 1A, the fluorescence intensity of ANS was greatly enhanced upon stepwise addition of bridged bis(β-cyclodextrin) 2, indicating that host 2 and ANS form a host-guest inclusion complex.
Assuming 1:1 stoichiometry for the inclusion complexation of guest dyes (G) with cyclodextrins (H), where the two cyclodextrin moieties in bridged bis(β-cyclodextrin) are treated as a single unit, the complexation can be expressed by Eq. (1).
\[ \text{H} + \text{G} \overset{K_s}{\rightleftharpoons} \text{H} \cdot \text{G} \tag{1} \]
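For reference, the stability constant in Eq. (1) is the usual 1:1 association constant (a standard definition, restated here rather than taken from the original text):

\[
K_s = \frac{[\text{H} \cdot \text{G}]}{[\text{H}][\text{G}]}
\]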
**Table 1** Emission peak changes of four guest dyes upon the addition of hosts 1 (β-CD) and 2

| Guest | c (μmol·L⁻¹) | Host | λₑₓ (nm) | λₑₘ (nm) |
|-------|--------------|------|----------|----------|
| ANS | 10 | None | 350 | 524 |
| | | β-CD | | 510 |
| | | 2 | | 490 |
| TNS | 10 | None | 350 | 496 |
| | | β-CD | | 483 |
| | | 2 | | 438 |
| AR | 10 | None | 490 | 561 |
| | | β-CD | | 553 |
| | | 2 | | 557 |
| RhB | 5 | None | 520 | 575 |
| | | β-CD | | 573 |
| | | 2 | | 573 |
Figure 1 (A) Fluorescence spectral changes of ANS (10.1 μmol·L⁻¹) upon the addition of bridged bis(β-cyclodextrin) 2 in phosphate buffer solution (pH = 7.2) at 25 °C; the concentration of 2 (from a to k): 0, 0.02, 0.05, 0.10, 0.14, 0.19, 0.24, 0.29, 0.33, 0.38 and 0.43 mmol·L⁻¹, respectively; excitation at 350 nm. (B) Least-squares curve-fitting analyses for the above inclusion complexation.
The stability constants (\( K_s \)) of the inclusion complexes formed can be calculated by analyzing the sequential changes in fluorescence intensity (\( \Delta I_f \)) at varying host concentrations, using a non-linear least-squares curve-fitting method according to Eq. (2),\(^{17}\) in which the negative root of the quadratic is the physically meaningful solution:
\[
\Delta I_f = \frac{a}{2} \left[ \left( [\text{H}]_0 + [\text{G}]_0 + \frac{1}{K_s} \right) - \sqrt{ \left( [\text{H}]_0 + [\text{G}]_0 + \frac{1}{K_s} \right)^2 - 4[\text{H}]_0[\text{G}]_0 } \right] \tag{2}
\]
Here \([G]_0\) and \([H]_0\) refer to the total concentrations of the organic dye and the host cyclodextrin, respectively; \(a\) is a proportionality coefficient, which may be taken as a sensitivity factor for the fluorescence change upon complexation.
For all host compounds examined, the \( \Delta I_f \) values as a function of \([H]_0\) gave excellent fits, verifying the 1:1 complex stoichiometry assumed above. A typical curve-fitting result, for the inclusion complexation of host 2 with ANS, is shown in Fig. 1B, where no serious deviations are found. In repeated measurements, the $K_s$ value was reproducible within an error of $\pm 5\%$, which corresponds to an estimated error of 0.15 kJ/mol in the free energy change of complexation ($-\Delta G^\circ$). The complex stability constants ($K_s$) and the Gibbs free energy changes ($-\Delta G^\circ$) obtained are listed in Table 2.
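The fitting procedure can be sketched as follows. This is a minimal illustration, assuming SciPy as the fitting tool and synthetic intensities generated from the Table 2 value $K_s = 909$ L·mol$^{-1}$ for the 2·ANS complex; the variable names and noise model are mine, not the authors':

```python
# Sketch: non-linear least-squares fit of Eq. (2) to titration data -> Ks.
import numpy as np
from scipy.optimize import curve_fit

G0 = 10.1e-6  # total dye (ANS) concentration, mol/L, as in Fig. 1A

def delta_if(h0, a, ks):
    """Eq. (2); the negative root is the physically meaningful branch."""
    s = h0 + G0 + 1.0 / ks
    return (a / 2.0) * (s - np.sqrt(s**2 - 4.0 * h0 * G0))

# Host concentrations of 2 from the Fig. 1A titration (mol/L).
h0 = np.array([0.02, 0.05, 0.10, 0.14, 0.19, 0.24, 0.29, 0.33, 0.38, 0.43]) * 1e-3

# Synthetic intensity changes for illustration; real values come from spectra.
rng = np.random.default_rng(0)
di = delta_if(h0, 5.0e6, 909.0) * (1 + 0.02 * rng.standard_normal(h0.size))

(a_fit, ks_fit), _ = curve_fit(delta_if, h0, di, p0=(1e6, 500.0))
minus_dG = 8.314 * 298.15 * np.log(ks_fit) / 1000.0  # -dG = RT ln Ks, kJ/mol
print(f"Ks = {ks_fit:.0f} L/mol, -dG = {minus_dG:.1f} kJ/mol")
```

With clean data the fit recovers $K_s \approx 909$ L·mol$^{-1}$ and $-\Delta G^\circ \approx 16.9$ kJ/mol, consistent with Table 2.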
**Table 2** Stability constants ($K_s$) and Gibbs free energy changes ($-\Delta G^\circ$) for the inclusion complexation of hosts (1—3) with four guest dyes at 25 °C in phosphate buffer solution (pH = 7.2)
| Guest | Host | $K_s$ (L·mol$^{-1}$) | $\lg K_s$ | $-\Delta G^\circ$ (kJ/mol) | Ref. |
|-------|------|---------------------|-----------|--------------------------|------|
| ANS | 1 | 103 | 2.01 | 11.5 | 18 |
| TNS | | 3670 | 3.56 | 20.3 | 18 |
| AR | | 2630 | 3.42 | 19.5 | 10 |
| RhB | | 4240 | 3.63 | 20.7 | 10 |
| ANS | 2 | 909 | 2.96 | 16.9 | this work |
| TNS | | 7980 | 3.90 | 22.3 | this work |
| AR | | 7600 | 3.88 | 22.2 | this work |
| RhB | | 7910 | 3.90 | 22.3 | this work |
| ANS | 3 | 1280 | 3.11 | 17.7 | 14 |
| TNS | | 23800 | 4.38 | 25.0 | 14 |
| AR | | 71800 | 4.86 | 27.7 | this work |
| RhB | | 27200 | 4.43 | 25.3 | this work |
**Molecular binding ability**
It has been demonstrated that several weak interactions, such as van der Waals and hydrophobic interactions, as well as hydrogen bonding, contribute to the inclusion complexation behavior of cyclodextrins, and that most of these interactions depend on the size/shape fit between host and guest. A bridged bis($\beta$-cyclodextrin) can therefore significantly enhance complex stability through the cooperative binding of one guest by two closely located hydrophobic cavities. As can be seen from Table 2, the binding abilities of bridged bis($\beta$-cyclodextrin)s 2 and 3 toward all guests investigated are greatly increased as compared with the parent $\beta$-cyclodextrin, and bridged bis($\beta$-cyclodextrin) 3 gives the highest enhancement factor, 27.3 for AR.
It is well known that the two water-soluble fluorescent dyes ANS and TNS barely fluoresce in aqueous solution but give intense fluorescence in a hydrophobic environment such as the cavity of a cyclodextrin; they were therefore chosen as representative guests for investigating the inclusion complexation with bis($\beta$-cyclodextrin)s. Although all hosts examined form less stable complexes with ANS than with TNS, the enhancement of binding ability produced by the cooperative binding of the cyclodextrin dimers is more remarkable for ANS than for TNS. The enhancement factors for ANS are 8.8 (2) and 12.4 (3), much higher than 2.2 (2) and 6.5 (3) for TNS. This may be attributed to the difference in shape between the linear guest TNS and the bent guest ANS. Examination of CPK molecular models indicates that, owing to steric hindrance, ANS can penetrate only partly into a single $\beta$-cyclodextrin cavity, whereas TNS can be embedded deeply into the cavity of the bridged bis($\beta$-cyclodextrin) in the longitudinal direction; the cooperative binding by the second cavity of the $\beta$-cyclodextrin dimer is therefore more effective and more evident for ANS than for TNS. These results further demonstrate that the size and shape of guest molecules are important factors in enhancing the binding ability of bridged bis($\beta$-cyclodextrin)s.
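The enhancement factors quoted here are simply ratios of the Table 2 stability constants; a small sketch (the dictionary layout is mine) makes the comparison explicit:

```python
# Enhancement factor = Ks(bis-CD host) / Ks(parent beta-CD, host 1), Table 2.
ks = {  # L/mol
    "ANS": {1: 103, 2: 909, 3: 1280},
    "TNS": {1: 3670, 2: 7980, 3: 23800},
    "AR":  {1: 2630, 2: 7600, 3: 71800},
    "RhB": {1: 4240, 2: 7910, 3: 27200},
}
for guest, k in ks.items():
    print(f"{guest}: host 2 -> {k[2] / k[1]:.1f}, host 3 -> {k[3] / k[1]:.1f}")
# ANS: 8.8 / 12.4; TNS: 2.2 / 6.5; AR: 2.9 / 27.3; RhB: 1.9 / 6.4
```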
To further investigate the difference in binding ability between the 2,2'-bridged bis($\beta$-cyclodextrin) (2) and the 6,6'-bridged bis($\beta$-cyclodextrin) (3), the stability constants of hosts 1—3 with the four guest dyes are compared in Fig. 2. It can be seen clearly that host 1 displays the selectivity sequence RhB > TNS > AR > ANS, while host 3, with the tether linked from the primary side, gives a different selectivity sequence, i.e., AR > RhB > TNS > ANS. Comparing the relative binding abilities of the bridged bis($\beta$-cyclodextrin)s, host 3, whose conformation is fixed by the rigid phenylenediseleno tether, gives the highest stability constant ($\lg K_s = 4.86$) for inclusion complexation with AR. One possible explanation for this strongest binding is that the rigid phenylenediseleno tether fixes the relative distance between the two hydrophobic cavities, which facilitates binding of a linear guest of appropriate size. Since host 3 can include AR without a large conformational change, the inclusion complexation should also be favorable from the entropic point of view.\(^{19}\)

**Fig. 2** Complex stability constants ($K_s$) for the inclusion complexation of cyclodextrins 1—3 with four guest dye molecules.
On the other hand, host 2, linked from the secondary side, and host 3, linked from the primary side, possess the same tether but display entirely different inclusion complexation behavior toward the guest molecules. As illustrated in Fig. 2, host 2 shows clearly lower binding ability toward all guests than host 3 does, and gives almost equal $K_s$ values for the guests TNS, AR and RhB.
The probable explanation for this lower molecular binding ability and selectivity is that the guest molecules penetrate into the cavities of host 2 from the larger secondary side, which makes strict size/shape fitting difficult and consequently weakens the interaction between guest and host 2. The primary side of a cyclodextrin is a better fit, in both size and shape, for a benzene ring than the secondary side; if a guest penetrates only shallowly, by a phenyl moiety, into the hydrophobic cavity from the secondary side, it cannot achieve as strong an interaction as penetration from the primary side. This view is supported by the different binding sequences of hosts 2 and 3 toward TNS and AR. As mentioned above, host 3 shows its strongest binding toward AR, whereas the highest stability constant of host 2 is obtained for complexation with TNS. Since TNS possesses a large longitudinal dimension and can thus be included into the cavity more deeply than AR, the stronger interaction of TNS with host 2, linked from the secondary side, is reasonable, although it is still weaker than that with host 3.
It is interesting to note that, as we reported previously, the 2,2'-trimethylenediseleno-bridged bis(β-cyclodextrin), with its wider openings, gives significantly higher $K_s$ values than the 6,6'-trimethylenediseleno-bridged bis(β-cyclodextrin). The present investigation, however, gives the entirely opposite result: a relatively weak binding ability for the 2,2'-bridged bis(β-cyclodextrin) 2 linked by the rigid phenylenediseleno tether. This may be attributed to the flexible trimethylenediseleno tether being able to adjust the two hydrophobic cavities to fit the size and shape of the guest, which allows guest molecules to be deeply embedded into the cavities from the secondary side of the 2,2'-bridged bis(β-cyclodextrin) and to form a stable inclusion complex.
**2D NMR spectra**
2D NMR spectroscopy is an important and effective method for investigating the interaction between host cyclodextrins and guest molecules: when two protons are located closely enough in space, an NOE cross-peak between them appears in the NOESY or ROESY spectrum. In our previous work, we confirmed the cooperative sandwich binding mode of a bis($\beta$-cyclodextrin) with RhB. To further investigate the inclusion complexation behavior and deduce the molecular recognition mechanism of fluorescent dyes by the bridged bis($\beta$-cyclodextrin), a 2D NMR experiment was performed. Fig. 3 gives the $^1$H NOESY spectrum of an equimolar mixture of 2 and RhB (5 mmol·L$^{-1}$ each), in which three clear NOE cross-peaks appear, labeled A, B and C. Peaks A arise from the interaction between the H-3 and H-5 protons of the cyclodextrin and the methyl protons of the diethylamino fragments of RhB. The cross-peaks between the H-3 and H-5 protons of the cyclodextrin and the aromatic protons of the diethylaminophenyl groups of RhB, and those between the aromatic protons of the benzenediseleno tether of host 2 and the aromatic protons of the benzoate moiety of RhB, are labeled B and C, respectively. All of these NOE signals confirm the sandwich binding mode, that is, the diethylaminophenyl groups of RhB are accommodated in the cavities from the secondary side of the $\beta$-cyclodextrins. However, peaks C are weak compared with peaks A and B, which indicates that the benzenediseleno tether of 2 contributes little to the binding of RhB. The moderate stability constants of host 2 with the guests therefore seem reasonable, consistent with the results obtained by spectral titration.

**Conclusion**
In conclusion, the molecular recognition behavior of the 2,2'-phenylenediseleno-bridged bis(β-cyclodextrin), linked from the secondary side, toward several fluorescent dyes has been compared with that of the 6,6'-phenylenediseleno-bridged bis(β-cyclodextrin), linked from the primary side, and the binding ability sequence is opposite to that in our previous report, indicating that the flexibility of the organoselenium tether makes an important contribution to the inclusion complexation. The rigid phenylenediseleno linker appears to fix the conformation of the bridged bis(β-cyclodextrin) rather than to provide a new binding site, as further demonstrated by the $^1$H NOESY spectrum. Therefore, the functional tether attached to the primary or secondary side of the two β-cyclodextrins can significantly alter the molecular binding ability, and can serve as a useful tool for designing more selective bridged bis(β-cyclodextrin)s for specific model substrates.
References
1 Szejtli, J.; Osa, T. Comprehensive Supramolecular Chemistry, Vol. 3, Eds.: Atwood, J. L.; Davies, J. E. D.; MacNicol, D. D.; Vogtle, F.; Elsevier, Oxford, U.K., 1996.
2 Connors, K. A. Chem. Rev. (Washington, D. C.) 1997, 97, 1325.
3 Rekharsky, M.; Inoue, Y. Chem. Rev. (Washington, D. C.) 1998, 98, 1875.
4 Uekama, K.; Hirayama, F.; Irie, T. Chem. Rev. (Washington, D. C.) 1998, 98, 2045.
5 Szejtli, J. Cyclodextrin Technology, Kluwer, Dordrecht, 1988.
6 Zhang, B.; Breslow, R. J. Am. Chem. Soc. 1997, 119, 1676.
7 Cao, F.; Ren, Y.; Hua, W.-Y.; Ma, K.-F.; Guo, Y.-L. Chin. J. Org. Chem. 2002, 22, 827 (in Chinese).
8 Lee, J.-Y.; Park, S.-M. J. Phys. Chem. B 1998, 102, 9940.
9 Buegler, J.; Engbersen, J. F. J.; Reinhoudt, D. N. J. Org. Chem. 1998, 63, 5339.
10 (a) Liu, Y.; Chen, Y.; Li, B.; Wada, T.; Inoue, Y. Chem. -Eur. J. 2001, 7, 2528; (b) Liu, Y.; You, C.-C. Chin. J. Chem. 2001, 19, 533.
11 Michels, J. J.; Huskens, J.; Reinhoudt, D. N. J. Am. Chem. Soc. 2002, 124, 2056.
12 Song, L.-X. Acta Chim. Sinica 2001, 59, 1201 (in Chinese).
13 Liu, Y.; Li, B.; Wada, T.; Inoue, Y. Supramol. Chem. 1999, 10, 279.
14 Liu, Y.; Chen, Y.; Wada, T.; Inoue, Y. J. Org. Chem. 1999, 64, 7781.
15 Shen, B.-J.; Tong, L.-H.; Zhang, H.-W.; Jin, D.-S. Chin. J. Org. Chem. 1991, 11, 265 (in Chinese).
16 Sandman, D. J.; Allen, G. W.; Acampora, L. A.; Stark, J. C.; Jansen, S.; Jones, M. T.; Ashwell, G. J.; Foxman, B. M. Inorg. Chem. 1987, 26, 1664.
17 Liu, Y.; Han, B.-H.; Sun, S.-X.; Wada, T.; Inoue, Y. J. Org. Chem. 1999, 64, 1487.
18 Liu, Y.; You, C.-C.; Wada, T.; Inoue, Y. Tetrahedron Lett. 2000, 41, 6869.
19 Liu, Y.; Han, B.-H.; Li, B.; Zhang, Y.-M.; Zhao, P.; Chen, Y.-T.; Wada, T.; Inoue, Y. J. Org. Chem. 1998, 63, 1444.
Is there a lexicality component in the word superiority effect?
LESLIE HENDERSON
The Hatfield Polytechnic
Hatfield AL10 9AB, England
Much attention has been lavished recently on word superiority effects in the hope that these phenomena can be made to yield information about the organization of skilled information processing in reading. In one important approach, an attempt is made to isolate and to specify those structural attributes of words that give rise to the advantage of words over random letter arrays that is found in various recognition tasks. A variety of positions have been taken on this question of the structural basis (or bases) of word superiority. We can usefully distinguish between those accounts that center upon lexicality, that is, the possession by a string of letters of an entry in a mental lexicon that is visually addressable, and those, on the other hand, that center upon the orthographic structuredness of a string. Orthographic factors can, in turn, be subdivided into visible structure, such as is represented by positional or sequential redundancy, and pronounceability. Adjudication between these positions is a matter of some theoretical importance, since it is reasonable to suppose that success in identifying the structural factors upon which word superiority depends would allow us to place strong constraints upon the possibilities in terms of underlying mechanisms.
Since the studies of tachistoscopic report by Gibson, Pick, Osser, and Hammond (1962) and of same-different judgments by Barron and Pittenger (1974), it has become clear that some aspect of orthographic structures can act as a sufficient cause of word superiority, since nonlexical pseudowords show a performance advantage over unstructured letter strings. The question of whether lexicality (real words vs. pseudowords) also exerts an effect on performance evidently remains in dispute.
A strong position on the lexicality question has recently been taken by Carr, Posner, Pollatsek, and Snyder (1979) on the basis of experiments using the same-different task. They assert that the efficiency of judgments based on visual codes is not influenced by lexicality ("familiarity," in their nomenclature). They argue: "It now appears that earlier findings of visual familiarity effects may be attributed to response biases resulting from the activation of higher-level codes sensitive to familiarity, and to the use of small sets of training stimuli that allowed subjects to induce orthographic-like rules." Furthermore, they go on to suggest that the negative results that they obtain "reconcile an inconsistent literature." Finally, they draw some very strong and general implications from their experiments, asserting that their conclusions "may lead to changes in notions of how effective various kinds of visual training are likely to be at different stages in the acquisition of reading skill." (All quotations are taken from the summary of Carr et al., 1979.)
It is the purpose of this note to examine the conclusions drawn by Carr et al. from their visual matching task. First, I shall attempt to show that their tests of a lexical familiarity effect are inadequate. Their selection of stimuli renders the tests insensitive. Aspects of their design may have minimized the efficacy of a lexical strategy. Moreover, one of their manipulations, in which various amounts of experimental familiarization are provided for nonwords, has no necessary bearing on lexical familiarity. In a second section, I consider Carr et al.'s claim to have resolved apparent inconsistencies in the previous literature. In opposition to their conclusions, I show that the previous literature on same-different judgments, together with that on tachistoscopic thresholds, requires the conclusion that lexicality is a sufficient cause of perceptual facilitation. Many studies report lexicality effects that cannot be ascribed to response bias. The inconsistencies across studies as to whether an effect of lexicality is obtained can indeed be resolved. This is achieved by a systematic account of the conditions that need to be fulfilled for an effect to be obtained.
**Testing for an Effect of Lexicality**
In this section, I advance three criticisms of the adequacy of Carr et al.'s tests for a lexicality effect. These criticisms focus, in turn, on Carr et al.'s failure to control word frequency, the restriction of their test to three-letter stimuli, and their assumption that repetition within an experiment can be treated as equivalent to lexical frequency.
In a sense, the contrast between words and pseudowords is based on word frequency, insofar as the ideal pseudoword can be regarded as a word of zero frequency. The point of this way of expressing lexicality is that it helps us to see that any adequate test of a lexicality effect requires a maximization of word frequency. It is well known that within the population of real words, frequency effects are difficult to detect at low scale values. The finding that words of medium or low frequency behave like pseudowords is therefore of very little interest. Yet the word stimuli on which Carr et al. base their main test consist of 30 items, with a median frequency of only 13 occurrences/million, as assessed by the Thorndike-Lorge (1944) G scale (these stimuli are listed in Table 1 of their paper). Furthermore, 5 of their 30 word stimuli have frequency values of less than 1 occurrence/million. In the case of the items BOP and BAM, this would appear to be because they are not, in fact, lexical words. Moreover, there already exists evidence that the control of lexical frequency is critical, since Bruder (1978) has clearly shown that, in a matching experiment in which high-frequency words show an advantage over pseudowords, low-frequency words (less than 10/million) do not. Bruder's low-frequency cut point would include 43% of the Carr et al. word stimuli.
Carr et al. employed four conditions so that a lexicality effect could be tested both by comparing words with pseudowords and by comparing meaningful, but unpronounceable, abbreviations with meaningless, unpronounceable strings. They arrived at their word stimuli by permuting the letters of their meaningful, but unpronounceable, stimuli, so that FIB was derived from FBI. This latter class of stimuli is difficult to control for frequency, owing both to the absence of systematic norms and to the restrictions of the natural set from which such items must be drawn. Despite these difficulties, we require some reassurance that the abbreviations are sufficiently familiar to serve as the basis of a test of the lexicality hypothesis. Carr et al. offer no evidence on this point. However, inspection of their stimuli suggests that several of these supposedly familiar and meaningful stimuli can be discriminated from meaningless control stimuli only after considerable deliberation. These include IRS vs. SRI, GPA vs. APG, OSU vs. SUO, UFW vs. FWU, CPA vs. APC, and WPA vs. APW. Previous investigators using these sorts of stimuli have pretested them for familiarity (e.g., Henderson & Chard, 1976), since, in the absence of reassurance that they are immediately identifiable, any failure to find an effect of the experimenter-defined lexicality manipulation is uninterpretable.
In short, the absence of any control of frequency and the fact that many of the words and almost all of the abbreviations used were of low frequency void the authors' claim to have carried out an adequate test of the lexicality hypothesis.
As a minor additional point, it can be argued that the selection of stimuli is deficient in another respect. Consistent interactions, often substantial, have been shown between familiarity effects and string length in same-different judgments (Bruder, 1978; Chambers & Forster, 1975; Eichelman, 1970; Henderson, 1974). These take the form of a diminution of the word superiority effect as letter length decreases. The three letter strings to which Carr et al. restrict their attention may therefore be an unfavorable condition for testing the effect of lexicality.
So far, we have seen that, by choosing short stimulus strings and by failing to ensure that the word strings were of high frequency, Carr et al. have accepted the null hypothesis for a lexicality effect on the basis of an insensitive test. This holds for their Experiments 1 and 2. In Experiment 3, they turn to a different manipulation, one in which massed practice with meaningless letter strings is used to induce a simulation of lexical familiarity. They show that this experimentally induced frequency effect has no influence on the efficiency with which strings are matched in the visual comparison task. This, they argue, represents further evidence against the hypothesis that "familiarity" affects the efficiency of visual matching.
What is odd about this experiment is the assumption that the property of lexicality, as embodied in the contrast of FIB with BIF or of FBI with IBF, can be treated as a factor equivalent to that arising from repeated experimental exposure to a nonword stimulus. Yet, Carr et al. discuss lexicality, and thus by implication word frequency, as if it could be simulated by manipulating the frequency of presentation of nonword stimuli in typing and matching tasks. Whether or not these two types of familiarity have equivalent effects and whether they are represented by the same structures in memory is a matter for investigation (see, for example, Scarborough, Cortese, & Scarborough, 1977). However, it cannot be right to presume this equivalence in order to use the lack of effect of one factor as evidence of the ineffectiveness of the other.
Even in studies in which experimental repetition and word frequency are found to interact, as in Scarborough et al.'s (1977) work on lexical decision latencies, the evidence does not suggest that their effects are interchangeable. It is notable that in this work the full repetition effect is induced by a single prior trial. Furthermore, the repetition effect is larger for words than for nonwords.
Moreover, if repetition of nonwords had the effect of investing them with a familiarity that was equivalent to lexical status, then repeated trials with a small set of words and nonwords should abolish any effects of actual lexical status. Yet, Henderson's (1974) lexicality effect in visual matching survived several sessions of practice with a small, fixed set of stimuli.
**Previous Evidence of a Lexicality Effect**
Carr et al. claim to have effected a reconciliation of an inconsistent previous literature on lexicality effects. In what follows, I attempt to show that the weight of previous evidence implies the existence of a lexicality effect. Furthermore, the minority of anomalous findings in which a lexicality effect has not been obtained are generally those in which the test has been insensitive due to failure to control word frequency, as is the case in Carr et al.'s own study.
Carr et al. dismiss lexicality effects, where they occur, as the products of bias in criteria for responding. Yet, it can be shown that in many previous demonstrations of a lexicality effect, criterion bias cannot be the cause. Such demonstrations can be found in studies of same-different judgments in which facilitatory effects in "same" judgments are not achieved at the cost of a deleterious effect in "different" judgments. They can also be found in studies of tachistoscopic report accuracy, to which Carr et al.'s criterion bias account does not apply. Moreover, large lexicality effects are most likely to be found when the lexicality factor is blocked, a manipulation that Carr et al. predict will diminish criterion bias as a source of a spurious lexicality effect.
Finally, I review the distinction between lexicality as a sufficient basis for the facilitation of performance and lexicality as a necessary basis. It is the former that serves as the appropriate test of the existence of a lexicality mechanism.
There exists a considerable literature that can be consulted as to the existence of lexicality effects. In Table 1, I present a summary of the evidence derived from nine previous papers in which a test of the lexicality effect is available. In six of these, the critical contrast is between words (e.g., FIB) and pseudowords (e.g., BIF). In the remaining three, the contrast is between meaningful compounds (e.g., FBI) and nonsense strings (e.g., IBF).
Considering first the latencies for "same" decisions, we find that lexical membership has a facilitatory effect in all of the 12 experiments listed. In only three of the studies does the effect fall below 50 msec. Of these, the only one to test this difference as a simple effect is the study by Barron and Henderson (1977). In their experiment, the effect was highly reliable.
Turning to the "different" decision latencies, we find less consistency. In all but one case, the lexicality effect is reduced, and in five cases, the lexicality effect is actually reversed.
Let us now consider this evidence in relation to the claim by Carr et al. that where a lexicality effect obtains, it is due simply to criterion bias, such that subjects are disposed toward responding "same" to any familiar stimuli. The expected consequence of such criterion bias is that an advantage won on "same" decisions should be lost on "different" decisions. Accordingly, a very minimal requirement of this hypothesis is that there should be a reversed familiarity effect for "different" decisions. Yet, this reversal obtains in only five of the experiments. Of these, we can confidently exclude criterion bias as a complete account of the lexicality effect in Chambers and Forster's (1975) Experiment 1, since the reversal for "different" decisions is trivial compared with the substantial positive effect on "same" decisions, and, indeed, the lexicality effect was reliable when the RTs were collapsed over type of response. In the remaining four experiments, in which a crossover of the lexicality effect occurs, it is not possible to preclude an explanation exclusively based on criterion bias, although it is notable that in each case the facilitatory effect of lexicality on "same" decisions is greater than the inhibitory effect on "different" decisions.
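The arithmetic behind this argument can be checked directly against Table 1. A minimal sketch (the data pairs are keyed from Table 1; the tallying script itself is mine, not part of the original note):

```python
# Tally the Table 1 lexicality effects (msec) against the criterion-bias
# prediction that a "same" advantage should be offset on "different" trials.
effects = [  # (study, same_RT_effect, different_RT_effect)
    ("Barron & Pittenger 1974", 55, 12),
    ("Baron 1975", 23, -10),
    ("Chambers & Forster 1975, Exp. 1", 106, -8),
    ("Chambers & Forster 1975, Exp. 2", 126, 3),
    ("Taylor et al. 1977, Exp. 2", 30, 60),
    ("Taylor et al. 1977, Exp. 3", 240, 200),
    ("Bruder 1978", 172, 58),
    ("Barron & Henderson 1977", 42, 19),
    ("Henderson 1974", 50, 37),
    ("Henderson & Chard 1976, Exp. 1", 51, -21),
    ("Henderson & Chard 1976, Exp. 2", 51, -35),
    ("Seymour & Jack 1978", 60, -45),
]
facilitated = sum(same > 0 for _, same, _ in effects)
reversed_diff = sum(diff < 0 for _, _, diff in effects)
fully_offset = sum(diff < 0 and -diff >= same for _, same, diff in effects)
print(f"'same' facilitation:  {facilitated}/12")    # 12/12
print(f"'different' reversal: {reversed_diff}/12")  # 5/12
print(f"gain fully offset:    {fully_offset}/12")   # 0/12 -> bias alone fails
```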
Taking this considerable body of evidence as a whole, it is difficult to avoid the conclusion that a lexicality effect exists that cannot be dismissed as due to criterion bias. This conclusion holds firmly for seven of the eight experiments in which words are compared with pseudowords.
### Table 1
**Summary of Previous Findings in Investigations of the Lexicality Effect in Same-Different Judgments**
| Experiment | Exp. | Number of Letters in the Stimuli | Familiarity Blocked or Randomized | Same RT | Different RT |
|------------|------|----------------------------------|-----------------------------------|---------|--------------|
| Barron & Pittenger (1974) | | 5 | Both | 55 | 12 |
| Baron (1975) | 1 | 3-6 | Randomized | 23 | -10 |
| Chambers & Forster (1975) | 1 | 4-5 | Randomized | 106 | -8 |
| Chambers & Forster (1975) | 2 | 5 | Randomized | 126 | 3 |
| Taylor et al. (1977) | 2 | 6 | Randomized | 30 | 60 |
| Taylor et al. (1977) | 3 | 5 | Blocked | 240 | 200 |
| Bruder (1978) | 3 | 3-6 | Randomized | 172 | 58 |
| Barron & Henderson (1977) | 3 | 5 | Randomized | 42 | 19 |
| Henderson (1974) | | 2-4 | Blocked | 50 | 37 |
| Henderson & Chard (1976) | 1 | 3 | Randomized | 51 | -21 |
| Henderson & Chard (1976) | 2 | 3 | Randomized | 51 | -35 |
| Seymour & Jack (1978) | 3 | 3 | Randomized | 60 | -45 |
*Note.—These data are drawn only from conditions in which same-case stimuli were compared and in which the types of stimuli were equiprobable. Where word frequency was manipulated, only high-frequency words were considered. In the Barron and Henderson (1977) experiment, matching was on the basis of initial letter only. In the Taylor et al. (1977) study, the data were estimated from graphs of the original authors' results. Reaction times are given in milliseconds.*
When abbreviations are compared with nonsense strings, the question is more open to doubt. Clearly, however, there are further questions to be answered. The perceptual effect of lexicality would be more securely established if we could specify some of the systematic variables determining the considerable variations in the magnitude of the lexicality effect. Moreover, even though the lexicality effect cannot be wholly accounted for by criterion bias, it must be conceded that facilitation is invariably reduced and occasionally reversed for "different" decisions. Before turning to these questions, however, we must examine a further source of evidence against the criterion bias account of lexicality effects.
Carr et al. include the tachistoscopic threshold task in their discussion of familiarity effects, and there seem to be at least two reasons for including this paradigm in our review of the influence of lexical status. First, it has often been remarked that there is considerable parallelism between word superiority effects in these two paradigms (e.g., Henderson, 1977). Second, it is difficult to conceive how a criterion bias of the sort postulated by Carr et al. (1979) could enter into tachistoscopic report.
The literature on lexicality effects in tachistoscopic report is summarized in Table 2. Seven papers yield a total of 11 experiments in which words were compared with pseudowords. In seven of these, a reliable facilitatory effect of lexicality was obtained. The balance of evidence therefore favors the existence of a facilitatory effect of lexicality in a task in which criterion bias of the sort postulated by Carr et al. cannot occur.
Let us now consider whether some of the variations in the magnitude of the obtained lexicality effects can be accounted for by interactions with other systematic variables. There appear to be four candidates, namely, word frequency, blocking vs. randomizing of the lexicality factor, length of the letter string, and orthographic structuredness of the pseudowords. These are reviewed, in turn, below.
Table 2
Summary of Previous Findings in Investigations of Lexicality Effect (LE) in Tachistoscopic Recognition Tasks
| Experiment | Exp. | Familiarity Blocked or Randomized | LE* |
|------------|------|-----------------------------------|-----|
| Baron & Thurston (1973) | 1 | Randomized | No |
| Baron & Thurston (1973) | 2 | Randomized | No |
| Manelis (1974) | 1 | Blocked | Yes |
| Manelis (1974) | 2 | Randomized | No |
| Manelis (1974) | 3 | Both | Yes |
| Spoehr & Smith (1975) | 1 | Randomized | Yes |
| Spoehr & Smith (1975) | 2 | Randomized | No |
| Juola, Leavitt, & Choe (1974) | 1 | Randomized | Yes |
| McClelland (1976) | 1 | Blocked | Yes |
| McClelland & Johnston (1977) | 1 | Blocked | Yes |
| Adams (1979) | 1 | Blocked | Yes |

*Yes = reliable; No = not reliable.
We have already seen that the logic of the lexicality hypothesis requires that any test of it employ word stimuli in which lexical frequency is maximized. At an empirical level, the requirement is reinforced, since Bruder (1978) and Chambers and Forster (1975) have shown that lexical frequency determines the magnitude of the word advantage in the same-different task. Similarly, McClelland and Johnston (1977) found that the word advantage in tachistoscopic report held only for high-frequency words.
Turning now to studies in which a lexicality effect was not reliably obtained or in which the obtained effect might possibly be ascribed entirely to criterion bias, we find that these are generally studies in which word frequency was not controlled. We have already seen this to be true of Carr et al.'s stimuli. This deficiency obtains equally in the studies by Baron (1975) and Baron and Thurston (1973). These included "words" with very low frequencies, such as WRITHE, COON, and BOAS, and "nonwords" which in some cases turn out to have lexical status (VOLE, BRAD, PONS) and in other cases can be assigned a meaning (SORED, BONK, CONT, CONS). The remaining studies of same-different judgment in which there is some doubt about the existence of a truly perceptual effect of lexicality are those of Henderson and Chard (1976) and Seymour and Jack (1978). In these studies, control of lexical frequency was not possible, since they depended upon the FBI/IBF sort of contrast in which the meaningful items have to be drawn from a small natural set containing few high-frequency members. Again turning to the threshold task, we find that the negative finding of Spoehr and Smith (1975) was obtained in the absence of control of word frequency.
Blocking vs. randomizing appears to exert an equally clear influence on the detectability of the perceptual lexicality effect. Eight of the experiments listed in Tables 1 and 2 utilized blocking of the lexical status factor, and in none of these is the lexicality effect in doubt. It is noteworthy that, whereas Henderson and Chard (1976) and Seymour and Jack (1978) obtained weak evidence of the lexicality effect, with reversal for "different" decisions, Henderson (1974), using the same type of FBI/IBF contrast but with blocking of the lexical factor, found a clear lexicality effect for both response types. Moreover, Manelis (1974) specifically compared blocking and randomizing within a single study and found that, with randomization of the lexicality factor, its facilitatory effect was either reduced or eliminated. In a related manipulation, Taylor, Miller, and Juola (1977) varied the relative proportions of words and pseudowords within a randomized design using the same-different task. The lexicality effect for "same" decisions appears to be unaffected by this manipulation, but the lexical facilitation of "different" decisions
increased with increases in the proportion of stimuli that were real words.
Carr et al.'s attempt to account for the vagaries of the lexicality effect consists of the assertion that lexicality effects may occur with randomized presentation but that, in such cases, they are attributable to criterion bias, whereas blocking may be expected to eliminate criterion bias and thus remove any sort of effect of lexicality. The present survey of the literature accords with the conclusion that blocked presentation reduces or eliminates criterion bias. However, from this point, our conclusions diverge sharply, since it is precisely the blocked condition that yields the most unambiguous evidence of a perceptual facilitation by lexicality (Henderson, 1974; Manelis, 1974; Taylor et al., 1977).
It is generally assumed that the effect of blocking is to allow the subject to preselect a strategy appropriate to the type of stimulus that is expected. There is, however, another possible way in which the sequencing of trials may influence performance. Many of the features of stimulus construction in these experiments conspire to rob real words of their appearance of familiarity. Often the stimuli in experiments comparing performance on words with performance on nonwords share a common orthographic structure. (For example, 83% of the words employed by Carr et al., 1979, Experiments 1, 2, and 4, had CVC structure.) Furthermore, the pseudowords are commonly generated by changing a single vowel in the words. A consequence of this is that the experimental set of stimuli repeatedly samples a very restricted zone in orthographic (visual similarity) space. When a restricted zone of orthographic space is repeatedly sampled, this may result in a level of continuous activation of the members of that zone. Such a spread of activation is frequently postulated for semantic space, and at least one recent theory extends the notion to orthographic space (Glushko, 1979). A consequence of this continuous activation may be satiation, by analogy with the semantic satiation produced by repeated presentation of a particular word, and this attenuation of lexical familiarity may be greater when the similarly structured stimuli consist of a random mixture of words and pseudowords. This effect is simulated in Table 3, from which the reader may gain some experience of the induced attenuation of lexical familiarity by reading through the list of structurally similar words and pseudowords.
Another factor that appears to interact with the word superiority effect in same-different judgments is the number of letters in the stimuli (Bruder, 1978; Chambers & Forster, 1975; Eichelman, 1970; Henderson, 1974). It is noteworthy that those studies in which an overall facilitation by lexicality is in doubt are also those that use short stimulus strings. Thus, Henderson (1974) found no effect of lexicality for two-letter abbreviations, and the studies by Carr et al. (1979), Henderson and Chard (1976), and Seymour and Jack (1978) used only three-letter stimuli. Baron (1975) used stimuli of three, four, and five letters but did not analyze letter length.
So far, we have concentrated on factors that can explain the insensitivity of some studies to a lexicality effect, but, to balance matters, it should be pointed out that in the study by Taylor et al. (1977, Experiment 3), in which the lexicality effect was extraordinarily large, the pseudowords may have been less orthographically "legal" than the words (e.g., TAGRN and SNEPD). Nevertheless, this problem is by no means a general one, since most of the studies used simple rules such as vowel substitution to transform words into pseudowords, thereby controlling for orthographic structuredness.
**Conclusions**
Carr et al. (1979) report an absence of a lexicality effect, except when it can be dismissed as due to criterion bias. They argue that these negative conclusions are consistent with the previous literature. In the foregoing analysis, I have argued that the literature on same-different judgments, as well as the literature on tachistoscopic report accuracy, supports the alternative conclusion that lexical status exerts a perceptual facilitatory effect in these tasks. Furthermore, a detailed investigation of the previous literature reveals that the magnitude of this facilitatory effect depends on three other systematic variables—word frequency, blocking vs. randomizing of the lexicality factor, and number of letters in the stimulus. Armed with this realization, we can see why Carr et al. were unable to detect evidence of the operation of a processing mechanism that makes use of lexicality, since their study employed values of all three variables that would be expected to minimize the lexicality effect.
Moreover, in two ancillary arguments, I have speculated that Carr et al.'s method of generating stimuli so that they occupy a small zone in orthographic space may lead to a satiation effect in which lexical familiarity is attenuated, and I have rejected repeated presentation of nonwords as a technique for inducing lexical familiarity.
There are several general implications that follow from this analysis. Of these, the nature of the lexicality mechanism lies outside the scope of this note (but see, for example, Henderson & Chard, 1980; McClelland & Johnston, 1977; Seymour & Jack, 1978). Also, I shall not attempt an account of the vagaries of the effect of lexicality on "different" decisions. However, one point that can briefly be made here is that the currently prevalent fascination with the word superiority effect has tended to restrict consideration of factors such as lexicality to their potential role as agents of word superiority. Yet it is likely that the word superiority effect can be sustained by a number of factors, lexical and orthographic. The word advantage is of interest in itself only insofar as it can provide existence proofs for mechanisms that utilize these various kinds of structure. Accordingly, the question of interest is whether a given factor serves as a sufficient cause, rather than a necessary cause, of word superiority. (It is for this reason that the FBI/IBF type of comparison, in which orthographic structure is absent as a factor, is so theoretically interesting, despite the practical difficulties of controlling word frequency, etc.) The moral is that if a recognition advantage is being used as a test of the existence of a lexicality mechanism, then the investigation should be designed so as to maximize the likelihood of any effect. Conversely, the absence of a lexicality effect has no certain bearing on the availability, in principle, of a lexical strategy. It merely establishes that, in the given circumstances, the performance was achieved by other strategies.
REFERENCES
Adams, M. J. Models of word recognition. *Cognitive Psychology*, 1979, 11, 133-175.
Baron, J. Successive stages in word recognition. In P. M. A. Rabbitt & S. Dornic (Eds.), *Attention and performance V*. London: Academic Press, 1975.
Baron, J., & Thurston, I. An analysis of the word superiority effect. *Cognitive Psychology*, 1973, 4, 207-228.
Barron, R. W., & Henderson, L. The effects of lexical and semantic information on same-different visual comparison of words. *Memory & Cognition*, 1977, 5, 566-579.
Barron, R. W., & Pittenger, J. B. The effect of orthographic structure and lexical meaning on same-different judgements. *Quarterly Journal of Experimental Psychology*, 1974, 26, 566-581.
Bruder, G. A. Role of visual familiarity in the word-superiority effects obtained with the simultaneous-matching task. *Journal of Experimental Psychology: Human Perception and Performance*, 1978, 4, 88-90.
Carr, T. H., Posner, M. I., Pollatsek, A., & Snyder, C. R. R. Orthography and familiarity effects in word processing. *Journal of Experimental Psychology: General*, 1979, 108, 389-414.
Chambers, S. M., & Forster, K. I. Evidence for lexical access in a simultaneous matching task. *Memory & Cognition*, 1975, 3, 549-559.
Eichelman, W. H. Familiarity effects in a simultaneous matching task. *Journal of Experimental Psychology*, 1970, 86, 275-282.
Gibson, E. J., Pick, A. D., Osser, H., & Hammond, M. The role of grapheme-phoneme correspondence in the perception of words. *American Journal of Psychology*, 1962, 75, 554-570.
Glushko, R. J. The organization and activation of orthographic knowledge in reading aloud. *Journal of Experimental Psychology: Human Perception and Performance*, 1979, 5, 674-691.
Henderson, L. A word superiority effect without orthographic assistance. *Quarterly Journal of Experimental Psychology*, 1974, 26, 301-311.
Henderson, L. Word recognition. In N. S. Sutherland (Ed.), *Tutorial essays in psychology* (Vol. 1). Hillsdale, N.J.: Erlbaum, 1977.
Henderson, L., & Chard, M. J. On the nature of the facilitation of visual comparisons by lexical membership. *Bulletin of the Psychonomic Society*, 1976, 7, 432-434.
Henderson, L., & Chard, M. J. Children's implicit knowledge of orthographic structure. In U. Frith (Ed.), *Cognitive processes in spelling*. London: Academic Press, 1980.
Juola, J. F., Leavitt, D. D., & Choe, C. S. Letter identification in word, nonword, and single-letter displays. *Bulletin of the Psychonomic Society*, 1974, 4, 278-280.
Manelis, L. The effect of meaningfulness in tachistoscopic word perception. *Perception & Psychophysics*, 1974, 16, 182-192.
McClelland, J. L. Preliminary letter identification in the perception of words and nonwords. *Journal of Experimental Psychology: Human Perception and Performance*, 1976, 2, 80-91.
McClelland, J. L., & Johnston, J. C. The role of familiar units in perception of words and nonwords. *Perception & Psychophysics*, 1977, 22, 249-261.
Novik, N. Parallel processing in a word-nonword classification task. *Journal of Experimental Psychology*, 1974, 102, 1015-1020.
Scarborough, D. L., Cortese, C., & Scarborough, H. S. Frequency and repetition effects in lexical memory. *Journal of Experimental Psychology: Human Perception and Performance*, 1977, 3, 1-17.
Seymour, P. H. K., & Jack, M. V. Effects of visual familiarity on "same" and "different" decision processes. *Quarterly Journal of Experimental Psychology*, 1978, 30, 455-470.
Spoehr, K. T., & Smith, E. E. The role of orthographic and phonotactic rules in perceiving letter patterns. *Journal of Experimental Psychology: Human Perception and Performance*, 1975, 1, 21-24.
Taylor, J. A., Miller, T. J., & Juola, J. F. Isolating visual units in the perception of words and nonwords. *Perception & Psychophysics*, 1977, 21, 377-386.
Thorndike, E. L., & Lorge, I. *The teacher's word book of 30,000 words*. New York: Bureau of Publications, Teachers College, Columbia University, 1944.
NOTE
1. The fact that Carr et al. were able to obtain reliable differences between their words and pseudowords in other tasks calling for lexical or semantic judgments does not, of course, establish that the contrast between these stimuli was adequate to test for a lexicality effect in the visual matching task. Indeed, the extraordinarily high error rates obtained in these tasks scarcely indicate that the lexicality contrast was salient. For instance, in the lexical decision task, word-affirmative responses led to a 24% error rate. Furthermore, two sorts of evidence suggest that the word-affirmative decisions required extended lexical search: the speed advantage for word affirmation compared with pseudoword rejection is anomalously slight, and there is a failure to replicate Novik's (1974) finding of slower rejection of unpronounceable stimuli when they are meaningful abbreviations.
(Received for publication March 4, 1980; accepted April 8, 1980.)
IMPERIAL GAZETTEER OF INDIA
PROVINCIAL SERIES
CENTRAL PROVINCES
SUPERINTENDENT OF GOVERNMENT PRINTING
CALCUTTA
Price Rs. 3, 1908
The tribe have a language of their own, called after them Korkū, which belongs to the Mundā family. It was returned by 88,000 persons in 1901, of whom 59,000 belonged to the Central Provinces. The number of Korkū speakers is 59 per cent. of the total of the tribe, and has greatly decreased during the last decade.
**Vindhya Hills (Ouindion of Ptolemy).**—A range of hills separating the Gangetic basin from the Deccan, and forming a well-marked chain across the centre of India. The name was formerly used in an indefinite manner to include the Sātpurā Hills south of the Narbādā, but is now restricted to the ranges north of that river. The Vindhyas do not form a range of hills in the proper geological sense of the term, that is, possessing a definite axis of elevation or lying along an anticlinal or synclinal ridge. The range to the north of the Narbādā, and its eastern continuation the Kaimur to the north of the Son valley, are merely the southern scarps of the plateau comprising the country known as Mālwā and Bundelkhand. The features of the Vindhyas are due to sub-aerial denudation, and the hills constitute a dividing line left undenuded between different drainage areas. From a geographical point of view the Vindhyan range may be regarded as extending from Jobat (22° 27' N. and 74° 35' E.) in Gujarāt on the west to Sasarām (24° 57' N. and 84° 2' E.) in the south-western corner of Bihār on the east, with a total length of nearly 700 miles. Throughout the whole length as thus defined the range constitutes the southern escarpment of a plateau. The Rājmahāl hills, extending from Sasarām to Rājmahāl and forming the northern escarpment of the Hazāribāgh highlands, cannot be correctly considered as a part of the Vindhyas.
The range commencing in Gujarāt crosses the Central India Agency from Jhābuā State in the west, and defines the southern boundary of the Saugor and Damoh Districts of the Central Provinces. From here the Kaimur branch of the range runs through Baghelkhand or Rewah and the United Provinces into Bihār. The Kaimur Hills rise like a wall to the north of the Son valley, and north of them a succession of short parallel ridges and deep ravines extends for about 50 miles. At Amarkantak the Vindhyas touch the Sātpurā Hills at the source of the Narbādā. Westward from Jubbulpore District they form the northern boundary of the valley of that river. Their appearance here is very distinctive, presenting an almost uninterrupted series of headlands with projecting promontories and receding bays like a weather-beaten coast-line. In places the
Narbada washes the base of the rocks for miles, while elsewhere they recede and are seen from the river only as a far-off outline with the plains of Bhopal or Indore spread out below them. The rocks are sandstone of a pinkish colour and lie in horizontal slabs, which commonly testify to their origin by curious ripple marks plainly formed by the lapping of water on a sandy shore. To the north of this escarpment lies the Bundelkhand or Malwa plateau, with a length of about 250 miles and a width at its broadest part of about 225 miles. The plateau is undulating and is traversed by small ranges of hills, all of which are considered to belong to the Vindhyan system.
The most northerly of these minor ranges, called the Bindhāchāl, cuts across the Jhansi, Banda, Allahabad, and Mirzapur Districts of the United Provinces, nowhere rising above 2,000 feet. The range presents the appearance of a series of plateaux, each sloping gently upward from south to north, and ending abruptly in the steep scarp which is characteristic of these hills. Many outlying isolated hills are found in these Districts standing out on the plains beyond the farthest scarp. One small hill, called Pabhosā, stands on the left bank of the Jumna, the only rock found in the Doāb. The Bhānrer or Pannā hills form the south-eastern face of the Vindhyan escarpment, and bound the south of Saugor and Damoh Districts and the north of Maihar State in continuation of the Kaimur, thus being a part of the main range. They run from north-west to south-east for about 120 miles. Their highest peak is that of Kalumar (2,544 feet). Two other branches of the range lie in Malwa, starting respectively near Bhilsa and Jhābuā with a northerly direction, and bounding the plateau to the east and west.
The general elevation of the Vindhyan range is from 1,500 to 2,000 feet, and it contains a few peaks above 3,000, none of which is of any special importance. The range forms with the Sātpurās the watershed of the centre of India, containing the sources of the Chambal, Betwā, Sonār, Dhasān, and Ken rivers, besides others of less importance. The Son and Narbadā rise at Amarkantak, where the Vindhyan and Sātpurā ranges join. The rivers generally rise near the southern escarpment and flow north and north-east.
Geologically, the hills are formed principally of great massive sandstones of varying consistency, alternating with softer flags and shales, the whole formation covering an area not greatly inferior to that of England. The range has given its name to
the Vindhyan system of geological nomenclature. Over a great part of the Malwa plateau the sandstone is covered by the overflowing Deccan trap, while from Ganugargh fort in Bhopal to near Jobat the range itself is of basaltic formation, and the last 60 miles to the west from Jobat to near Jambhughora consist of metamorphic rocks. In the north the underlying gneiss is exposed in a great gulf-like expanse. Economically, the Vindhyan rocks are of considerable value, the sandstone being an excellent building material which has been extensively used for centuries; the Buddhist topes of Sanchi and Bharhut, the eleventh-century temples of Khajuraho, the fifteenth-century palaces of Gwalior, and numerous large forts at all important positions on the plateau having been constructed of this material. At Nagod and other places limestone is found in some quantity, the pretty coralline variety, extracted from the Bagh cretaceous beds, having been extensively employed in the palaces and tombs at Mandu; and at Panna, in the conglomerate which underlies the shales, diamonds are met with, though none of any great value is known to have been extracted. Manganese, iron, and asbestos are also found in various parts of the range. The lofty flat-topped hills and bold scarps which are such a marked feature of this range were early recognized as ideal sites for fortresses; and, besides the historical strongholds of Gwalior, Narwar, Chanderi, Mandu, Ajaigarh, and Bandogarh, the hills are studded with the ruined castles of marauding Girasia and Bundela chiefs.
The hills are generally covered with a stunted forest growth of the species found in the dry forests of Central India. Teak occurs only in patches and is of small size, while the forests are generally noticeable for their poverty in valuable timbers.
The term Vindhya in Sanskrit signifies 'a hunter'; and the range occupies a considerable place in the mythology of India, as the demarcating line between the Madhya Desa or 'middle land' of the Sanskrit invaders and the non-Aryan Deccan. The Vindhyas are personified in Sanskrit literature, where they appear as a jealous monarch, the rival of king Himalaya, who called upon the sun to revolve round his throne as he did round the peak Meru. When the sun refused, the mountain began to rear its head to obstruct that luminary, and to tower above Himalaya and Meru. The gods invoked the aid of Agastya, the spiritual guide of Vindhya. This sage called upon the Vindhya mountain to bow down before him, and afford him an easy passage to and from the South. It obeyed and
Agastya passed over. But he never returned, and so the mountain remains to the present day in its humbled condition, far inferior to the Himālaya. Another legend is that when Lakshmana, the brother of Rāma, was wounded in Ceylon by the king of the demons, he wished for the leaves of a plant which grew in the Himālayas to apply them to his wound. Hanūman, the monkey-god, was sent to get it, and not knowing which plant it was, he took up a part of the Himālayas and carried them to Ceylon. He happened to drop a portion of his load on the way, and from this the Vindhyan Hills were formed.
Kaimur Hills.—The eastern portion of the Vindhyan range, commencing near Katangi in the Jubbulpore District of the Central Provinces (23° 26' N. and 79° 48' E.). It runs a little north of east for more than 300 miles to Sasaram in Bihār (24° 57' N. and 84° 2' E.). The range, after traversing the north of Jubbulpore District and the south-east of Maihar State, turns to the east and runs through Rewah territory, separating the valleys of the Son and Tons rivers, and continues into Mirzapur District of the United Provinces and Shāhabād in Bengal. Its maximum width is 50 miles. In the Central Provinces the appearance of the range is very distinctive. The rock formation is metamorphic and the strata have been upheaved into an almost vertical position, giving the range the appearance of a sharp ridge. In places the range almost disappears, being marked only by a low rocky chain, and in this portion it never rises more than a few hundred feet above the plain. The range enters Central India at Jukehi in Maihar State (23° 29' N. and 80° 27' E.), and runs for 150 miles in a north-easterly direction, forming the northern wall of the Son valley and overhanging the river in a long bold scarp of sandstone rock, from which near Govindgarh a branch turns off to the north-west. The range here attains an elevation of a little over 2,000 feet. In Mirzapur the height of the range decreases in the centre to rise again to over 2,000 feet at the rock of Bijaigarh with its ancient fort. Interesting relics of prehistoric man have been found in the caves and rock-shelters of the hills here, in the form of rude drawings and stone implements. In Shāhabād District the summit of the hills consists of a series of saucer-shaped valleys, each a few miles in diameter, containing a deposit of rich vegetable mould in the centre and producing the finest crops. The general height of the plateau is here 1,500 feet above sea-level. The sides are precipitous, but there are several
passes, some of which are practicable for beasts of burden. The ruined fort of Rohtās is situated on these hills. The rocks throughout consist principally of sandstones and shales.
Sātpurās (or Satpurās).—A range of hills in the centre of India. The name, which is modern, originally belonged only to the hills which divide the Narbādā and Tāpti valleys in Nimār (Central Provinces), and which were styled the sātputra or ‘seven sons’ of the Vindhyan mountains. Another derivation is from sātpura (‘sevenfolds’), referring to the numerous parallel ridges of the range. The term Sātpurās is now, however, customarily applied to the whole range which, commencing at Amarkantak in Rewah, Central India (22° 41' N. and 81° 48' E.), runs south of the Narbādā river nearly down to the western coast. The Sātpurās are sometimes, but incorrectly, included under the Vindhya range. Taking Amarkantak as the eastern boundary, the Sātpurās extend from east to west for about 600 miles, and in their greatest width, where they stretch down to Berār, exceed 100 miles from north to south. The shape of the range is almost triangular. From Amarkantak an outer ridge (see Maikala) runs south-west for about 100 miles to the Sāletekri hills in Bālāghāt District (Central Provinces), thus forming as it were the head of the range which, shrinking as it proceeds westward from a broad table-land to two parallel ridges, ends, so far as the Central Provinces are concerned, at the famous hill fortress of Asīrgarh. Beyond this point the Rājpipla hills, which separate the valley of the Narbādā from that of the Tāpti, complete the chain as far as the Western Ghāts. On the table-land comprised between the northern and southern faces of the range are situated the Central Provinces Districts of Mandlā, part of Bālāghāt, Seoni, Chhindwāra, and Betūl.
The superficial stratum covering the main Sātpurā range is trappean, but in parts of the Central Provinces crystalline rocks are uppermost, and over the Pachmarhi hills sandstone is also uncovered. In Mandlā the higher peaks are capped with laterite. On the north and south the approaches to the Sātpurās are marked as far west as Turanmāl by low lines of foot-hills. These are succeeded by the steep slopes leading up to the summit of the plateau, traversed in all directions by narrow deep ravines, hollowed out by the action of the streams and rivers, and covered throughout their extent with forest.
Portions of the Sātpurā plateau consist, as in Mandlā and the north of Chhindwāra, of a rugged mass of hills hurled together by volcanic action. But the greater part is an undulating table-land, a succession of bare stony ridges and narrow fertile valleys, into which the soil has been deposited by drainage. In a few level tracts, as in the valleys of the Māchna and Sāmpna near Betūl, and the open plain between Seoni and Chhindwāra, there are extensive areas of productive land. Scattered over the plateau, isolated flat-topped hills rise abruptly from the plain. The scenery of the northern and southern hills, as observed from the roads which traverse them, is of remarkable beauty. The drainage of the Sātpurās is carried off on the north by the Narbadā, and on the south by the Waingangā, Wardhā, and Tāpti, all of which have their source in these hills.
The highest peaks are contained in the northern range, rising abruptly from the valley of the Narbadā, and generally sloping down to the plateau, but towards the west the southern range has the greater elevation. Another noticeable feature is a number of small table-lands lying among the hills at a greater height than the bulk of the plateau. Of these Pachmarhī (3,539 feet) and Chikalda in Berār (3,664 feet) have been formed into hill stations: while Raigarh (2,200 feet) in Bālaghāt District and Khāmla in Betūl (3,800 feet) are famous grazing and breeding grounds for cattle. Dhūpgarh (4,454 feet) is the highest point on the range, and there are a few others of over 4,000 feet. Among the peaks that rise from 3,000 to 3,800 feet above sea-level, the grandest is Turanmāl (Bombay Presidency), a long, rather narrow, table-land 3,300 feet above the sea and about 16 square miles in area. West of this the mountainous land presents a wall-like appearance towards both the Narbadā on the north and the Tāpti on the south. On the eastern side the Tāsīnd Vali (Central India) commands a magnificent view of the surrounding country. The general height of the plateau is about 2,000 feet.
The hills and slopes are clothed with forest extending over some thousands of square miles; but much of this is of little value, owing to unrestricted fellings prior to the adoption of a system of conservancy, and to the shifting cultivation practised by the aboriginal tribes, which led to patches being annually cleared and burnt down. The most valuable forests are those of sāl (Shorea robusta) on the eastern hills, and teak on the west.
The Sātpurā Hills have formed in the past a refuge for aboriginal or Dravidian tribes driven out of the plains by the advance of Hindu civilization. Here they retired, and occupied the stony and barren slopes which the new settlers, with the rich lowlands at their disposal, disdained to cultivate; and here they still rear their light rains crops of millets which are scarcely more than grass, barely tickling the soil with the plough, and eking out a scanty subsistence with the roots and fruits of the forests, and the pursuit of game. The Baigās, the wildest of these tribes, have even now scarcely attained to the rudiments of cultivation, but the Gonds, the Korkūs, and the Bhīls have made some progress by contact with their Hindu neighbours.
The open plateau has for two or three centuries been peopled by Hindu immigrants; but it is only in the last fifty years that travelling has been rendered safe and easy, by the construction of metalled roads winding up the steep passes, and enabling wheeled traffic to pass over the heavy land of the valleys. Till then such trade as existed was conducted by nomad Banjārās on pack-bullocks. The first railway across the Sātpurā plateau, a narrow-gauge extension of the Bengal-Nāgpur line from Gondiā to Jubbulpore, has recently been opened. The Great Indian Peninsula Railway, from Bombay to Jubbulpore, runs through a breach in the range just east of Asirgarh, while the Bombay-Agra road crosses farther to the west.
Maikala (or Mekala).—A range of hills in the Central Provinces and Central India, lying between 21° 11' and 22° 40' N. and 80° 46' and 81° 46' E. It is the connecting link between the great hill systems of the Vindhyas and Sātpurās, forming respectively the northern and southern walls of the Narbādā valley. Starting in the Khairāgarh State of the Central Provinces, the range runs in a general south-easterly direction for the first 46 miles in British territory, and then entering the Sohāgpur pargana of Rewah State, terminates 84 miles farther at Amarkantak, one of the most sacred places in India, where the source of the Narbādā river is situated. Unlike the two great ranges which it connects, the Maikala forms a broad plateau of 880 square miles in extent, mostly forest country inhabited by Gonds. The elevation of the range does not ordinarily exceed 2,000 feet, but the Lāpha hill, which is a detached peak belonging to it, rises to 3,500 feet. The range is best known for the magnificent forests of sāl (Shorea robusta) which clothe its heights in many places. These are mainly situated in zamindāri estates or those of Feudatory chiefs and hence are not subject to any strict system of conservation, and have been much damaged by indiscriminate fellings. The hills are mentioned in ancient Hindu literature as the place of Maikala Rishi's penance, though Vyāsa, Bhrigu, Agastya, and other sages are also credited with having
meditated in the forests. Their greatest claim to sanctity lies, however, in the presence upon them of the sources of the Narbadā and Son rivers. The Mārkandeya Purāṇa relates how, when Siva called successively on all the mountains of India to find a home for the Narbadā, only Maikala offered to receive her, thus gaining undying fame; and hence the Narbadā is often called Maikala-Kanyā or 'daughter of Maikala.' The Mahānādi and Johillā, as well as many minor streams, also have their sources in these hills. Local tradition relates that in the fourth and fifth centuries A.D., during the Gupta rule, this plateau was highly populated; and the Rāmāyana and the Purāṇas mention the Mekhalās as a tribe of the Vindhya range, the former work placing them next the Utkalas or people of Orissa. The Rewah State has lately begun to open up the plateau. Iron ore is met with in some quantity, and is still worked at about twenty villages to supply the local demand.
Sonār.—A river in the Central Provinces, the centre of the drainage system of the Vindhyān plateau comprising the Districts of Saugor and Damoh, with a northward course to the Jumna. It rises in the low hills in the south-west of Saugor (23° 22' N. and 78° 37' E.), and flowing in a north-easterly direction through that District and Damoh, joins the Ken in Bundelkhand, a short distance beyond the boundary of Damoh. Of its total course of 116 miles, all but the last four miles are within the Central Provinces. The river does not attain to any great breadth and flows in a deep channel, its bed being usually stony. It is not navigable and no use is made of its waters for irrigation. The valley of the Sonār lying in the south of Saugor and the centre of Damoh is composed of fertile black soil formed from the detritus of volcanic rock. The principal tributaries of the Sonār are the Dehār joining it at Rehli, the Gadheri at Garhākotā, the Bewas near Narsinghgarh, the Koprā near Sitānagar, and the Beārma just beyond the Damoh border. Rehli, Garhākotā, Hattā, and Narsinghgarh are the most important places situated on its banks. The Indian Midland Railway (Bina-Katni branch) crosses the river between the stations of Pathariā and Aslāna.
Son (Sanskrit Suvarna or 'gold'; also called Hiranya-Vāha or Hiranya-Vāhu; the Sonos of Arrian; also identified with the Erannoboas of Arrian).—A large river of Northern India, which, flowing from the Amarkantak highlands (22° 42' N., 82° 4' E.), first north and then east, joins the Ganges 10 miles above Dinapore, after a course of about 487 miles.
The Son rises near the Narbadā at Amarkantak in the Maikala range, the hill on which its nominal source is located being called Son-bhadra or more commonly Son-mundā. It possesses great sanctity, the performance of sandhyā on its banks ensuring absolution and the attainment of heaven even to the slayer of a Brāhman. Legends about the stream are numerous, one of the most picturesque assigning the origin of the Son and Narbada to two tears dropped by Brāhma, one on either side of the Amarkantak range. The Son is frequently mentioned in Hindu literature, in the Rāmāyanas of Vālmīki and Tulsi Dās, the Bhagwat, and other works.
Soon after leaving its source, the Son falls in a cascade over the edge of the Amarkantak plateau amid the most picturesque surroundings, and flows through the Bilāspur District of the Central Provinces till it enters Rewah State at 23° 6' N. and 81° 59' E. From this point till it leaves the Central India Agency after a course of 288 miles, the stream flows through a maze of valley and hill, for the most part in a narrow rocky channel, but expanding in favourable spots into magnificent deep broad reaches locally called dahār, the favourite resorts of the fisher caste. Following at first a northerly course, near its junction with the Mahānadi river at Sarsī it meets the bold scarp of the Kaimur range and is turned into a north-easterly direction, finally leaving the Agency 5 miles east of Deorā village. In Central India three other affluents of importance are received: one on the left bank, the Johilla, which likewise rises at Amarkantak and joins it at Barwālū village; and two which join it on the right bank, the Banās at 23° 17' N. and 81° 31' E., and the Gopat near Bardī. In the United Provinces the Son flows for about 55 miles from west to east across Mirzapur District, in a deep valley never more than 8 or 9 miles broad, often narrowing to a gorge, and receives from the south two tributaries, the Rihand and the Kanhar. During the dry season it is shallow but rapid, varying in breadth from 60 to 100 yards, and is easily fordable. The Son enters Bengal in 24° 31' N. and 83° 24' E., and flows in a north-westerly direction, separating the District of Shahabad from Palamau, Gayā, and Patna till, after a course within Bengal of 144 miles, it falls into the Ganges in 25° 40' N. and 84° 59' E.
So far as regards navigation, the Son is mainly used for floating down large rafts of bamboos and a little timber. During the rainy season, native boats of large tonnage occasionally proceed for a short distance up stream; but navigation is then rendered dangerous by the extraordinary violence of the flood, and
throughout the rest of the year becomes impossible, owing to the small depth of water. The great irrigation system known as the Son Canals is served by this river, the water being distributed west to Shahabad and east to Gayā and Patna from a dam constructed at Dehri. In the lower portion of its course the Son is marked by several striking characteristics. Its bed is enormously wide, in some places stretching for three miles from bank to bank. During the greater part of the year this broad channel is merely a waste of drifting sand, with an insignificant stream that is nearly everywhere fordable. The discharge of water at this time is estimated to fall as low as 620 cubic feet per second. But in the rainy season, and especially just after a storm has burst on the plateau of Central India, the river rises with incredible rapidity. The entire rainfall of an area of about 21,300 square miles requires to find an outlet by this channel, which frequently proves unable to carry off the total flood discharge, calculated at 830,000 cubic feet per second. These heavy floods are of short duration, seldom lasting for more than four days; but in recent years they have wrought much destruction in the low-lying plains of Shahabad. Near the site of the great dam at Dehri the Son is crossed by the grand trunk road on a stone causeway; and lower down, near Koelwār, the East Indian Railway has been carried across on a lattice-girder bridge. This bridge, begun for a single line of rails in 1855, and finally completed for a double line in 1870, has a total length of 4,199 feet from back to back of the abutments.
The Son possesses historical interest as being probably identical with the Erannoboas of Greek geographers, which is thought to be a corruption of Hiranya-vāhu, or ‘the golden-armed’ (a title of Siva), a name which the Son anciently bore. The old town of Pālibothrā or Pātaliputra, corresponding to the modern Patna, was situated at the confluence of the Erannoboas and the Ganges; and, in addition, we know that the junction of the Son with the Ganges has been gradually receding westwards. Old channels of the Son have been found between Bankipore and Dinapore, and even below the present site of Patna. In the Bengal Atlas of 1772 the junction is marked near Maner, and it would seem to have been at the same spot in the seventeenth century; it is now about 10 miles higher up the Ganges.
Narbādā (Narmada; the Namados of Ptolemy; Namnadios of the Periplus).—One of the most important rivers of India. It rises on the summit of the plateau of Amarkantak (q.v.)
(22° 41' N. and 81° 48' E.), at the north-eastern apex of the Satpurā range, in Rewah (Central India), and enters the sea below Broach in the Bombay Presidency after a total course of 801 miles.
The river issues from a small tank 3,000 feet above the sea, surrounded by a group of temples and guarded by an isolated colony of priests, and falls over a basaltic cliff in a descent of 80 feet. After a course of about 40 miles through the State of Rewah, it enters the Central Provinces and winds circuitously through the rugged hills of Mandlā, pursuing a westerly course until it flows under the walls of the ruined palace of Rāmnagar. From Rāmnagar to Mandlā town it forms, for some 15 miles, a deep reach of blue water, unbroken by rocks and clothed on either bank by forest. The river then turns north in a narrow loop towards Jubbulpore, close to which town, after a fall of some 30 feet called the dhuāndhāra or 'fall of mist,' it flows for two miles in a narrow channel which it has carved out for itself through rocks of marble and basalt, its width here being only about 20 yards. Emerging from this channel, which is well known as the 'Marble Rocks,' and flowing west, it enters the fertile basin of alluvial land forming the Narbadā valley, which lies between the Vindhyan and Satpurā Hills, and extends for 200 miles from Jubbulpore to Handiā, with a width of about 20 miles to the south of the river. The Vindhyan Hills rise almost sheer from the northern bank along most of the valley, the bed of the river at this part of its course being the boundary between the Central Provinces and Central India (principally the States of Bhopāl and Indore). Here the Narbadā passes Hoshangābād and the old Muhammadan towns of Handiā and Nimāwar. The banks in this part of its valley are about 40 feet high, and the fall in its course between Jubbulpore and Hoshangābād is 340 feet. Below Handiā the hills again approach the river on both sides and are clothed with dense forests, the favourite haunts of the Pindāris and other robbers of former days. At Mandhār, 25 miles below Handiā, there is a fall of 40 feet, and another of the same height occurs at Punāsa. The bed of the river in its whole length within the Central Provinces is one sheet of basalt, seldom exceeding 150 yards in absolute width, and, at intervals of every few miles, upheaved into ridges which cross it diagonally, and behind which deep pools are formed. Emerging from the hills beyond Māndhāta on the borders of the Central Provinces, the Narbadā now enters a second open alluvial basin, flowing through Central India (principally the
State of Indore) for nearly 100 miles. The hills are here well away from the river, the Satpurās being 40 miles to the south and the Vindhyas about 16 miles to the north. In this part of its course the river passes the town of Maheshwar, the old capital of the Holkar family, where its northern bank is studded with temples, palaces, and bathing ghāts, many of them built by the famous Ahalya Bai whose mausoleum is here. The last 170 miles of the river's course are in the Bombay Presidency, where it first separates the States of Baroda and Rajpipla and then meanders through the fertile District of Broach. Below Broach City it gradually widens into an estuary, whose shores are 17 miles apart as it joins the Gulf of Cambay.
The drainage area of the Narbadā, estimated at about 36,000 square miles, is principally to the south and comprises the northern portion of the Satpurā plateau and the valley Districts. The principal tributaries are the Banjār in Mandlā, the Sher and Shakkar in Narsinghpur, and the Tawā, Ganjāl, and Chhotā Tawā in Hoshangābād District. The only important tributary to the north is the Hiran, which flows in beneath the Vindhyan Hills, in Jubbulpore District. Most of these rivers have a short and precipitous course from the hills, and fill with extraordinary rapidity in the rains, producing similarly rapid floods in the Narbadā itself. Owing to this and to its rocky course, the Narbadā is useless for navigation except by country boats between August and February, save in the last part of its course, where it is navigable by vessels of 70 tons burden up to the city of Broach, 30 miles from its mouth. It is crossed by railway bridges below Jubbulpore, at Hoshangābād, and at Mortakka. The influence of the tides reaches to a point 55 miles from the sea. The height of the banks throughout the greater part of its course makes the river useless for irrigation.
The Narbadā, which is referred to as the Rewā (probably from the Sanskrit root rew, 'to hop,' owing to the leaping of the stream down its rocky bed) in the Mahābhārata and Rāmāyana, is said to have sprung from the body of Siva and is one of the most sacred rivers of India, local devotees placing it above the Ganges, on the ground that whereas it is necessary to bathe in the Ganges for forgiveness of sins, this object is attained by mere contemplation of the Narbadā. 'As wood is cut by a saw (says a Hindu proverb), so at the sight of the holy Narbadā do a man's sins fall away.' Gangā herself, so local legend avers, must dip in the Narbadā once a year. She
comes in the form of a coal-black cow, but returns home quite white, free from all sin. The Ganges, moreover, was (according to the Rewā Purāna) to have lost its purifying virtues in the year 1895, though this fact has not yet impaired its reputation for sanctity. At numerous places on the course of the Narbadā, and especially at spots where it is joined by another river, are groups of temples, tended by Narmdeo Brāhmans, the special priests of the river, where annual gatherings of pilgrims take place. The most celebrated of these are Bherāghāt, Barmhān, and Onkār Māndhāta in the Central Provinces, and Barwānt in Central India, where the Narbadā is joined by the Kapilā. All of these are connected by legends with saints and heroes of Hindu mythology, and the description of the whole course of the Narbadā, and of all these places and their history, is contained in a sacred poem of 14,000 verses (the Narmadā Khandā), which, however, has been adjudged to be of somewhat recent origin. Every year 300 or more pilgrims start to perform the pradakshina of the Narbadā, that is, to walk from its mouth at Broach to its source at Amarkantak on one side, and back on the other, a performance of the highest religious efficacy. The most sacred spots on the lower course of the river are Suklatirtha, where stands an old banyan-tree that bears the name of the saint Kabīr, and the site of Rājā Bali's horse-sacrifice near Broach.
The Narbadā is commonly considered to form the boundary between Hindustān and the Deccan, the reckoning of the Hindu year differing on either side of it. The Marāthās spoke of it as 'the river,' and considered that when they had crossed it they were in a foreign country. During the Mutiny the Narbadā practically marked the southern limit of the insurrection. North of it the British temporarily lost control of the country, while to the south, in spite of isolated disturbances, their authority was maintained. Hence, when, in 1858, Tāntia Topi executed his daring raid across the river, the utmost apprehension was excited, as it was feared that on the appearance of the representative of the Peshwā, the recently annexed Nāgpur territories would rise in revolt. These fears, however, proved to be unfounded and the country remained tranquil.
Tāpti.—One of the great rivers of Western India. The name is derived from tāp, 'heat,' and the Tāpti is said by the Brāhmans to have been created by the sun to protect himself from his own warmth. The Tāpti is believed to rise in the sacred tank of Multai (multāpī, 'the source of the Tāpti') on the Sātpurā plateau, but its real source is two miles distant
(21° 48' N. and 78° 15' E.). It flows in a westerly direction through the Betul District of the Central Provinces, at first traversing an open and partially cultivated plain, and then plunging into a rocky gorge of the Satpurā Hills between the Kālibhit range in Nimār (Central Provinces) and Chikalda in Berār. Its bed here is rocky, overhung by steep banks, and bordered by forests. At a distance of 120 miles from its source it enters the Nimār District of the Central Provinces, and for 30 miles more is still confined in a comparatively narrow valley. A few miles above Burhānpur the valley opens out, the Satpurā Hills receding north and south, and opposite that town the river valley has become a fine rich basin of alluvial soil about 20 miles wide. In the centre of this tract the Tāpti flows between the towns of Burhānpur and Zainābād, and then passes into the Khāndesh District of Bombay. In its upper valley are several basins of exceedingly rich soil; but they have long been covered by forest, and it is only lately that the process of clearing them for cultivation has been undertaken.
Shortly after entering Khāndesh the Tāpti receives on the left bank the Pūrna from the hills of Berār, and then flows for about 150 miles through a broad and fertile valley, bounded on the north by the Satpurās and on the south by the Satmālas. Farther on the hills close in, and the river descends through wild and wooded country for about 80 miles, after which it sweeps southward to the sea through the alluvial plain of Surat, and becomes a tidal river for the last 30 miles of its course. The banks (30 to 60 feet) are too high for irrigation, while the bed is crossed at several places by ridges of rock, so that the river is navigable for only about 20 miles from the sea. The Tāpti runs so near the foot of the Satpurās that its tributaries on the right bank are small; but on the left bank, after its junction with the Pūrna, it receives through the Girnā (150 miles long) the drainage of the hills of Bāglān, and through the Borī, the Pānjhra, and the Borai, that of the northern buttress of the Western Ghāts. The waters of the Girnā and the Pānjhra are dammed up in several places and used for irrigation. On the lower course of the Tāpti floods are not uncommon, and have at times done much damage to the city of Surat. The river is crossed at Bhusāwal by the Jubbulpore branch of the Great Indian Peninsula Railway, at Savalda by the Bombay-Agra road, and at Surat by the Bombay, Baroda, and Central India Railway. The Tāpti has a local reputation for sanctity, the chief tīrthas or holy places
being Chāngdeo, at the confluence with the Pūrṇa, and Bodhān above Surat. The fort of Thālner and the city of Surat are the places of most historic note on its course, the total length of which is 436 miles. The port of Suvali (Swally), famous in early European commerce with India, and the scene of a famous sea-fight between the British and the Portuguese, lay at the mouth of the river, but is now deserted, its approaches having been silted up.
Wardhā River.—A river in the Central Provinces, which rises in the Multai plateau of Betul District, at 21° 50' N. and 78° 24' E., about 70 miles north-west of Nagpur city, and flowing south and south-east, separates the Nagpur, Wardhā, and Chānda Districts of the Central Provinces from Amraoti and Yeotmāl of Berār and Sirpur Tandur of the Nizām's Dominions. After a course of 290 miles from its source, the Wardhā meets the Waingangā at Seoni in Chānda District, and the united stream under the name of the Prānhita flows on to join the Godāvari. The bed of the Wardhā, from its source to its junction with the Pengangā at Jugād in the south-east corner of Yeotmāl, is deep and rocky, changing from a swift torrent in the monsoon months to a succession of nearly stagnant pools in the summer. For the last hundred miles of its course below Chānda, it flows in a clear channel broken only by a barrier of rocks commencing above the confluence of the Waingangā and extending into the Prānhita. The project entertained in the years 1866-71 for rendering the Godāvari and Wardhā fit for navigation included the excavation of a channel through this expanse of rock, which was known as the Third Barrier. The scheme proved impracticable; and except that timber is sometimes floated from the Ahiri forests in the monsoon months, no use is now made of the river for navigation. The area drained by the Wardhā includes Wardhā District, with parts of Nagpur and Chānda in the Central Provinces and the eastern and southern portion of Berār. The principal tributaries of the Wardhā are the Wunnā and Erāi from the east, and the Bembla and Pengangā which drain the southern and eastern portions of the plain of Berār. The banks of the river are in several places picturesquely crowned by small temples and tombs, and numerous ruined forts in the background recall the wild period of Marāthā wars and Pindāri raids. Kundalpur (Dewalwāra) on the Berār bank opposite to Wardhā District is believed to represent the site of a buried city, celebrated in the Bhāgavat as the metropolis of the kingdom of Vidarbha (Berār). A large religious fair is
|
THE CORPORATION OF THE TOWNSHIP OF MANITOUWADGE
BY-LAW NO. 2017 - 08
Being a By-Law to regulate and prohibit
the discharge of firearms within the
Township of Manitouwadge
WHEREAS the Council of the Corporation of the Township of Manitouwadge, in the interest of public safety, deems it necessary to prohibit the discharge of firearms within certain areas of the Township of Manitouwadge;
AND WHEREAS under Section 8 of the Municipal Act, 2001, S.O. 2001, c. 25, as amended, the powers of a municipality shall be interpreted broadly to enable it to govern its affairs as it considers appropriate and to enhance the municipality's ability to respond to municipal issues;
AND WHEREAS under Section 9 of the Municipal Act, 2001, S.O. 2001, c. 25, as amended, a municipality has the capacity, rights, powers and privileges of a natural person for the purpose of exercising its authority under this or any other Act;
AND WHEREAS under Section 10 (1) of the Municipal Act, 2001, S.O. 2001, c. 25, as amended, a single-tier municipality may provide any service or thing that the municipality considers necessary or desirable for the public;
AND WHEREAS Section 10 (2) 6 of the Municipal Act, 2001, S.O. 2001, c. 25, as amended, provides that a municipality may pass by-laws with respect to matters of health, safety and well-being of persons;
AND WHEREAS Section 119 of the Municipal Act, 2001, S.O. 2001, c. 25, as amended, provides that a municipality may for the purpose of public safety, prohibit or regulate the discharge of guns or other firearms, air-guns, spring-guns, cross-bows, long bows or any other weapon;
AND WHEREAS Section 425(1) of the Municipal Act, 2001, S.O. 2001, c. 25, as amended, provides that a municipality may pass by-laws providing that a person who contravenes a by-law of the municipality passed under the Act is guilty of an offence;
AND WHEREAS Section 429(1) of the Municipal Act, 2001, S.O. 2001, c. 25, as amended, provides that a municipality may establish a system of fines for offences under a by-law of the municipality passed under the Act;
AND WHEREAS Section 429 (2) (d) of the Municipal Act, 2001, S.O. 2001, c. 25 as amended, provides that a municipality may establish special fines in addition to the regular fine for an offence which are designed to eliminate or reduce any economic advantage or gain from contravening the by-law.
NOW THEREFORE the Council of The Corporation of the Township of Manitouwadge hereby enacts the following as a by-law:
1. That Council adopts a by-law to regulate and control the discharge of firearms within certain areas of the Township of Manitouwadge identified as Schedule "A", attached hereto and forming part of this by-law;
2. That all by-laws respecting firearms and the discharge of firearms enacted by the Corporation of the Township of Manitouwadge (more specifically By-law 2004-28 and By-law 94-39) are hereby repealed.
3. That the Clerk of the Corporation of the Township of Manitouwadge is hereby authorized to make minor modifications or corrections of a grammatical or typographical nature to the by-law and schedule, after the passage of this by-law, where such modifications or corrections do not alter the intent of the by-law.
4. That the special fine in addition to the set fine to eliminate or reduce any economic advantage or gain from contravening the by-law will be:
In the case of a Moose: $2,000.00;
In the case of a Black Bear: $2,000.00;
In the case of a Timber Wolf or Coyote: $1,000.00;
In the case of a Ruffed Grouse: $300.00.
5. That this By-law shall come into force and take effect on the date of its final passing.
READ A 1st and 2nd TIME this 22nd day of February, 2017, and READ A THIRD TIME AND FINALLY ENACTED this 22nd day of February, 2017.
Mayor Andy Major
Margaret Hartling, CAO/Clerk/Treasurer
## INDEX
### PART 1 – GENERAL PROVISIONS
| SECTION | TITLE |
|---------|-------|
| 1.1 | Short Title |
| 1.2 | Scope |
| 1.3 | Enforcement |
| 1.4 | Conflicts with other By-law |
### PART 2 – DEFINITIONS
| SECTION | TITLE |
|---------|-------|
| 2.1 | Agent of the Municipality |
| 2.2 | Bear Technician |
| 2.3 | By-Law Enforcement Officer |
| 2.4 | Conservation Officer |
| 2.5 | Council |
| 2.6 | Firearm |
| 2.7 | Loaded Firearm |
| 2.8 | Municipality |
| 2.9 | Person |
| 2.10 | Police Officer |
| 2.11 | Prohibited Area |
| 2.12 | Prohibited Zone |
| 2.13 | Provincial Offences Act |
| 2.14 | Radially |
| 2.15 | Township of Manitouwadge |
### PART 3 – PROHIBITION
| SECTION | TITLE |
|---------|-------|
| 3.1 | Discharge Firearm in Prohibited Zone |
| 3.2 | Loaded Firearm in Prohibited Zone |
| 3.3 | Discharge Firearm in Prohibited Area |
| 3.4 | Loaded Firearm in Prohibited Area |
### PART 4 – EXEMPTIONS
| SECTION | TITLE |
|---------|-------|
| 4.1 | Police Officer, Conservation Officer, By-Law Enforcement Officer, Agent of the Municipality, Bear Technician |
| 4.2 | Trapper |
| 4.3 | Indoor Archery |
PART 1
GENERAL PROVISIONS
SECTION
1.1 Short Title
This By-law shall be cited as the “Firearms By-Law”.
1.2 Scope
The provisions of this By-law shall apply to all property in the Township of Manitouwadge within the prohibited zone and where permanent or temporary signs prohibiting firearms or their discharge are erected.
1.3 Enforcement
This By-law shall be enforced by a By-Law Enforcement Officer, and may be enforced by a Police Officer or Conservation Officer.
1.4 Conflicts with other By-Law
Where a provision of this By-law conflicts with a provision of another by-law in force in the Municipality, the provision that establishes the higher standard in terms of protecting the health, safety and welfare of the general public and the environmental well-being of the Municipality shall prevail to the extent of the conflict.
PART 2
DEFINITIONS
Definitions of words, phrases and terms used in this By-Law that are not included in the list of definitions in this section shall have the meanings which are commonly assigned to them in the context in which they are used in this By-Law.
The words, phrases and terms defined in this section have the following meaning for the purposes of this By-Law.
SECTION
2.1 "Agent of the Municipality" means an employee of the Township of Manitouwadge in the performance of his or her duties.
2.2 "Bear Technician" means an employee of the Ministry of Natural Resources Bear Wise Program in the performance of his or her duties.
2.3 "By-Law Enforcement Officer" means the person or persons duly appointed by Council as Municipal Law Enforcement Officers for the purpose of enforcing regulatory by-laws of the Municipality.
2.4 "Conservation Officer" means a Conservation Officer under the Fish and Wildlife Conservation Act
2.5 "Council" means the Council of The Corporation of the Township of Manitouwadge
2.6 "Firearm" means any type of gun or other firearms, shotgun, rifle, air-gun, pellet gun, spring gun, cross-bow, long-bow, compound bow or any class or type thereof or anything that can be adapted for use as a firearm.
2.7 "Loaded Firearm" for the purposes of this By-law, a firearm is loaded if,
(a) in the case of a firearm that uses shells or cartridges, there is an unfired shell or cartridge in the chamber or in a magazine attached to the firearm;
(b) in the case of a percussion muzzle-loading gun, there is a charge of powder and a projectile in the barrel and a percussion cap on the nipple;
(c) in the case of a muzzle-loading gun to which clause (b) does not apply, there is a charge of powder and a projectile in the barrel and the vent is unplugged;
(d) in the case of a gun to which clauses (a), (b) and (c) do not apply, there is a projectile in the gun or in a magazine attached to the gun;
(e) in the case of a crossbow, the bow is cocked and there is a bolt in the crossbow; and
(f) in the case of a bow other than a crossbow, the bow is strung and an arrow is nocked. 1997, c. 41, s. 1 (7).
2.8 "Municipality" means the same as the Township of Manitouwadge
2.9 "Person" means an individual, firm or corporation.
2.10 "Police Officer" means a member of the Ontario Provincial Police Service.
2.11 "Prohibited Area" means an area where temporary or permanent signs have been erected prohibiting loaded firearms or their discharge in any portion of the Municipality not contained within section 2.12 for the purpose of ensuring the safety of people.
2.12 "Prohibited Zone" means the areas in the Municipality delineated as such in Appendix "2" attached hereto and forming part of this Schedule and defined as all that portion of the Corporation of the Township of Manitouwadge comprising parts of the geographic township of Leslie, Gertrude, Mapledoram and Gemmel, contained within a circle which is radially distant:
(a) three (3) kilometers from a concentric point defined as the centre of the traffic circle located at the intersection of Manitou Roads East and West, Ohsweken Road and Mississauga Drive, within the Manitouwadge Townsite.
(b) two (2) kilometers from a concentric point defined as the rotating beacon light atop the Manitouwadge Municipal Airport Terminal located approximately four (4) kilometers south of the Manitouwadge Townsite Boundaries off Highway 614.
(c) two (2) kilometers from a concentric point defined as the center of the entrance gate at the Manitouwadge Municipal Landfill site located approximately seven (7) kilometers west on the Caramat Industrial Road from its intersection with Highway 614. (An illustrative point-in-zone sketch follows.)
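Since the Prohibited Zone is defined purely by radial distances from three landmarks, checking whether a location falls inside it reduces to a point-in-circle test. The sketch below is an illustration only, not part of the by-law: the latitude/longitude centre values are hypothetical placeholders, because the by-law fixes its centres by physical landmarks (the traffic circle, the airport beacon, the landfill gate) rather than by coordinates, so real values would need to come from a survey or a municipal GIS layer.

```python
import math

# Centres and radii of the three prohibited circles from section 2.12.
# NOTE: the lat/lon values are hypothetical placeholders; the by-law
# locates each centre by a physical landmark, not by coordinates.
ZONES = [
    {"name": "townsite traffic circle", "lat": 49.1217, "lon": -85.8403, "radius_km": 3.0},  # s. 2.12(a)
    {"name": "airport terminal beacon", "lat": 49.0839, "lon": -85.8606, "radius_km": 2.0},  # s. 2.12(b)
    {"name": "landfill entrance gate", "lat": 49.1306, "lon": -85.9311, "radius_km": 2.0},   # s. 2.12(c)
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle ('radial') distance in kilometres between two points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_prohibited_zone(lat, lon):
    """Return the name of the circle containing the point, or None."""
    for zone in ZONES:
        if haversine_km(lat, lon, zone["lat"], zone["lon"]) <= zone["radius_km"]:
            return zone["name"]
    return None

if __name__ == "__main__":
    # A point near the placeholder townsite centre tests as prohibited.
    print(in_prohibited_zone(49.12, -85.84))  # -> "townsite traffic circle"
```

Nothing here alters the legal definition; enforcement distances would be measured radially from the physical landmarks named in clauses (a) to (c).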
2.13 "Provincial Offences Act" means the *Provincial Offences Act*, R.S.O. 1990, c. P.33, as amended.
2.14 "Radially" means made in the direction of a radius; going from the center outward.
2.15 "Township of Manitouwadge" means The Corporation of the Township of Manitouwadge.
PART 3
PROHIBITIONS
SECTION
3.1 No person shall discharge a firearm within the prohibited zone.
3.2 No person shall possess a loaded firearm within the prohibited zone.
3.3 No person shall discharge a firearm within a prohibited area.
3.4 No person shall possess a loaded firearm within a prohibited area.
PART 4
EXEMPTIONS
SECTION
4.1 The provisions of this By-law shall not apply to a Police Officer, Conservation Officer, By-Law Enforcement Officer, Agent of the Municipality or a Bear Technician during the lawful execution of their duties.
4.2 The provisions of this By-law shall not apply to a licensed trapper while engaged in trapping activities on property over which they have legal authority.
4.3 The provisions of this By-law shall not apply to persons engaged in indoor archery.
PART 5
NOTICE REQUIREMENT
SECTION
5. In accordance with Section 5(1) of the Trespass to Property Act, R.S.O. 1990, c. T.21, notice of the prohibitions set out in sections 3.1, 3.2, 3.3 and 3.4 and defined in sections 2.11 and 2.12 is hereby given. In addition, the Chief Administrative Officer of the Township of Manitouwadge shall post Appendix 2 of Schedule "A" on the Township of Manitouwadge website and in a conspicuous location in the Municipal Office, the Post Office, the Service Ontario Office and the Manitouwadge Detachment of the Ontario Provincial Police.
PART 6
PENALTIES
SECTION
6. "Every person who contravenes any provision of this by-law is guilty of an offence and on conviction is liable to a fine as provided for in the Provincial Offences Act, R.S.O. 1990,c.P.33".
PART 7
VALIDITY
SECTION
7. If any section, clause, or provision of this By-law is for any reason declared by a court of competent jurisdiction to be invalid, the same shall not affect the validity of the By-law as a whole or any part thereof, other than the section, clause or provision so declared to be invalid; and it is hereby declared to be the intention that all remaining sections, clauses or provisions of this By-law shall remain in full force and effect until repealed, notwithstanding that one or more provisions thereof shall have been declared to be invalid.
| Item | COLUMN 1 Short form wording | COLUMN 2 Provision creating or defining offence | COLUMN 3 Set fine |
|------|----------------------------|-----------------------------------------------|------------------|
| 1 | Discharge a firearm within prohibited zone | Sch. A, section 3.1 | $300.00 |
| 2 | Possess a loaded firearm within prohibited zone | Sch. A, section 3.2 | $200.00 |
| 3 | Discharge a firearm within prohibited area | Sch. A, section 3.3 | $300.00 |
| 4 | Possess a loaded firearm within prohibited area | Sch. A, section 3.4 | $200.00 |
Note: The general penalty provision for the offences listed above is Schedule “A”, Section 6 of By-law No. 2017-08, a certified copy of which has been filed.
THE CORPORATION OF THE TOWNSHIP OF MANITOUWADGE
Appendix “2” OF Schedule “A”
TO FIREARMS BY-LAW NO. 2017-08
PROHIBITED ZONE
[Map: the Prohibited Firearms Zone, with legend entries for highways, primary, municipal and operational roads, the former railway, rivers, lakes and the township boundary; radial distances of 3 km (townsite traffic circle) and 2 km (airport beacon and landfill gate) are marked.]
|
The present invention provides a training apparatus and method for improving the stance of a golfer throughout the golfer's swing. The apparatus comprises a shaft pivotally connected to a base so that the shaft pivots about a pivot point along a plane to increase or decrease the angle formed by the base, the shaft, and the pivot point. A golfer stands on the base and straddles the shaft so that the leading foot is behind the shaft nearest the pivot point, and the non-leading leg is in front of the shaft thus preventing an over-rotation of a golfer's hips during the performance of a back swing and keeping the golfer's arms in front of the golfer's chest so that the golf club does not become trapped behind the golfer.
1 Claim, 4 Drawing Sheets
FIG. 1
FIG. 2
FIG. 3
FIG. 4
FIG. 5
GOLF SWING TRAINING APPARATUS
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of the filing of U.S. Provisional Patent Application Ser. No. 60/610,335, entitled “Golf Swing Training Aid”, filed on Sep. 16, 2004, and the specification of that application is incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention (Technical Field)
The present invention relates to a training device for improving a golfer’s swing.
2. Description of Related Art
Note that the following discussion refers to a number of publications by author(s) and year of publication, and that due to recent publication dates certain publications are not to be considered as prior art vis-a-vis the present invention. Discussion of such publications herein is given for more complete background and is not to be construed as an admission that such publications are prior art for patentability determination purposes.
In golf, a player’s stance is important to ensure an optimal swing. The golfer’s stance through the swing is so important that considerable attention is given to the stance both during initial training and throughout the golfer’s development.
Although there are numerous devices available to improve a golfer’s swing, there remains a need to improve the swing by focusing on the golfer’s stance throughout the swinging motion.
BRIEF SUMMARY OF THE INVENTION
The present invention provides a golf training apparatus and method. An embodiment of the apparatus comprises a substantially flat base, a shaft pivotally connected from an end of the shaft to a pivot point at the base to permit movement of the shaft about the pivot point along a plane for varying an angle defined by a longitudinal axis of the base and a longitudinal axis of the shaft, the shaft having a length along the longitudinal axis of the shaft of at least approximately 14 inches, wherein the shaft is placeable against a front of a leading leg of a golfer proximal to the pivot point and against a back of a non-leading leg of the golfer while the golfer stands on the base.
The base and shaft are of sufficient length to accommodate a golfer’s stance including, but not limited to, a length along the longitudinal axis of the base of from between approximately 14 and 40 inches, and wherein the shaft has a length along the longitudinal axis of the shaft of from between approximately 14 and 40 inches. The shaft preferably comprises an average diameter of at least 0.3 inches. The base preferably comprises an average width of from between approximately 3 and 5 inches. Preferably, the length of the shaft is adjustable. The shaft preferably comprises a padding material at an area of the shaft proximal to the pivot point.
In an embodiment, the apparatus further comprises a structure at the pivot point to prevent the shaft from pivoting along a horizontal plane.
In another embodiment, the apparatus further comprises a column connected from an end to the base, and in which the pivot point is located.
In another embodiment, the pivot point is located in the column at a distance of from between approximately 1.5 and 4.0 inches from the base.
Another embodiment provides a method comprising providing a substantially flat base, pivotally connecting a shaft having a length along the longitudinal axis of the shaft of at least approximately 14 inches from an end of the shaft to a pivot point at an end of the base to permit movement of the shaft about the pivot point along a plane for varying an angle defined by a longitudinal axis of the base, a longitudinal axis of the shaft, and the pivot point, placing the shaft against a front of a leading leg of a golfer proximal to the pivot point and against a back of a non-leading leg of the golfer as the golfer stands on the base, and preventing over-rotation of the golfer’s hips as the golfer executes a back-swing.
Other objects, advantages and novel features, and further scope of applicability of the present invention will be set forth in part in the detailed description to follow, taken in conjunction with the accompanying drawings, and in part will become apparent to those skilled in the art upon examination of the following, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
The accompanying drawings, which are incorporated into, and form a part of, the specification, illustrate one or more embodiments of the present invention and, together with the description, serve to explain the principles of the invention. The drawings are only for the purpose of illustrating one or more preferred embodiments of the invention and are not to be construed as limiting the invention. In the drawings:
FIG. 1 is a front view of an embodiment of the present invention as used by a golfer prior to taking a backswing;
FIG. 2 is a front view of the embodiment shown in FIG. 1 with the golfer taking a backswing;
FIG. 3 is a side view of the embodiment of FIG. 1;
FIG. 4 is a top view of the column to which the shaft and base attach; and
FIG. 5 is a front view of the column of FIG. 3.
DETAILED DESCRIPTION OF THE INVENTION
The present invention provides a training apparatus and method for improving the stance of a golfer throughout the golfer’s swing. As used herein, “a” and “an” mean one or more.
An embodiment of the present invention provides an apparatus comprising a shaft pivotally connected to a base. The base is preferably substantially flat. The connection is such that the shaft pivots about a pivot point along a plane to increase or decrease the angle formed by the longitudinal axis of the base and the longitudinal axis of the shaft. In use, a golfer stands on the base and places an end of the shaft, proximal to the pivot, in front of the leading leg (i.e., the leg toward the golf ball’s anticipated direction of flight) and the opposite end behind the non-leading leg. The means or structure of connection between the shaft and the base preferably restricts or eliminates lateral movement of the shaft as a golfer rotates to swing a golf club.
The golf training apparatus/aid of the present invention prevents an over-rotation of a golfer’s hips during the performance of a back swing. In preventing an over-rotation of the hips, the invention creates a wider gap between the golfer’s hip turn and the golfer’s shoulder turn. This results in an improved motion that creates more distance through a better coil: the more a golfer can keep his/her hips from turning while simultaneously allowing the shoulders to turn as much as possible and as needed, the greater the gap and the more powerful the coil. In addition to preventing a golfer’s hips from over-rotating, the invention keeps the golfer’s arms in front of the golfer’s chest so that the golf club does not become trapped behind the golfer, which often produces off-line shots. Keeping the golfer’s arms in front of the chest produces more solid, straighter shots.
Further, an over-rotation of the hips can create what is referred to as a “reverse pivot”. A reverse pivot results when the leg that is being restricted straightens and the upper body moves toward the target during the back swing (which leg is restricted depends on whether the golfer is taking a right or left-handed stance). This reverse pivot motion creates an over-the-top move or motion of the hands. Compensating in this manner is a common flaw in the swing of many golfers. The present invention helps to prevent the defects noted above in golfers’ swings and thus helps to accomplish a more reliable and powerful golf swing.
Turning now to the figures, which describe an embodiment of the present invention, FIG. 1 shows apparatus 30 comprising base 32 attached (in a preferably fixed manner) to column 38. Shaft 34 is pivotally connected to column 38 so that the angle between shaft 34 and base 32 is adjustable to fit golfers of different dimensions. Also, the length of shaft 34 is adjustable through any means known in the art such as, for example, telescoping means. It should be noted that an important feature of the invention is that shaft 34 is pivotally attached to base 32 and that any means known in the art for accomplishing that relationship may be utilized. In the preferred embodiment described herein, column 38 is provided as an interface for that connected relationship. Also, the connected relationship between shaft 34 and base 32 is accomplished by any means or fastener known in the art that, while allowing shaft 34 to pivot, preferably holds shaft 34 in a given position unless moved by the user from that position. An embodiment providing for such a connection is described herein in reference to FIGS. 3 and 4. Also, it is understood that the present invention can be carried out so that base 32 is integral to, or is, the ground.
Therefore, column 38 is an example of a connecting structure that provides a point of pivot such as pivot point 41 for moving shaft 34 about pivot point 41 along one plane thereby varying the angle defined by longitudinal axis 22 of base 32 and longitudinal axis 24 of shaft 34. The embodiment therefore prevents lateral, or side-to-side, movement of shaft 34. Although column 38 is an example of such a structure, any connecting structure known in the art may be utilized in accordance with the present invention. Such a connecting structure may be attached, or be integral, to either base 32 or shaft 34. Such a connecting structure may, for example, simply comprise apertures (not shown) in an end of base 32 and an end of shaft 34 for inserting a pin (not shown) through the apertures to pivotally connect base 32 to shaft 34.
In practice, either left leg 62 or right leg 64 of golfer 60 is placed on base 32 behind, and against, shaft 34 nearest column 38. The choice of which leg to place in that position depends on whether golfer 60 will take a left-handed or a right-handed stance. In FIGS. 1 and 2, golfer 60 is shown placing left leg 62 near column 38 to take a right-handed swing of golf club 70 against golf ball 72. As such, left leg 62 is the “leading leg” in FIGS. 1 and 2, and it is understood that this is the leg facing toward the direction golf ball 72 will travel and that the opposite leg is the “non-leading leg”. Pad 36 is preferably disposed on shaft 34 near pivot point 41 to cushion shaft 34 against leg 62 (or leg 64). Right leg 64 (i.e., the non-leading leg) is positioned so that shaft 34 is pressed against leg 64 behind the knee of leg 64.
FIG. 3 shows in detail an embodiment in which shaft 34 is inserted into, and pivotally attached to, column 38. Column 38 is preferably fixed onto base 32. Base 32 may have anchors 44 and 46 to help secure base 32 to the ground (ground not shown). Any means known in the art may be used to secure base 32 to the ground, and an example is provided in FIG. 3 wherein anchors 44 and 46 are fixed, or removably attached, to base 32. In another embodiment, no pins are used, but as described below, the dimensions of base 32 provide for stability during use.
Any means known in the art may be utilized to attach shaft 34 to column 38. In one embodiment, as shown in FIG. 4, a pin such as screw 48 is utilized and spacers 50, 50' are disposed between side walls 80, 82 and shaft 34. Screw 48 is inserted through hole 40 (shown in FIG. 3) in wall 82 of column 38, through another hole (not shown) in opposite wall 80, and through connector 52 which is attached to shaft 34. FIG. 5 shows opening 42 in wall 84 of column 38 through which shaft 34 is inserted. Opening 42 is of sufficient length to allow the pivoting movement of shaft 34 when the angle between shaft 34 and base 32 is adjusted.
Spacers 50, 50' may be made of any material known in the art suitable for use as spacers such as, for example, nylon. Pad 36 may be made of any material known in the art such as, for example, a foam material.
Base 32, anchors 44 and 46, column 38, and screw 48 may be made of any rigid material known in the art including, but not limited to, metal such as steel or aluminum. Base 32 is of sufficient length to allow golfer 60 to place legs 62 and 64 within the span of the length of base 32, and shaft 34 is of sufficient length to span the distance between legs 62 and 64. Shaft 34 also may be made of any material known in the art that is strong enough to constrain golfer 60 in the desired stance. Some flexibility may, but need not, be provided to help the upper body (not shown) of golfer 60 to fully rotate while preventing the hips (not shown) of golfer 60 from over-rotating; thus, apparatus 30 may function like a spring.
Shaft 34 may be made of any rigid material such as, but not limited to, a metal such as tubular stainless steel.
To provide for stability, and as an alternative to using anchors, base 32 comprises a sufficient area to provide a stable platform on which the golfer may stand. The area may therefore be large to provide such a stable platform, or, preferably, the dimensions of base 32 are such that the golfer’s foot straddles a width of base 32 so that a front and a rear of the golfer’s foot makes contact with the ground.
An embodiment exemplified by the figures (but without the use of anchors) provides, therefore, the following: (1) a base between approximately 14 and 40 inches long and between approximately 3 and 5 inches wide; (2) a shaft between approximately 14 and 40 inches long and with an average diameter of between approximately 0.3 and 2 inches; (3) a foam pad disposed at the end of the shaft proximal the pivot point; (4) a column connected to the base with a width of between approximately 0.8x0.8 and 1.5x1.5 inches and a length (height) of between approximately 2.0
and 5.0 inches; (5) a screw hole of between approximately 1.0 and 3.0 inches in diameter, the center of which is located from between approximately 1.0 and 4.5 inches from the base; (6) an opening for receiving the shaft into the column of between approximately 0.3 and 2.1 inches wide and between approximately 0.5 and 2.0 inches in length, the bottom of the opening being located between approximately 0.5 and 4.0 inches from the base; (7) spacers within the column of between approximately 0.1 and 0.5 inches in diameter; and (8) a screw connecting the shaft to the column.
Therefore, apparatus 30 helps increase the gap or measured difference between the rotation of the upper body and the hips of golfer 60. Ultimately, the purpose and result is that apparatus 30 helps to keep the shaft of club 70 on the correct plane during the backswing and to remain on plane through the downswing to produce a straighter ball 72 during flight.
In typical use, shown in FIGS. 1 and 2, golfer 60 places base 32 on the ground and pulls shaft 34 so that it pivots up to approximately a 45° angle. Golfer 60 then steps onto base 32 with foot 63 near pivot point 41 so that the end of shaft 34 proximal to pivot point 41 is firmly up against shin 61. Golfer 60 places foot 65 on the part of base 32 distal to pivot point 41 so that shaft 34 fits behind leg 64, behind the knee. Golfer 60 then takes golf club 70 back and apparatus 30 helps golfer 60 to make a better shoulder turn and keep the turn of hips 66 restricted (i.e., prevent over-rotation) so that as the backswing is completed, golfer 60 will be in the correct position to let the arms drop naturally. As club 70 comes down, it will be in an effective path to ball 72 so that the swing results in a square club face at impact and a solid hit.
EXAMPLE
An apparatus in accordance with the description provided herein was constructed and used successfully as follows (a consistency check of these dimensions against the ranges recited above is sketched after this list):
1. The spacers were approximately 0.250 inches in diameter and made of nylon.
2. The screw was a cap screw with nut and its dimensions were \( \frac{1}{4} \times 20 \).
3. The shaft was of approximately 0.370 inches in diameter and approximately 35.0 inches in length.
4. The width of the column was approximately 1.0 x 1.0 inches, and the length was approximately 3.5 inches.
5. The center of the screw hole was approximately 2.125 inches above the base and approximately 0.45 inches from the wall of the column furthest from the shaft. The hole’s diameter was approximately 0.25 inch.
6. The opening for receiving the shaft into the column was approximately 0.375 inch wide and approximately 0.89 inch in length. The bottom of the opening was positioned approximately 1.96 inches from the base, and the top was positioned approximately 2.85 inches from the base.
7. A foam pad was disposed at the bottom (portion connected to the column) portion of the shaft to cushion the golfer’s shin.
8. The length of the base was approximately 30 inches, and the width of the base was approximately 4 inches.
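The example build above sits inside every range recited earlier in this description. A minimal sketch verifying that consistency (Python; the check and all names are our illustration, not part of the disclosure; the screw-hole diameter is omitted because its recited range is stated differently from the example):

```python
# Hypothetical consistency check: confirm the example build (inches) falls
# within the approximate ranges recited in the description above.
RANGES = {                         # (low, high), inches
    "base_length": (14, 40),
    "base_width": (3, 5),
    "shaft_length": (14, 40),
    "shaft_diameter": (0.3, 2),
    "column_height": (2.0, 5.0),
    "opening_width": (0.3, 2.1),
    "opening_length": (0.5, 2.0),
    "spacer_diameter": (0.1, 0.5),
}
EXAMPLE = {                        # dimensions from the worked example
    "base_length": 30, "base_width": 4,
    "shaft_length": 35.0, "shaft_diameter": 0.370,
    "column_height": 3.5,
    "opening_width": 0.375, "opening_length": 0.89,
    "spacer_diameter": 0.250,
}
for name, value in EXAMPLE.items():
    low, high = RANGES[name]
    assert low <= value <= high, f"{name}={value} outside [{low}, {high}]"
print("example build is within all recited ranges")
```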
The preceding examples can be repeated with similar success by substituting the generically or specifically described compositions, biomaterials, devices and/or operating conditions of this invention for those used in the preceding examples.
Although the invention has been described in detail with particular reference to these preferred embodiments, other embodiments can achieve the same results. Variations and modifications of the present invention will be obvious to those skilled in the art and it is intended to cover in the appended claims all such modifications and equivalents. The entire disclosures of all references, applications, patents, and publications cited above are hereby incorporated by reference.
What is claimed is:
1. A golf training method comprising:
providing a substantially flat base;
pivotally connecting an end of a shaft to a pivot point at an end of the base to permit movement of the shaft about the pivot point along a plane for varying an angle defined by the longitudinal axis of the base, the longitudinal axis of the shaft, and the pivot point;
placing the shaft against a front of a leading leg of a golfer proximal to the pivot point and against a back of a non-leading leg of the golfer as the golfer stands on the base; and
the shaft preventing over-rotation of the golfer's hips as the golfer executes a back-swing.
* * * * *
TRAIN MOOSE-KILL IN ALASKA: CHARACTERISTICS AND RELATIONSHIP WITH SNOWPACK DEPTH AND MOOSE DISTRIBUTION IN LOWER SUSITNA VALLEY
Ronald D. Modafferi
Alaska Department of Fish and Game, 1800 Glenn Highway, Suite 4, Palmer, Alaska 99645
ABSTRACT: Trends in moose (*Alces alces*) mortality (n = 3,054) due to train collisions along 756 km of railway in Alaska from 1963-90 are presented. Annual (May-April) mortality ranged from 9 to 725 moose. Winter (November-April) mortality varied from 7 to 705 moose, with more than 73% occurring from January through March. Mortality was greatest in sections of the railway transecting winter range. During the 1989-90 winter, 50% (352 moose) of the train moose-kills occurred in a 64 km section of railway (8.5% of the railway length) in the lower Susitna Valley. Snowpack depth, train moose-kill, and moose numbers on winter range were positively correlated in the years this relationship was studied. There was an inverse relationship between snowpack depth and moose density in alpine habitat, and between alpine density and train moose-kill, for the years that relationship was studied. The timing of deep snow was related to the timing of moose occurrence on winter range and to the timing of train moose-kill in two winters with greatly dissimilar patterns of snow accumulation. My results emphasize the importance of understanding moose movements in assessing and resolving the train-moose problem. Findings also identify the importance of alpine postrut concentration areas as a component of moose habitat.
ALCES VOL. 27 (1991) pp.193-207
**STUDY AREA**
**Railway**
The Alaska Railroad railway runs between Seward (milemark = 0), a marine port on the east coast of the Kenai Peninsula in south-central Alaska, and Fairbanks (milemark = 470), a major city in the interior of the state (Fig. 1). The 756 km railway passes through cities, towns, rural settlements, and vast expanses of unsettled land. The route traverses a variety of habitats including: coastal spruce-hemlock forests, closed spruce-hardwood forests, open low-growing spruce forests, shrub thickets and treeless bogs (Viereck and Little 1972). Elevation of the route changes from sea level in Seward to a high point of 700 m in Broad Pass (milemark = 297), on the south side of the Alaska Mountain Range, to 130 m at Fairbanks. The Alaska Mountain Range divides Alaska into interior and south-central geographical regions. In south-central Alaska, about 160 km of the railway runs near major lowland river drainages, extensive active floodplains and large tracts of
unmaintained old homestead land clearings. Forest vegetation along the route in the lower Susitna Valley includes mixtures of old growth white spruce (*Picea glauca*), black spruce (*Picea mariana*), paper birch (*Betula papyrifera*), aspen (*Populus tremuloides*), balsam poplar (*Populus balsamifera*) and black cottonwood (*Populus trichocarpa*). Willows (*Salix spp.*), alders (*Alnus spp.*) and young deciduous tree species are particularly common at lower elevations in river drainages and in active floodplains. Early successional deciduous species dominate landscapes in settlements, unmaintained homesteads and the railway corridor where the ground surface has been disturbed by man. Willows and young deciduous tree species are preferred winter moose browse in south-central Alaska (Spencer and Chatelain 1953). Consequently, in winter, large numbers of migratory moose concentrate in locations along the railway in south-central Alaska where local conditions favor growth of early successional deciduous browse species.
**Regional Conditions in Lower Susitna Valley**
Winter climate in the lower Susitna Valley region is more variable and inclement away from the maritime influence of Cook Inlet and at higher elevations. Mean monthly temperatures vary from about 16 C in July to -13 C in January; maximum and minimum temperatures of 25 and -35 C are common. Total annual precipitation varies from about 40 cm in the south to over 86 cm in the north and west.
Fig. 1. Location of the 756 km railway between Seward (milemark 0) and Fairbanks (milemark 470) and the lower Susitna Valley study area in Alaska, showing game management subunits (14A, 14B, 16A and 13E), winter concentration areas (WCA1 and WCA2) and postrut concentration areas (PCA). Talkeetna and Willow were where snowpack depth was measured.
Snow accumulation varies with location, elevation, and site characteristics. Maximum snow depth can vary from <20 cm in the south to >200 cm in the north and west. Snow depth is generally deeper at higher elevations. Strong northerly winds often redistribute snow in exposed alpine sites and open floodplains. Snow accumulation in river channels varies depending on where and when ice forms over open water. Avalanches redistribute snow that accumulates on steep slopes.
Elevations within the region range from sealevel to rugged mountain peaks well above 1500 m. Moose seldom use areas above 1100 m. Dominant habitat and canopy types in the region are characterized as: (1) floodplains dominated by willows, alders and poplars; (2) lowlands dominated by a mixture of wet bogs and closed or open mixed conifer-deciduous forests of paper birch, white spruce, black spruce, aspen; (3) mid-elevations dominated by mixed or pure stands of aspen, paper birch and white spruce; (4) higher elevations dominated by alder, willow, and birch shrub thickets (*Betula spp.*) or grasslands (*Calamagrostis spp.*); and (5) alpine tundras dominated by sedge (*Carex spp.*), ericaceous shrubs, prostrate willows, and dwarf herbs (Viereck and Little 1972).
**METHODS**
**Train Moose-Kills**
The Alaska Railroad Corporation (ARC) provided the location and date of each moose killed by trains on the railway between Seward and Fairbanks from October 1963 through April 1990. Accuracy in reporting train moose-kills in Alaska has greatly improved since 1980; before then, kills were underreported (Rausch 1958). Even so, data on train moose-kills before 1980 probably reflected month-to-month and year-to-year variation. Train moose-kill data from milemarks 0 to 470 were tabulated by year, season, month and location.
Train moose-kills were clustered in the section of the railway in the lower Susitna Valley. I explored relationships between snowpack depth, train moose-kill and moose distribution in a 145 km section of railway in the lower Susitna Valley from milemark 185 to 275. This high-kill section of railway was divided into 2 segments. The train moose-kill on the segment extending from milemark 225 near Talkeetna to milemark 275 near Chulitna Pass was compared to moose counts in WCA1 and snowpack depth at Talkeetna. The train moose-kill on the segment extending from milemark 185 near Willow to milemark 225 was compared to moose counts in PCA and snowpack depth at Willow. The train moose-kill on the segment extending from milemark 185 to 275 was compared to moose counts in WCA1 + WCA2 and snowpack depth at Talkeetna.
**Aerial Surveys**
Numbers of moose were counted on aerial surveys in postrut concentration areas (postrut areas) and winter range in the lower Susitna Valley (Fig. 1). Survey areas were selected near railway sections with a high moose-kill.
Postrut areas (PCA) were located in the western foothills of the Talkeetna Mountains in Alaska Game Management Subunits 14A and 14B. This 240 km$^2$ area ranging in elevation from 600 to 1,200 m included 3 neighboring parcels of alpine habitat separated by lower elevation forested river drainages. This survey area was situated about 7 km east of the railway. In certain winters, moose were not found at higher elevations in the survey area. The area included portions of Bald Mountain Ridge, Moss Mountain and Willow Mountain.
Moose on winter range were surveyed in 2 areas of the Susitna River floodplain. One area was in Subunit 13E (WCA1); the other was in Subunit 16A (WCA2). The survey area in Subunit 13E was in the Susitna River floodplain between the Talkeetna River and Devil Canyon. This area encompassed 80 km of floodplain habitat ranging in elevation from 100 m at the Talkeetna River to 300 m at Devil
Canyon. Here, the floodplain was mostly <0.5 km wide with a scattering of islands. The railway from milemark 225-263 was mostly within 0.5 km of this survey area.
The survey area in Subunit 16A was located in the Susitna River floodplain between the Talkeetna River and the Yentna River. This area encompassed about 95 km of floodplain habitat ranging in elevation from 15 m at the Yentna River to 100 m at the Talkeetna River. In the survey area, the Susitna River floodplain was frequently >3 km wide where the river braids extensively around many small and large islands. The railway from milemark 185 to 225 was mostly within 2 km of this area.
Aerial surveys were conducted in winter, when snowcover was sufficient to observe moose, at 2- to 3-week intervals weather permitting. Surveys were conducted in WCA1 in 1981-85, WCA2 in 1982-84 and PCA in 1985-90. Survey flights were flown in Piper PA-18 aircraft at a search intensity of about 2.3 min per km$^2$. Low vegetative cover and good snow conditions in survey areas led to very high observability of moose.
**Snowpack Depth**
Snow depth data were obtained from Alaska Climatological Data Reports, U.S. Department of Commerce, NOAA, National Environmental Satellite, Data and Information Service, National Climate Data Center, Asheville, North Carolina. Snow depth data from Talkeetna were used as an index of snowpack depth in WCA1 and along the railway segment from milemark 225 to 275 in 1981-85, in WCA1+WCA2 and along the railway segment from milemark 185 to 275 in 1982-84, and along the railway segments from milemark 185 to 275 and milemark 225 to 275 in 1985-90. Snow depth data from Willow were used as an index of snowpack depth in PCA in 1985-90 and along the railway segment from milemark 185 to 225 in 1981-90. I presented the maximum snow depth recorded in each of 3 10-day intervals (DIs) (days 1-10, 11-20 and 21-31) for each month. There were 21 DIs from October through April.
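A minimal sketch of this 10-day-interval (DI) bookkeeping (Python; the readings are illustrative values, not the study data):

```python
# Compute the maximum snow depth within each 10-day interval (DI): 3 DIs per
# month (days 1-10, 11-20, 21-31), i.e., 21 DIs from October through April.
from collections import defaultdict

def decadal_maxima(daily_depths):
    """daily_depths: iterable of (month, day, depth_cm) tuples for one winter."""
    maxima = defaultdict(float)
    for month, day, depth in daily_depths:
        di = min((day - 1) // 10, 2)   # day 31 falls into the third DI (21-31)
        maxima[(month, di)] = max(maxima[(month, di)], depth)
    return dict(maxima)

# Example: three October readings -> maxima for DIs (10, 0) and (10, 1)
print(decadal_maxima([(10, 3, 12.0), (10, 9, 18.0), (10, 14, 25.0)]))
```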
Snowpack depth was evaluated in relation to a reference depth of 40 cm. Onset of fall-winter migrations of moose in Sweden (Sandegren et al. 1985) and Alaska (Van Ballenburghe 1977) was linked to snowpack depths of 42 and 40 cm, respectively.
**Relationship Between Snowpack Depth, Moose Distribution and Train Moose-kill**
To explore the relationship between snowpack depth, moose numbers on winter range, and train moose-kills, I used the Pearson correlation coefficient (Snedecor and Cochran 1980) to compare: (1) the maximum snowpack depth at Talkeetna (MSD-T) with the maximum number of moose counted in WCA1 (MMC-W) in 1981-85; (2) the MMC-W with the number of train moose-kills between milemarks 225 and 275 (TMK-T) in 1981-85; (3) the MSD-T with the TMK-T in 1981-85; and (4) the MSD-T with the TMK-T in 1981-90. Statistical significance was set at the 0.05 alpha level for all analyses in this paper.
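A minimal sketch of the correlation test used throughout this paper (Python with SciPy; the four values per variable are illustrative placeholders, not the study data):

```python
# Pearson correlation coefficient with a two-tailed P-value, as used for the
# snowpack / moose-count / moose-kill comparisons (alpha = 0.05).
from scipy.stats import pearsonr

max_snow_depth_cm = [46, 80, 120, 157]   # hypothetical MSD-T, 4 winters
max_moose_count = [36, 60, 100, 132]     # hypothetical MMC-W

r, p = pearsonr(max_snow_depth_cm, max_moose_count)
print(f"r = {r:.3f}, P = {p:.3f}")       # significant if P < 0.05
```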
To explore relationships between snowpack depth, moose numbers on winter range, and train moose-kills in 2 winters (1982-83 and 1983-84) that differed greatly in the timing of snow accumulation, I used the Pearson correlation coefficient to compare: (1) the number of moose counted in WCA1 + WCA2 in each month (averaged by the number of counts per month) (AMC) with the number of train moose-kills between milemarks 185 and 275 in each month from November through March; and (2) the AMC with the monthly maximum snowpack depth from November through March. I used a Chi-square analysis to compare the monthly numbers of train moose-kills between milemarks 185 and 275 from November through April in the 2 winters. I used a Chi-square analysis with a Yates correction factor to compare the numbers of DIs with maximum snowpack depth <40 cm and >40 cm in the 1982-83 and 1983-84 winters.
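A minimal sketch of the 2 x 2 Chi-square test with Yates' continuity correction (Python with SciPy; the DI counts in the table are illustrative placeholders):

```python
# Compare counts of 10-day intervals (DIs) with snowpack <40 cm vs >=40 cm
# between two winters, applying Yates' continuity correction (2 x 2 table).
from scipy.stats import chi2_contingency

# rows: winters 1982-83 and 1983-84; columns: DIs <40 cm, DIs >=40 cm
table = [[6, 15],
         [14, 7]]
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"X^2 = {chi2:.2f}, df = {dof}, P = {p:.4f}")
```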
To explore the relationship between snowpack depth, moose numbers in postrut concentration areas and train moose-kills, I used the Pearson correlation coefficient to compare: (1) the number of the DI when snowpack exceeded 40 cm at Willow (MIS-W) with the number of the DI when moose numbers in the PCA decreased by >75% in 1985-90; (2) the MIS-W with the number of train moose-kills between milemarks 185 and 225 (TMK-W) in 1985-90; (3) the number of the DI when moose numbers in the PCA decreased by >75% with the TMK-W in 1985-90; and (4) the maximum snowpack depth at Willow with the TMK-W in 1981-90.
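A minimal sketch of how the threshold DI (e.g., MIS-W) can be located (Python; the DI maxima are illustrative placeholders):

```python
# Find the index of the first 10-day interval (DI) in which snowpack depth
# exceeds the 40 cm migration threshold; returns None if never exceeded.
THRESHOLD_CM = 40

def first_di_over_threshold(di_maxima):
    """di_maxima: sequence of up to 21 DI maxima, October (DI 1) - April (DI 21)."""
    for i, depth in enumerate(di_maxima, start=1):
        if depth > THRESHOLD_CM:
            return i
    return None

print(first_di_over_threshold([5, 10, 18, 22, 30, 38, 44, 60]))  # -> 7
```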
**RESULTS**
**Characteristics of the Train Moose-Kill**
The ARC documented mortality of 3,054 moose in train collisions along the 756 km railway between Seward and Fairbanks from May 1963 through April 1990. Numbers of train moose-kills ranged from 9 to 725 annually (May-April) (Fig. 2) and from 7 to 705 in winter. More than 93% of the train moose-kills occurred from November through April, and 73.3% from January through March (Fig. 3). Although only 3.5% and 4.4% of the annual train moose-kill occurred in November and April, respectively, the kill in each of those months was 2.5-3.1 times greater than in any month from May through October.
In the 4 winters with the largest reported numbers of train moose-kills (1984-85 and 1987-90), kill locations were clustered in Subunits 14A, 14B and 13E (Fig. 4). Kills were particularly numerous along a 193 km section of railway between milemarks 160 and 280 in the lower Susitna Valley. Other sections of the railway had few or no moose killed by trains. During the winters of 1984-85 and 1987-90, 204, 178, 88 and 352 moose, respectively, were killed along a 64 km section of railway between milemarks 185 and 225. During these winters, 55, 56, 35 and 50 percent of the train moose-kills, respectively, occurred along 8.5 percent (64 km) of the railway.
**Snowpack Depth, Moose Counts in Winter Concentration Areas, and Train Moose-Kills**
Snowpack depth at Talkeetna, numbers of moose in winter concentration areas and numbers of train moose-kills varied greatly during the 4 winters, 1981-85 (Fig. 5). Peak snowpack at Talkeetna varied from 46 to 157 cm during these 4 winters. Snowpack generally increased from October through January, peaked in February or mid-March, and melted in late April. Thirty-four moose surveys were completed in WCA1 between November 1981 and April 1985, whereas 16 surveys were completed in WCA2 between November 1982 and February 1984. Thirty-seven surveys were conducted in PCA between October 1985 and March 1990.
The greatest number of moose counted in WCA1 in 34 surveys ranged from 36 to 132 during the 4 winters, 1981-85 (Fig. 5). Maximum numbers of moose counted were positively correlated with maximum snowpack depth during 1981-85 ($r=0.976, P=0.024, n=4$); thus the magnitude of moose movement to winter range was related to snowpack depth. The fewest moose counted before and after the winter peak were 7 and 4, respectively. Moose numbers increased during November and December, peaked in January to mid-February, and then decreased to low levels in March to mid-April. Numbers of train moose-kills between railway milemarks 225 and 275 ranged from 0 to 87 during the winters of 1981-85. There was a high but non-significant positive correlation between train moose-kills and maximum moose counts during the winters of 1981-85 ($r=0.887, P=0.113, n=4$). Train moose-kills were high when moose concentrated in winter areas near the railway.
Numbers of train moose-kills and the peak moose count were greatest (1984-85) when snowpack depth was greatest (Fig. 5). Train moose-kills and the peak moose count were lowest (1981-82) when snowpack depth was lowest. There was a high but non-significant positive correlation between greatest snowpack depth and train moose-kills during the 1981-85 winters ($r=0.901, P=0.099, n=4$). However, when the database was expanded to include data from the 1985-90 winters, there was a significant positive correlation between snowpack depth and train moose-kills ($r=0.962, P=0.0001, n=9$). The train moose-kill was high when deep snow forced moose to migrate to winter concentration areas. Snowpack depth was bimodal in 1981-82, 1982-83 and 1984-85 (Fig. 5). Moose numbers in the WCA varied with this bimodal trend in snowpack depth (Fig. 5).

Fig. 4. Distribution of train moose-kills during winter, for 8-km sections along the railway from Seward (milemark 0) to Fairbanks (milemark 470), Alaska, winters 1984-85 and 1987-90. Vertical lines below the x-axis indicate milemark locations of Game Management Subunit (S-) boundaries. FBKS=Fairbanks.
Snowpack depth, moose counts, and train moose-kills peaked earlier in 1982-83 than in 1983-84 (Fig. 6). Snowpack depth increased from 23 to 81 cm between October and mid-January in 1982-83; during that winter, snowpack depth exceeded 40 cm by late October.

Fig. 5. Trimonthly maximum snowpack depth at Talkeetna (A), numbers of moose counted on aerial surveys in lowland winter concentration areas in the Susitna River floodplain between the Talkeetna River and Devil Canyon (B), and numbers of train moose-kills between railway milemarks 225 and 275, November-April (C), 1981-85, south-central Alaska. In other studies, onset of moose fall-winter migration coincided with snowpack depth = 40 cm.
Fig. 6. Trimonthly maximum snowpack depth at Talkeetna (A), numbers of moose counted on aerial surveys in lowland winter concentration areas in the Susitna floodplain between the Yentna River and the Talkeetna River (B), and monthly numbers of train moose-kills between railway milemarks 185 and 275, October-April (C), 1982-84, south-central Alaska. In other studies, onset of moose fall-winter migration coincided with snowpack depth = 40 cm.
In the 1983-84 winter, snowpack depth ranged from 5 to 94 cm; it exceeded 40 cm in early January and peaked at 94 cm in early February. Snowpack depth exceeded 40 cm earlier, and remained >40 cm longer, in 1982-83 than in 1983-84 ($X^2=12.22$, df=1, $P=0.005$). Trends in numbers of moose counted in WCA1 + WCA2 differed between 1982-83 and 1983-84 (Fig. 6B). In 1982-83, numbers of moose ranged from 78 to 622 and peaked in late December, 1982. In 1983-84, numbers of moose ranged from 132 to 481 and peaked in early February. Monthly numbers of moose counted (AMC) were correlated with monthly maximum snowpack depth during November through March.
Monthly numbers of train moose-kills differed between the 1982-83 and 1983-84 winters (Fig. 6) \((X^2=17.17, df=5, P=0.0042)\). In 1982-83, train moose-kills peaked in January and 64 percent occurred before February. In 1983-84, train moose-kills peaked in February and 78 percent occurred after January. Monthly numbers of train moose-kills were positively correlated with monthly numbers of moose counted (AMC) \((r=0.815, P=0.008, n=9)\). The timing of snowpack accumulation influenced the timing of moose movements to winter concentration areas and, in turn, the timing of train moose-kills.
**Snowpack Depth, Moose Counts in Postrut Concentration Areas, and Train Moose-Kills**
Snowpack depth at Willow, numbers of moose in postrut areas and numbers of train moose-kills varied among the years 1985-90 (Fig. 7). Peak snowpack depth ranged from 43 to 234 cm. The greatest numbers of moose counted ranged from 626 to 938, whereas the fewest moose counted before and after a winter peak were 42 and 12, respectively. Numbers of moose counted in postrut concentration areas generally increased during October, peaked between late October and early December, and decreased from late December to mid-April.
In winter 1985-86, numbers of moose in postrut areas decreased by less than 50 percent between the peak count in early December and a count in late March; snowpack depth first exceeded 40 cm in late March. In 1989-90, numbers of moose decreased precipitously in late October and early November, when snowpack depth first exceeded 40 cm. Few moose were counted in late December of 1989-90, the winter when snowpack depth was greatest. During the winter of 1986-87, numbers of moose declined in December; snowpack exceeded 40 cm in early January. In the winters of 1987-89, moose numbers declined from mid-November to mid-December; snowpack depth exceeded 40 cm in late November.
The number of the DI when snowpack exceeded 40 cm was positively correlated with the number of the DI when moose numbers counted in the PCA decreased by >75% from the peak count during the years 1985-90 \((r=0.928, P=0.023, n=5)\). Moose dispersed from postrut concentration areas when snowpack exceeded 40 cm.
Numbers of train moose-kills between milemarks 185 and 225 ranged from 4 to 352 in the 1985-90 winters. Numbers of train moose-kills in winter were lowest in 1985-86, highest in 1989-90, and intermediate in 1986-89. Kills varied among the 3 winters with intermediate numbers of train moose-kills: train moose-kills were twice as numerous in 1987-88 as in 1988-89, and 2.4 times more numerous in 1988-89 than in 1986-87. In 1986-87, snowpack depth exceeded 40 cm in early January, whereas in 1987-89 it exceeded 40 cm in late November. Snowpack depth in 1987-88 exceeded snowpack depth in 1988-89 from mid-December through April. The number of the DI when snowpack exceeded 40 cm was not significantly correlated with the number of moose-kills in 1985-90 \((r=-0.793, P=0.109, n=5)\). The number of the DI when moose numbers counted in the PCA had decreased by >75% from the peak count was not significantly correlated with the number of moose-kills \((r=-0.704, P=0.185, n=5)\). However, when the database was expanded to include the 1981-85 winters, there was a significant positive correlation between maximum snowpack depth and train moose-kills \((r=0.815, P=0.007, n=9)\). The timing and depth of snow influenced dispersal of moose from postrut areas, and both were correlated with train moose-kills; maximum snowpack depth was an important factor influencing the number of train moose-kills. Timing and magnitude of moose migrations from the PCA, which are influenced by snowpack depth, were perhaps only weakly correlated with train moose-kills because moose in the PCA migrate to winter range that is not near the railway.

Fig. 7. Trimonthly maximum snowpack depth at Willow (A), numbers of moose counted on aerial surveys in alpine postrut concentration areas in the western foothills of the Talkeetna Mountains (B) and numbers of train moose-kills between railway milemarks 185 and 225, November-April (C), 1985-90, south-central Alaska. In other studies, onset of moose fall-winter migration coincided with snowpack depth = 40 cm.
**DISCUSSION**
A large number of moose were killed in train collisions in Alaska each year. This kill occurred mainly from November through April. Kills were clustered in certain segments of the railway, and more numerous in deep-snow winters. Kills were few in low-snow winters. These data agree with findings of others (Rausch 1958, Child 1983, Hatler 1983, Andersen et al. 1991). However, in southern Norway, <50% of the yearly train moose-kill occurred in winter (Jaren et al. 1991), and in Ontario and Manitoba, train-moose collisions were most frequent in June and July shortly after calving season (Child and Stuart 1987).
Train moose-kills increased when migratory moose moved to winter concentration areas near the railway. Kills were clustered in sections of the railway transecting migration routes and winter range. Kills were more numerous in deep-snow winters than in low-snow winters. In deep-snow winters, most moose in alpine postrut concentration areas dispersed to lowland winter range near the railway. In low-snow winters, many moose stayed in alpine habitat. The peak in train moose-kills occurred earlier in an early-snow winter than in a late-snow winter because most moose migrated to winter range in response to snow accumulation. These findings are consistent with findings previously reported (Rausch 1958, Coady 1974, Van Ballenburghe 1977, Thompson et al. 1981, Child 1983, Sandegren et al. 1985). Although train moose-kills were numerous in deep-snow winters when large numbers of moose were near the railway, the additional effect of plowed snow along the railway likely altered the behavior of moose, increasing their vulnerability to train collisions (Rausch 1958, Child 1983, Hatler 1983, Andersen et al. 1991).
Loss of large numbers of moose in train collisions can have considerable consequences for management of local moose populations (Rausch 1958, Child 1983). More than 350 moose were killed in train collisions in Subunit 14B in the winter of 1989-90. However, in addition to moose resident in Subunit 14B, migratory moose from 2 neighboring Subunits were vulnerable to train collisions in Subunit 14B (R. Modafferi pers. comm.). Consequently, losses must be allocated among moose populations in 3 Subunits, and managers must understand movements of moose in the railway corridor.
Plans to mitigate or resolve problems of train-moose collisions frequently include measures to manage habitat and moose populations along railways (Rausch 1958, Child 1983, Jaren et al. 1991). One option is to decrease numbers of moose near the railway. Forage along railways can be eliminated so moose are not attracted to the rail corridor. Habitat away from railways can be managed to attract moose and keep them distant from the rail corridor. Winter harvest quotas can be established near the railway. Fall harvest quotas can be increased in Subunits overlapping the railway. However, findings in this study and another (R. Modafferi pers. comm.) suggest that these measures must be implemented at certain times and places to affect target moose populations.
In some moose management jurisdictions, railway corporations fail to provide wildlife managers with an accurate account of train moose-kills (Rausch 1958, Child and Stuart 1987). In Alaska, railway managers have cooperated with wildlife managers in collecting information on train-moose conflicts and in testing measures to help resolve the problem.
My findings indicate that moose distribution and numbers on winter range were related to snow accumulation throughout the winter. These findings agree with observations of Edwards and Ritcey (1956) who noted that
snow depth was a major factor influencing timing and extent of moose migrations and yearly differences in moose distribution. Van Ballenburghe (1977) found that snow conditions caused moose to break from traditional migratory patterns during a seasonal cycle. Crete (1980) showed that moose did not winter in the same forest stands during consecutive winters; snow conditions were not assessed. Modafferi (pers. comm.) indicated that some individual radio-marked moose in the lower Susitna Valley migrated differently and were located in different areas in a low-snow winter versus a series of average- to deep-snow winters. In contrast, Sweanor and Sandegren (1987) reported that moose fall-winter migration patterns were consistent each year; however, in all years of their study, snow depth exceeded 40 cm, the threshold snow depth that initiated onset of migrations in moose (Sandegren et al. 1985). In this study, timing, magnitude and extent of moose migrations were correlated with snowpack depth. My findings suggest that not all moose migrated in response to the same threshold of snowpack depth, and that snow depth influenced the final destination of moose migrations.
There is considerable information on movements of moose to winter concentration areas and the importance of winter concentration areas to moose (Stevens 1970, Telfer 1970, Brassard et al. 1974, Coady 1974, LeResche 1974, Peek 1974, Van Ballenburghe 1977, Crete and Jordan 1982, Sandegren et al. 1985, Lavsund 1987, Danell and Bergstrom 1989, Hundertmark et al. 1990). Fewer data are available on movements of moose to postrut concentration areas and on the importance of postrut areas in moose ecology (LeResche 1972, Lynch 1975, Thompson et al. 1981). As with winter concentration areas, the importance of postrut concentration areas is suggested by their traditional use by large numbers of moose. Moose left surrounding habitats to move to these postrut areas in early winter before deep snowpack forced them to move to winter range (Coady 1974, Telfer 1978). Thompson et al. (1981) suggested that quantity and quality of browse in moose early winter concentration areas was superior to browse in surrounding habitats. Weight and body condition of moose entering winter determine survival and influence productivity the following spring (Saether 1987, Schwartz et al. 1988). During the postrut period, moose increase food intake (Schwartz et al. 1984) and gain weight (Schwartz et al. 1987). Quality of range in these postrut concentration areas likely influenced moose movements to them.
My observations indicated moose winter range has two components, alpine postrut concentration areas and lowland winter concentration areas. Snowpack depth affected timing, duration and magnitude of moose use of each component. When deep snowpack occurred early, moose dispersed from postrut areas in November to winter ranges. During winters with low snowpack many moose stayed in alpine postrut concentration areas. This extended use of postrut areas reduced the impact of browsing on forage in lowland winter concentration areas. These findings suggest that moose postrut concentration areas were an integral component of moose habitat that deserve protection and further study.
**ACKNOWLEDGEMENTS**
Many persons deserve special thanks for contributing to various aspects of this study. I extend special thanks to my supervisor, K. B. Schneider, Alaska Department of Fish and Game (ADF&G), for providing support and helpful suggestions throughout this study, for reviewing drafts of this manuscript, and for willingly providing assistance with administrative procedures. I am grateful to J. Swiss, John Swiss and Family, Big Game Guiding, Outfitting and Air Charter Service, and W. D. Wiederkehr, Wiederkehr Air Inc., for ability and safety in piloting and navigating PA-18
aircraft on the numerous aerial moose surveys, for enthusiasm in spotting moose and for willingness to complete surveys under less than ideal conditions. I thank J. C. Didrickson, C. A. Grauvogel, H. J. Griese, and M. W. Masteller, Area Management Biologists, ADF&G, for providing local support, useful suggestions on many aspects of the study, and for sharing their experiences and knowledge about moose. K. K. Koenen extracted train moose-kill information from railway dispatch records archived in the ARC headquarters. M. W. Masteller updated and organized parts of the train moose-kill database file. D. C. McAllister, ADF&G, provided logistic assistance and drafted Fig. 1. S. R. Peterson and other staff at ADF&G, Juneau, provided advice and many valuable comments on a previous version of this manuscript which greatly improved its quality. E. F. Becker, ADF&G, is greatly acknowledged for statistical treatment of data in this manuscript. I thank C. C. Schwartz and an anonymous reviewer for extensive critical reviews of this manuscript. I thank C. C. Schwartz and T. Timmermann for encouraging me to prepare and submit this paper. This study is a contribution of Fed. Aid Wildl. Restor., Proj. W-23.
**REFERENCES**
ANDERSEN, R., B. WISETH, P. H. PEDERSEN, and V. JAREN. 1991. Moose-train collisions: Effects of environmental conditions. Alces 27:79-84.
BRASSARD, J. M., E. AUDY, M. CRETE, and P. GRENIER. 1974. Distribution and winter habitat of moose in Quebec. Naturaliste can. 101:67-80.
CHATELAIN, E. F. 1951. Winter range problems of moose in the Susitna Valley. Proc. Alaska Sci. Conf. 2:343-347.
CHILD, K. N. 1983. Railways and moose in the Central Interior of British Columbia: A recurrent management problem. Alces 19:118-135.
______, and K. M. STUART. 1987. Vehicle and train collision fatalities of moose: Some management and socio-economic considerations. Swedish Wildl. Res., Suppl. 1:699-703.
COADY, J. 1974. Influence of snow on behavior of moose. Naturaliste can. 101:417-436.
CRETE, M. 1980. Failure of moose to use the same stands in consecutive winters. Alces 16:482-488.
______, and P. A. JORDAN. 1982. Population consequences of winter forage resources for moose, *Alces alces*, in southwestern Quebec. Can. Field Nat. 96:467-475.
DANELL, K., and R. BERGSTROM. 1989. Winter browsing by moose on two birch species: impact on food resources. Oikos. 54:11-18.
EDWARDS, R. Y., and R. W. RITCEY. 1956. The migrations of a moose herd. J. Mammal. 37:486-494.
HATLER, D. F. 1983. Concerns for ungulate collision mortality along New Surface Route. MacLaren Plansearch Corporation, Vancouver. 47 pp.
HUNDERTMARK, K. J., W. L. EBERHARDT, and R. E. BALL. 1990. Winter habitat use by moose in southeastern Alaska: Implications for forest management. Alces 26:108-114.
JAREN, V., R. ANDERSEN, M. ULLEBERG, P. H. PEDERSEN, and B. WISETH. 1991. Moose-train collisions: The effects of vegetation removal with a cost-benefit analysis. Alces 27: 93-110.
LAVSUND, S. 1987. Moose relationships to forestry in Finland, Norway and Sweden. Swedish Wildl. Res., Suppl. 1:229-244.
LERESCHE, R.E. 1974. Moose migrations in North America. Naturaliste can. 101:393-415.
______. 1972. Migrations and population mixing of moose on the Kenai Peninsula (Alaska). Proc. N. Am. Moose Conf. Workshop 8:182-207.
LYNCH, G. M. 1975. Best timing of moose surveys in Alberta. Proc. N. Am. Moose Conf. Workshop 1:141-153.
PEEK, J. 1974. On the nature of winter habitats of Shiras moose. Naturaliste can. 101:131-141.
RAUSCH, R. A. 1958. The problem of railroad-moose conflicts in the Susitna Valley. Alaska Dep. of Fish and Game, Fed. Aid Wildl. Rest. Final Rep., Proj. W-3-R. 116pp.
SAETHER, B-E. 1987. Patterns and processes in the population dynamics of the Scandinavian moose (Alces alces): Some suggestions. Swedish Wildl. Res. Suppl. 1:525-537.
SANDEGREN, F., R. BERGSTROM, and P. Y. SWEANOR. 1985. Seasonal moose migration related to snow in Sweden. Alces 21:321-338.
SCHWARTZ, C.C., W.L. REGELIN, and A. W. FRANZMANN. 1984. Seasonal dynamics of food intake in moose. Alces 20:233-244.
______, W. L. REGELIN, and A. W. FRANZMANN. 1987. Seasonal weight dynamics of moose. Swedish Wildl. Res. Suppl. 1:301-310.
______, M. E. HUBBERT, and A. W. FRANZMANN. 1988. Energy requirements of adult moose for winter maintenance. J. Wildl. Manage. 52:26-33.
SNEDECOR, G. W. and W. C. COCHRAN. 1980. Statistical Methods. 7th edition. The Iowa State Univ. Press, Ames, Iowa. 507pp.
SPENCER, D. L. and E. F. CHATELAIN. 1953. Progress in the management of the moose of south central Alaska. Trans. N. Am. Wildl. Conf. 18: 539-552.
STEVENS, D. R. 1970. Winter ecology of moose in the Gallatin Mountains, Montana. J. Wildl. Manage. 34:37-46.
SWEANOR, P. Y., and F. SANDEGREN. 1987. Migratory behavior of related moose. IV:59-65. In P. Y. Sweanor. Winter ecology of a Swedish moose popula-
tion: Social behavior, migration and dispersal. M.Sc. Thesis. Swedish Univ. Agricult. Sci. Rept. 13. Uppsala. 94pp.
TELFER, E. S. 1970. Winter habitat selection by moose and white-tailed deer. J. Wildl. Manage. 34:553-559.
______, 1978. Cervid distribution, browse and snow cover in Alberta. J. Wildl. Manage. 42:352-361.
THOMPSON, I. D., D. A. WELSH, and M.K. VUKELICH. 1981. Traditional use of early winter concentration areas by moose in northwestern Ontario. Alces 17:1-14.
VAN BALLENBURGHE, V. 1977. Migratory behavior of moose in southcentral Alaska. Pages 103-109 in Proc. 13th Int. Congr. Game Biol., Atlanta, Ga.
VIERECK, L. A., and E. L. LITTLE, JR. 1972. Alaska trees and shrubs. U.S. Dept. Agric. Forest Serv. Handbook No. 410. 265pp.
Extreme Light Scientific and Socio-Economic Outlook
Tuesday 29 November 2016 - Wednesday 30 November 2016
Book of Abstracts
## Contents
Economic and Business Advisor at the Embassy of the Czech Republic
Conseiller départemental de l’Essonne representing Sylvie Retailleau President of the University Paris-Saclay
President of Ecole polytechnique
Ambassador of Romania in France
Deputy General Director of ELI-Delivery Consortium
Minister of State for Higher Education and Research, attached to the Minister of National Education, Higher Education and Research
Extreme Light Scientific and Socio-Economic Outlook
ELI-Beamlines: scientific and societal applications of ultra intense lasers
IZEST and the European Strategy for Particle Physics
Efficient Extreme Light Compression and its Application
Extreme Light Induced Accelerating Plasma Mirrors to Investigate Black Hole Information Loss Paradox
Enhanced laser-driven ion acceleration using nanometer targets
Proton acceleration with light pressure and wakefield
Conventional electron and proton accelerators: the state of the art
Hadron Therapy - Present and Perspectives
Evolution of fast electrons generated during interaction of high intensity laser with structured targets
Apollon laser facility present status
Overview and strategy of the ELI-Nuclear Physics Project in Romania
Overview of the ELI-ALPS project and its few cycle phase controlled laser sources
Development of 10PW Super Intense Laser Facility at Shanghai
X-EUV Hartmann wavefront sensing
New generation of deformable mirror dedicated to ultra high intensity laser
Toward spatially uniform pulse compression of top-hat beams at the subpetawatt level of peak powers
Dispersion management of the front end in SULF
Particle-in-Cell Simulation of X-ray Wakefield Acceleration and Betatron Radiation in Nanotubes
Observation of space parity gravitational violation in laser-Compton scattering
Multifilamentation. Interaction and reduction of the filaments as nonlinear process.
Ti:Sapphire CPA booster amplifier for a 5 PW laser system
Generation of multiple isolated attosecond pulses
TBD
Relativistic Flying Mirror for Extreme Light Sciences
Short overview of laser physics and applications research activities of Institute of Electronics – Bulgarian Academy of Sciences
The socio-economic impact of research infrastructures: a generic evaluation framework and insights from selected case studies
ELI-NP as a crucible for innovation: the ELAP project
Socio-Economics for Energy: Extreme Laser Pulses for Boron Fusion Power Reactors
Laser-Driven High Energy Alpha Beam Interaction with Solid p11B to Achieve Fusion Ignition by Alpha Heating
Control of temporal intensity profile for PW laser pulses
Twisted Photons at DESY
Deorbiting of Space Debris by Laser Ablation
Update on XCAN, Ecole Polytechnique-Thales Coherent Beam Combination joint laser program
Mini-Euso, a pathfinder on the ISS to detect [2 - 10 cm] debris
Gamma beams generation with high intensity lasers for the study of two photon Breit-Wheeler pair production
Electron-positron pairs beaming in the Breit-Wheeler process
Potential to search for Dark Matter with multi-wavelengths light sources
Ultra-Intense X-Ray Radiation of Relativistic Laser Plasma Inducing Radiation-Dominated Matter Kinetics
High Peak Power Laser System for ELI NP
HHG Beamline, a unique turnkey system for the generation of a brilliant XUV beam
Economic and Business Advisor at the Embassy of the Czech Republic
Corresponding Author: email@example.com
Conseiller départemental de l’Essonne representing Sylvie Retailleau President of the University Paris-Saclay
President of Ecole polytechnique
Ambassador of Romania in France
Deputy General Director of ELI-Delivery Consortium
Minister of State for Higher Education and Research, attached to the Minister of National Education, Higher Education and Research
Corresponding Author: firstname.lastname@example.org
Extreme Light Scientific and Socio-Economic Outlook
Author: Gerard MOUROU\textsuperscript{1}
\textsuperscript{1}Ecole polytechnique - IZEST
Extreme light is one of the most exciting domains in the laser field today. It relies on the generation of ultra-high peak power obtained by delivering energy over a very short time. Today, laser peak power typically exceeds the petawatt, a thousand times the world grid power. The ability to produce and focus this gargantuan power onto a spot 10 times smaller than a hair offers unfathomable possibilities in science, technology and medicine, and is a harbinger to a floodgate of socio-economic applications.
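As a rough back-of-the-envelope illustration of these magnitudes (our own arithmetic, assuming a ~10 µm focal spot, not a figure from the abstract), focusing 1 PW onto such a spot gives

\[
I = \frac{P}{\pi r^2} \approx \frac{10^{15}\,\mathrm{W}}{\pi\,(5\times 10^{-4}\,\mathrm{cm})^2} \approx 1.3\times 10^{21}\,\mathrm{W\,cm^{-2}},
\]

and tighter focusing or shorter pulses push the intensity well beyond this.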
France is a well-established academic and industrial leader in lasers. Under the initiative of the Ecole Polytechnique, we proposed 10 years ago to the EU and the Ile de France the construction of a pan-European infrastructure capable of generating the highest peak power ever produced and of exploring laser-matter interaction at the highest possible intensities, with the aim of carrying out fundamental research and promoting new societal applications.
The ELI infrastructure was built as three distributed facilities: ELI-Beamlines in the Czech Republic, ELI-NP in Romania, and ELI-ALPS in Hungary. In concert with ELI, France built a 10 PW project named Apollon on the plateau of Palaiseau. This constellation of infrastructures will become operational and accessible in 2019, opening a new age in laser research.
The infrastructures are now halfway through completion. As the initiator of both projects, ELI and Apollon, the Ecole Polytechnique has carried out a study to gauge the socio-academic impact of these world-class projects at all levels: regional, national and international. The conclusions of this report will be one of the conference highlights and will be reported at the meeting.
IZEST is about going beyond the horizon set by the ELI-Apollon facilities. During the conference we will have the opportunity to describe the most avant-garde laser concepts under development to gracefully segue from the petawatt to the exawatt, giving access to extremely short time structures down to the attosecond-zeptosecond regime. Pulses will be so short that the highest peak powers in the x-ray regime could be reached with a modest, joule-level amount of energy, yielding intensities in the Schwinger regime, enough to materialize light. Among the remarkable applications we note the generation of gargantuan electric field gradients, of amplitude sufficient to accelerate electrons over a centimetre to the GeV level, or relativistic protons, widening the range of applications in sub-atomic physics, cosmology, vacuum physics and the like. In addition, trying to develop a new breed of laser sometimes opens the way to new applications, like space debris removal, which will be a big issue in space activity in the near future.
**Introduction Workshop / 63**
**ELI-Beamlines: scientific and societal applications of ultra intense lasers**
**Author:** Georg KORN\(^1\)
\(^1\) ELI-beamlines, Prague, Institute of Physics, Academy of Sciences Czech Republic, Na Slovance 1999/2, 182 21 Praha 8, Czech Republic
**Corresponding Author:** email@example.com
ELI-Beamlines is the high-energy, high-repetition-rate laser pillar of the ELI (Extreme Light Infrastructure) project. It will be an international facility for both academic and applied research, slated to provide first user capability from the beginning of 2018. The main objective of the ELI-Beamlines Project is delivery of ultra-short high-energy pulses for the generation and applications of high-brightness X-ray sources and accelerated particles. The laser system will deliver pulses with lengths ranging between 15 and 150 fs and will provide high-energy petawatt (10 Hz) and 10 PW peak powers. For high-field physics experiments it will be able to provide focused intensities attaining 10\(^{24}\) Wcm\(^{-2}\); this value can be increased in a later phase, without the need to upgrade the building infrastructure, to go beyond the ultra-relativistic interaction regime in which protons are accelerated to energies comparable to their rest mass energy over the length of one wavelength of the driving laser. We will introduce the different experimental user areas, with emphasis on applications of secondary sources of x-rays and laser-accelerated particles and on the extreme field science area. We discuss new approaches for efficient proton acceleration with higher-repetition-rate targets based on a solid hydrogen ribbon for possible medical applications in the energy range above 60 MeV. The ion acceleration beamline ELIMAIA and the ELIMED concepts will be highlighted for their use in different fields including medicine.
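For orientation, a hedged back-of-the-envelope reading of the quoted numbers (our arithmetic, assuming a Ti:sapphire-like wavelength of \(\lambda \approx 0.8\) µm): the normalized laser amplitude is

\[
a_0 \simeq 0.85\,\lambda[\mu\mathrm{m}]\,\sqrt{I/(10^{18}\,\mathrm{W\,cm^{-2}})},
\]

and protons become relativistic when their analogous amplitude \(a_0\,m_e/m_p\) approaches unity, i.e., \(a_0 \sim 1836\), which at \(\lambda = 0.8\) µm corresponds to \(I \approx 7\times 10^{24}\,\mathrm{W\,cm^{-2}}\), just beyond the quoted 10\(^{24}\) Wcm\(^{-2}\) and consistent with the planned later-phase increase.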
IZEST and the European Strategy for Particle Physics
Corresponding Author: firstname.lastname@example.org
Efficient Extreme Light Compression and its Application
Author: Jonathan Wheeler\textsuperscript{1}
\textsuperscript{1} Ecole Polytechnique
Corresponding Author: email@example.com
High power laser facilities capable of generating petawatt ($10^{15}$ W) level pulses are producing peak intensities that are approaching the thresholds of a wide range of applications that include high energy physics; laser astrophysics and cosmology; vacuum physics; as well as medical imaging and treatments. State-of-the-art high power laser systems consistently produce pulses within large diameter beams with nearly flat-top spatial modes and low divergences, which suggests that efficient nonlinear techniques for pulse post-compression could dramatically extend the intensities achievable within existing facilities. Theoretical simulations demonstrate the potential for such a system, and the efficiency of the process presents a route to compress the high power laser toward its wavelength-defined fundamental limit of a few femtoseconds while maintaining Joule-level energy within the pulse. At present, small-scale experimental tests of the methods, involving the cooperative efforts of many research partners, are demonstrating the conditions required to implement a full-scale test of the process while providing insight on the challenges that will arise. In addition, such high energy, single-cycle pulses show great promise as drivers of secondary sources for improved laser-driven ion acceleration, as well as hard X-ray pulses from solid targets capable of producing atto/zeptosecond-scale pulses at the exawatt level ($10^{18}$ W).
Extreme Light Induced Accelerating Plasma Mirrors to Investigate Black Hole Information Loss Paradox
Author: Pisin CHEN\textsuperscript{1}
\textsuperscript{1} Department of Physics & Leung Center for Cosmology and Particle Astrophysics National Taiwan University
Corresponding Author: firstname.lastname@example.org
The question of whether Hawking evaporation violates unitarity, and therefore results in the loss of information, has remained unresolved since Hawking’s seminal discovery. So far the investigations remain mostly theoretical, since it is almost impossible to settle this paradox through direct astrophysical black hole observations. Here we point out that relativistic plasma mirrors induced by state-of-the-art or soon-to-be-available ultrafast lasers can be accelerated drastically and stopped abruptly by impinging intense x-ray pulses on solid plasma targets with a density gradient. This is analogous to the late-time evolution of black hole Hawking evaporation. A conceptual design for such an experiment is proposed, with a self-consistent set of physical parameters. Critical issues, such as how black hole unitarity may be preserved, can be addressed through the entanglement between the analog Hawking radiation photons and their partner modes.
Enhanced laser-driven ion acceleration using nanometer targets
Author: Xueqing Yan\textsuperscript{1}
\textsuperscript{1} Peking University
Corresponding Author: email@example.com
Radiation pressure acceleration is quite promising for ion acceleration. In order to improve the laser energy transfer efficiency and restrain instabilities such as RTI and hole boring, an ultra-high intensity, ultra-high contrast laser pulse with a steep front is required; therefore a plasma lens with near-critical density is proposed. When the laser passes through the near-critical-density plasma lens, transverse self-focusing, longitudinal self-modulation and prepulse absorption can occur synchronously. The enhanced ion acceleration using a plasma lens can be implemented with a DLC foil attached to a nanotube foam target. In recent experiments at RAL in the UK and GIST in Korea, it was demonstrated that the proton and ion energy can be enhanced by 2-3 times.
Prompted by the possibility to produce high energy, single-cycle laser pulses with tens of petawatts of power, we have also investigated laser-matter interactions in the few-optical-cycle and ultra-relativistic intensity regimes. A particularly interesting instability-free regime for ion production was revealed, leading to the efficient generation of monoenergetic ion bunches with a peak energy greater than a GeV. Of paramount importance, the interaction is free of the Rayleigh-Taylor instabilities and hole boring that plague techniques such as target normal sheath acceleration and radiation pressure acceleration.
Proton acceleration with light pressure and wakefield
Authors: Baifei Shen\textsuperscript{1}; xiaomei zhang\textsuperscript{1}
\textsuperscript{1} State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences
Corresponding Authors: firstname.lastname@example.org, email@example.com
We discuss proton acceleration with 10 PW lasers. In 2001, we proposed proton acceleration with light pressure for the first time [1]. Then in 2007, we explained that light pressure acceleration is actually multistaged acceleration by a collisionless electrostatic shock driven by the laser pressure [2, 3]. However, the light-pressure method can hardly support proton acceleration to energies larger than 10 GeV. Therefore, we proposed to accelerate protons with a laser-driven wakefield [4]. The main problem for proton acceleration with a wakefield is the transverse defocusing force preventing persistent acceleration. To solve this problem we proposed to use a vortex laser to drive a wakefield with an electron cylinder in the middle [5]. A recent experiment with clusters is also discussed.
[1] Baifei Shen et al., Phys. Rev. E 64, 056406 (2001)
[2] Xiaomei Zhang, Baifei Shen et al., Phys. Plasmas 14, 073101 (2007)
[3] Xiaomei Zhang, Baifei Shen et al., Phys. Plasmas 14, 123108 (2007)
[4] Baifei Shen, Xiaomei Zhang et al., Phys. Rev. E 80, 055402
[5] Xiaomei Zhang, Baifei Shen et al., New J. Phys. 16, 123051 (2014)
Conventional electron and proton accelerators: the state of the art
Author: Olivier Napoly\textsuperscript{1}
\textsuperscript{1} CEA/Saclay
Corresponding Author: firstname.lastname@example.org
Based on the ongoing construction of the two major European facilities, namely the European XFEL electron accelerator and the ESS relativistic proton accelerator, the state of the art in conventional acceleration will be described: technology, performance, and construction cost.
Extreme Light and Applications / 27
Hadron Therapy - Present and Perspectives
Corresponding Author: email@example.com
Extreme Light and Applications / 47
Evolution of fast electrons generated during interaction of high intensity laser with structured targets
Corresponding Author: firstname.lastname@example.org
Interaction of high-intensity laser pulses with solid targets results in the generation of large quantities of energetic electrons, which are at the origin of various effects such as intense x-ray emission, ion acceleration, and so on. Some of these electrons escape the target, leaving behind a significant positive electric charge. The electrons that are accelerated in the backward and forward directions are ejected from the target into vacuum, thus creating a potential drop in the Debye layer at the target surface. The cooling process through collisions with the surrounding particles defines the maximal duration of the target charging and of the accelerating potential. It is therefore important to conduct temporally resolved measurements capable of determining the residual charge (and thus the potential and its temporal profile) of a target irradiated by a short intense laser pulse.
Our recent measurements related to the field enhancement, conducted on the FLAME laser, will be presented. We realized spatially resolved electro-optical sampling using a ZnTe crystal and a laser probe split directly from the pump laser. This solution allows monitoring, in a non-intercepting, single-shot way and with a temporal resolution < 100 fs, the field generated by the electron bunch. By analyzing the signal intensity we retrieved the bunch Coulomb electric field, which allowed us to retrieve the temporal profile and the quantity of the escaped electrons and to demonstrate the field-enhancement process in structured targets. In the case of the planar foil target, the signal shows the presence of a first emitted bunch with charge $Q_e \sim 1$ nC and energy 6 MeV, followed by a second, broadened structure carrying a larger amount of particles ($Q_e \sim 3$ nC) at an energy of $\sim 1$ MeV. For the wedged target the first bunch carries a larger charge ($Q_e \sim 2$ nC) while the charge in the second bunch is strongly reduced. Laser interaction with the tip target produced a much larger number of released electrons ($Q_e \sim 7$ nC) at higher energies, up to 12 MeV.
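As a small reading aid, the quoted bunch charges translate into electron numbers via $N = Q/e$ (a sketch; the labels are shorthand for the cases described above):

```python
# Charge-to-population sketch for the escaped-electron bunches quoted above.
e = 1.602e-19  # elementary charge, C
for label, q_nC in [("planar, 1st bunch", 1), ("planar, 2nd bunch", 3),
                    ("wedge, 1st bunch", 2), ("tip target", 7)]:
    print(f"{label}: ~{q_nC * 1e-9 / e:.1e} electrons")  # ~6e9 electrons per nC
```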
Extreme Light around the world (1) / 49
Apollon laser facility present status
Recently, large investments have been planned to build laser facilities achieving intensities never reached before. The progress made in the last decade in laser technology allows us to envision laser facilities with a repetition rate high enough to ensure stable laser parameters and improved statistics in the coherent measurements made in laser plasma physics.
High-intensity, high-repetition-rate lasers are one of the best tools to concentrate, in a controllable manner, large amounts of energy in space and time. Consequently, a whole range of high-energy particles (electrons, protons, highly charged ions, neutrons) and radiation, up to x-rays and gamma-rays, can be produced as a result of the interaction with targets that can be either solid or gaseous. Studying the interaction at higher and higher intensities is of fundamental interest because it continuously opens possibilities to explore new regimes, as in particle acceleration or attosecond X-ray sources. Another motivation for the scientific community is that the generation of such particle and radiation beams, intrinsically synchronised with the laser beam that generated them, opens an extremely wide range of pump-probe applications.
Apollon is an upcoming laser facility capable of providing multi-PW peak-power pulses at a repetition rate of 1 shot/minute. In order to achieve this extreme peak-power level, Apollon is based on the generation of extremely short pulses of 15 fs. Employing state-of-the-art technology, the OPCPA-based front-end generates high-quality, high-temporal-contrast pulses at 820 nm with a broad spectrum supporting sub-10 fs duration. Energy amplification up to 300 J is then realized in the power amplification section, composed of 5 stages of Ti:sapphire multi-pass amplifiers. The amplified chirped pulses will finally be compressed in an unfolded four-grating compressor. The objective of the Apollon facility is to deliver high-intensity laser beams on target for users. The design and status of the Apollon facility will be presented.
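As a rough illustration of how these figures combine into "multi-PW" (a sketch; the compressor throughput below is an assumed, hypothetical value, since the abstract does not quote one):

```python
# Rough peak-power estimate from the numbers quoted above:
# 300 J amplified energy, 15 fs compressed duration.
E_amplified_J = 300.0      # amplified energy quoted above
tau_s = 15e-15             # compressed pulse duration quoted above
eta_compressor = 0.70      # assumed (hypothetical) compressor throughput

peak_power_W = eta_compressor * E_amplified_J / tau_s
print(f"peak power ~ {peak_power_W / 1e15:.0f} PW")  # ~14 PW, i.e. multi-PW
```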
Extreme Light around the world (1) / 19
Overview and strategy of the ELI-Nuclear Physics Project in Romania
Author: Kazuo A Tanaka\textsuperscript{1}
\textsuperscript{1} Extreme Light Infrastructure–Nuclear Physics
Corresponding Author: email@example.com
Since the chirped pulse amplification scheme [1] changed the game in high-energy-density physics, the available laser intensity has kept increasing; it can now reach $10^{23}$ W/cm$^2$ or even higher and can deliver radiation beyond what was previously available at nuclear facilities. In order to make full use of this capability, a laser-centered, distributed pan-European research infrastructure involving ultra-intense laser technologies with ultra-short pulses was launched through the Extreme Light Infrastructure (ELI) project, at the state of the art and beyond.
The European Strategy Forum on Research Infrastructures (ESFRI) selected in 2006 a proposal to construct a 200 J laser system with intensities up to $10^{22}-10^{23}$ W/cm$^2$, called ELI, at the site of Bucharest-Magurele, Romania. The other two large-scale high-intensity ELI laser facilities are being built in the Czech Republic and in Hungary [2].
The scientific research at ELI-NP includes two areas in which only few experimental results have been reported until now. The first one is 10 PW laser-driven nuclear physics, strong-field quantum electrodynamics and associated vacuum effects. The second area is the study driven by a Compton-backscattering gamma beam ($< 20$ MeV), a combination of laser and accelerator technology at the frontier of knowledge. Typical experiments planned for the early stage [3] will be introduced together with the system overview.
Extreme Light around the world (1) / 54
Overview of the ELI-ALPS project and its few cycle phase controlled laser sources
Author: Karoly Osvay
Corresponding Author: firstname.lastname@example.org
The major laser sources of the Attosecond Light Pulse Source of the Extreme Light Infrastructure (ELI-ALPS) deliver pulses with unique parameters: unparalleled fluxes, extremely broad bandwidths and sub-cycle control of the generated fields. The high repetition rate (HR) system delivers TW peak power, < 6 fs pulses at 100 kHz. The 1 kHz repetition rate single-cycle (SYLOS) system provides 20 TW pulses with a pulse duration of < 5 fs. The petawatt-class high-field (HF) laser will operate at 10 Hz repetition rate with 17 fs pulse duration. The above laser systems operate in a wavelength window of 600 nm - 1400 nm. These lasers are complemented by the mid-infrared (MIR) laser system, which provides tunable (2.5 µm - 3.9 µm) sub-4-cycle laser pulses at 100 kHz repetition rate with 15 W average power. High-energy THz pulses at 50 Hz repetition rate are to be generated with a half-joule, half-picosecond laser system at 1.03 µm.
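As a rough cross-check of the quoted figures (a sketch; $E \approx P_{\mathrm{peak}} \tau$ assumes near-Gaussian pulses and ignores shape factors, and all input numbers are the nominal values quoted above):

```python
# Illustrative pulse-energy / average-power estimates for the three main
# ELI-ALPS systems quoted above: (peak power W, duration s, rep rate Hz).
systems = {
    "HR":    (1e12,  6e-15, 1e5),
    "SYLOS": (20e12, 5e-15, 1e3),
    "HF":    (1e15, 17e-15, 10),
}
for name, (p, tau, rep) in systems.items():
    e = p * tau  # rough pulse energy
    print(f"{name}: ~{e:.3g} J/pulse, ~{e * rep:.3g} W average")
# -> HR ~6 mJ, SYLOS ~0.1 J, HF ~17 J per pulse (order-of-magnitude only)
```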
These exceptional laser sources will drive a set of secondary sources with incomparable characteristics, including light sources ranging from the THz to the X-ray spectral range and particle sources. The laser and secondary sources foreseen at ELI-ALPS will push the frontier of attosecond science in three main directions: coincidence measurements, investigations of highly nonlinear processes in the XUV and X-ray spectral range, and ultrafast valence-shell and core-electron dynamics. The photon sources of ELI-ALPS will also provide regional and national, basic and applied science projects with experimental opportunities in radiobiology, biophotonics, plasma and particle physics.
Activities in the purpose-designed and built building complex will start with the installation of the MIR and the HR laser systems in Spring 2017. Simultaneously, we will also start the assembly of the high harmonic beamlines, the THz laboratory, and the nanoplasmonic experiments. The first XUV bursts of light with attosecond duration are expected to be generated by the end of 2017.
Extreme Light around the world (1) / 59
Development of 10PW Super Intense Laser Facility at Shanghai
Author: Ruxin Li
Co-authors: Lianghong Yu \textsuperscript{1}; Shuai Li \textsuperscript{2}; Yuxin Leng \textsuperscript{1}
\textsuperscript{1} State Key Laboratory of High Field Laser Physics, Shanghai Institute Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
\textsuperscript{2} Siom
Corresponding Authors: email@example.com, firstname.lastname@example.org, email@example.com
We will introduce the 10 PW laser project SULF (Shanghai Superintense Ultrafast Laser Facility), including the design of the laser facility and its potential applications in physics and chemistry research.
As a first step, we have demonstrated the generation of 5.3 PW, 24 fs, 800 nm laser pulses from a CPA laser system in which a high-gain 150 mm Ti:sapphire amplifier delivered a measured output of 202 J with a 70 nm (FWHM) spectrum.
We will also outline the future plan towards a 100PW laser facility at Shanghai.
POSTER SESSION / 12
X-EUV Hartmann wavefront sensing.
Author: Nadezda Varkentina\textsuperscript{1}
Co-author: Dietmar Korn \textsuperscript{1}
\textsuperscript{1} Imagine Optic
Corresponding Authors: firstname.lastname@example.org, email@example.com
Imagine Optic has worked since the early 2000s on Hartmann sensors and has acquired a unique expertise in X-ray wavefront sensing, showing outstanding results. The very first experiment, performed on an Advanced Light Source beamline at Lawrence Berkeley National Laboratory, USA, in 2003, reached an accuracy better than $\lambda_{EUV}/120$ rms (0.11 nm) at a wavelength of 13.4 nm [1]. Later we also demonstrated several examples of optimization of X-EUV sources by closed-loop adaptive optics [2] and by active spatial filtering [3] of the amplified signal from high-harmonic generation. Another application consists in X-ray beamline alignment, both automatic [4] and manual [5].
We report on our recent developments and achievements in wavefront analysis in the extreme ultraviolet (EUV) and X-ray range via the Hartmann technique. Our sensor consists of an array of rotated square apertures [6] that create a diffraction pattern on the surface of a CCD camera. The signal measured by the CCD contains both the amplitude and the phase of the sampled beam. To fully characterize the sensor, accuracy and sensitivity measurements over the specification range are performed. The standard wavelength range of the measurements is from 4 to 40 nm in the EUV and from 1 to 10 keV in soft X-rays, while keeping a previously acquired calibration at a single wavelength. We also present a custom wavefront-sensor design. In 2016, an EUV sensor calibrated over a very large bandwidth (20 nm to 120 nm), with a best accuracy of $\lambda_{EUV}/67$ rms, was developed for the ELI project in the Czech Republic.
[1] P. Mercère et al., Opt. Lett. 28, 17 (2003).
[2] J. Gauthier et al., Eur. Phys. J. D 48, 3 (2008).
[3] J. P. Goddet et al., Opt. Lett. 34, 2438 (2009); J.P. Goddet et al., Opt. Lett. 34, 16, 2438-2440 (2009); J.P. Goddet et al., Opt. Lett., 32, 11, 1498-1500 (2007); L. Li et al., Opt. Lett. 38, 4011 (2013).
[4] P. Mercère et al, Optics Letters, 31, 2 (2006).
[5] L. Raimondi et al., NIMA 710:131 (2013).
[6] Imagine Optic Patent N° US 20040196450 A1.
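A quick numerical check of the quoted accuracy figure (a sketch using only numbers from the abstract):

```python
# rms accuracy is quoted as a fraction of the working wavelength
lambda_euv_nm = 13.4
print(f"{lambda_euv_nm / 120:.3f} nm rms")  # ~0.112 nm, matching the quoted 0.11 nm
```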
POSTER SESSION / 9
New generation of deformable mirror dedicated to ultra high intensity laser
Author: Nadezda Varkentina\textsuperscript{1}
\textsuperscript{1} Imagine Optic
Corresponding Author: firstname.lastname@example.org
Like in astronomy, Adaptive Optics (AO) has recently become a standard feature at modern ultra-high-intensity laser facilities. AO aims at reaching both maximum peak energy and intensity by correcting both the thermal effects induced in the amplification stages and the aberrations induced by the optical components of the laser chain. The new generation of ultra-high-intensity femtosecond petawatt and above class lasers requires new wavefront-correction features. New challenges for AO consist in overcoming the constraints of potentially bigger diameters, larger aberration strokes, faster optics, higher risk of damaging optical components, and faster and easier maintenance.
Imagine Optic, a pioneer company in the development of AO solutions, proposes HASO wavefront sensors and ILAO STAR deformable mirrors with the control and analysis software WaveView Suite, dedicated to ultra-high-intensity lasers. Here we will present the new generation of deformable mirror, ILAO Star, which uses a new patented design of mechanical actuators.
We will present these new actuators and their improvements compared to the previous generation. We will pay special attention to the principles of actuation, the design of the deformable mirror and its complete characterization. We will also introduce the advantages brought by these new actuators when integrated in the ILAO Star deformable mirror. The main advantages are a better mechanical efficiency and better thermal stability at a faster speed. Wide correction capabilities are achievable for beam diameters from 20 mm to 500 mm of useful diameter, working from 0° to 45° angle of incidence. Easier and safer maintenance has also been proven, with replaceable mechanical actuators that preserve the deformable-mirror membrane.
PRESENTATION AVAILABLE UPON DIRECT REQUEST AT AUTHOR’S ATTENTION AT email@example.com
POSTER SESSION / 13
Toward spatially uniform pulse compression of top-hat beams at the subpetawatt level of peak powers
Author: Aleksandr Voronin\textsuperscript{1}
\textsuperscript{1} M.V.Lomonosov Moscow State University
Corresponding Author: firstname.lastname@example.org
High-peak-power laser beams with a top-hat transverse intensity profile are shown to offer unique options for the spectral and temporal nonlinear-optical transformations of high-intensity laser fields, promising a new technology of spatially uniform pulse compression at the subpetawatt level of peak powers.
POSTER SESSION / 8
Dispersion management of the front end in SULF
Authors: Shuai Li\textsuperscript{1}; Yuxin Leng\textsuperscript{1}
\textsuperscript{1} Shanghai Institute of Optics and Fine Mechanics
Corresponding Authors: email@example.com, firstname.lastname@example.org
Recently, a worldwide race has begun to build petawatt or even higher-power laser systems with pulse durations of a few tens of femtoseconds. Such ultrahigh-peak-power laser systems greatly benefit fundamental research areas such as the acceleration of charged particles (electrons and protons) and the generation of high-energy photon (X-ray and γ-ray) sources. The Shanghai Superintense Ultrafast Laser Facility (SULF) is a large-scale project aimed at delivering 10 PW laser pulses. The CPA Ti:sapphire laser system consists of a front end, a power amplifier, a final three-pass booster amplifier, and a grating compressor. The front end starts from a commercial 1 kHz CPA laser system (Astrella, Coherent Inc.) delivering 95 µJ, sub-30 fs pulses. The pulses are injected into a pulse cleaner based on cross-polarized wave (XPW) generation. The spectral width of the cleaned pulses is over 65 nm (FWHM), which can support sub-15 fs pulse duration [1]. The cleaned pulses, with an energy of 20 µJ, are stretched to about 2 ns by an aberration-free Offner-triplet-type stretcher with a 1480 lines/mm gold-coated grating (Jobin Yvon, Inc.). Following the stretcher, the stretched pulse is amplified in a regenerative amplifier and a three-stage multi-pass amplifier chain such that the energy reaches 7 J at a 1 Hz repetition rate.
To balance the spectral phase in the laser system, a double-pass grism pair is inserted into the petawatt laser system to compensate the residual dispersion up to the fourth order. The grism pair is inserted between the stretcher and the regenerative amplifier. Using this technique, the spectral phase distortion over the spectrum is less than 2.5 rad. The pulse duration is compressed to 22.2 fs, which is only 1.03 times the Fourier-transform limit. Experimental results show that a near-Fourier-transform-limited pulse can be achieved. In future work, the grism pair and the main compressor will be used cooperatively to achieve the dispersion management of the 10 PW laser system.
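A quick consistency sketch of the two bandwidth claims above, assuming Gaussian pulses (time-bandwidth product 0.441) and a centre wavelength of 800 nm (an assumption; the abstract does not state it explicitly):

```python
# Transform-limit estimate for the XPW-cleaned spectrum quoted above.
c = 3e8                       # speed of light, m/s
lam, dlam = 800e-9, 65e-9     # assumed centre wavelength; quoted spectral FWHM
dt_ft = 0.441 * lam**2 / (c * dlam)
print(f"Fourier-transform limit ~ {dt_ft * 1e15:.1f} fs")  # ~14.5 fs -> "sub-15 fs"

# The quoted 22.2 fs at 1.03x the FT limit implies, after gain narrowing:
print(f"amplified FT limit ~ {22.2 / 1.03:.1f} fs")        # ~21.6 fs
```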
**POSTER SESSION / 16**
**Particle-in-Cell Simulation of X-ray Wakefield Acceleration and Betatron Radiation in Nanotubes**
**Author:** xiaomei zhang\(^1\)
**Co-authors:** Gerard Mourou \(^2\); Jonathan Wheeler \(^2\)
\(^1\) State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences
\(^2\) DER-IZEST, École Polytechnique
**Corresponding Authors:** email@example.com, firstname.lastname@example.org, email@example.com
Laser wakefield theory shows that, for a given laser, the energy gain and the accelerating length are both inversely proportional to the plasma density [1]. This means that the lower the gas density, the longer the acceleration distance, which is undesirable in reaching ultra-high energies. The recently proposed generation of X-ray laser pulses provides an attractive way to achieve ultrahigh energy [2]. Benefitting from the much higher critical density, which is inversely proportional to the square of the laser wavelength, solid-density materials can be chosen for the X-ray-laser-driven case [3]. On the other hand, functional nanomaterials such as carbon nanotubes have a large degree of dimensional flexibility, and wakefields higher than 10 TV/m are possible. Accordingly, compact structures to obtain ultrahigh energy gain can in principle be realized through state-of-the-art nanotube technology. Motivated by these considerations, we explored the X-ray wakefield accelerator in a nanotube. We investigate the acceleration due to a wakefield induced by a coherent, ultrashort X-ray pulse guided by a nano-scale channel inside a solid material. By two-dimensional particle-in-cell computer simulations, we show that an acceleration gradient of TeV/cm is attainable. This is about three orders of magnitude stronger than that of conventional plasma-based wakefield acceleration, which implies the possibility of an extremely compact scheme to attain ultrahigh energies. In addition to particle acceleration, this scheme can also induce the emission of high-energy photons at $O(10-100)$ MeV. Our simulations confirm such high-energy photon emission, which contrasts with that induced by the optical-laser-driven wakefield scheme. In addition, the significantly improved emittance of the energetic electrons is discussed.
[1] T. Tajima, J.M. Dawson, Laser Electron-Accelerator, Phys. Rev. Lett. 43, 267 (1979).
[2] G. Mourou et al., Single cycle thin film compressor opening the door to Zeptosecond-Exawatt physics, Eur. Phys. J. Special Topics 223, 1181 (2014).
[3] T. Tajima, Laser acceleration in novel media, Eur. Phys. J. Special Topics 223, 1037 (2014).
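For context, a minimal sketch of the standard cold-plasma scalings invoked above (following Refs. [1-3]; an illustrative summary, not material from the talk itself):

\[
n_c = \frac{\varepsilon_0 m_e \omega^2}{e^2} \propto \lambda^{-2}, \qquad
E_{\mathrm{wake}} \sim \frac{m_e c\,\omega_p}{e} \propto \sqrt{n_e}, \qquad
\Delta W \propto n_e^{-1}.
\]

Moving from $\lambda \approx 1\,\mu$m ($n_c \approx 10^{21}$ cm$^{-3}$) to $\lambda \approx 1$ nm thus raises the usable plasma density by roughly six orders of magnitude, consistent with the $> 10$ TV/m fields and TeV/cm gradients quoted above.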
Observation of space parity gravitational violation in laser-Compton scattering
Author: Vahagn Gharibyan\textsuperscript{1}
\textsuperscript{1} DESY
Corresponding Author: firstname.lastname@example.org
Gravity independence on rotations or spin direction is postulated in general relativity and experimentally constrained for low energy, non-relativistic matter.
Evidence for high-energy CP violation in a gravitational field has recently been found in the two HERA Compton polarimeter spectra measured with electron and positron beams.
Here I report analysis results of 838 thousand spectra, acquired during the 2004-2007 running period and tagged by laser polarization state.
The tagged spectra allow the charge (C) and space (P) parity contributions to be separated. While the C asymmetry is contaminated by changes of the accelerator parameters between the electron and positron runs, the frequent laser-helicity flips eliminate most of the potential systematic errors.
The measured Compton-edge energy asymmetry induced by the laser-helicity change is as high as $(4.62 \pm 0.06) \times 10^{-5}$, which corresponds to a helicity-dependent difference of gravitational potentials of $(8.1 \pm 0.1) \times 10^{-15}$.
The measured sign implies a stronger gravitational coupling for left-helicity particles.
In the case of universality of the observed coupling, i.e. energy independence, gravity will induce $3.69 \pm 0.05$ GHz and $2.01 \pm 0.028$ MHz spin resonances in nuclei and atoms, respectively.
POSTER SESSION / 50
Multifilamentation: interaction and reduction of filaments as a nonlinear process
Author: Lubomir Kovachev\textsuperscript{1}
\textsuperscript{1} Institute of Electronics, Bulgarian Academy of Sciences
Corresponding Author: email@example.com
The recent experiments with high-power Ti:sapphire laser pulses demonstrate that it is not possible to produce a homogeneous beam pattern. Hot zones are situated across the beam cross-section. Each hot zone self-focuses into a filament if the intensity and power are high enough. Each of the multiple filaments has a core intensity clamped down to that of a single filament, of the order of 0.5-5 TW/cm$^2$. These intensities are two to three orders of magnitude less than the intensity needed for defocusing by ionization. On the other hand, filaments without ionization of the medium have been observed in silica, liquids and other materials. Filaments with such intensities attract each other and exchange energy during their propagation. The final result is that at long distances only a few of them survive.
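A short sketch of the self-focusing threshold behind the hot-zone collapse described above (Marburger's critical-power formula; the $n_2$ values are typical literature figures, quoted here as assumptions):

```python
import math

# Critical power for Kerr self-focusing, P_cr = 3.77 * lam^2 / (8*pi*n0*n2).
lam = 800e-9  # Ti:sapphire wavelength, m
for medium, n0, n2 in [("air", 1.0, 3e-23),            # n2 in m^2/W (assumed)
                       ("BK7-like glass", 1.5, 2.5e-20)]:
    p_cr = 3.77 * lam**2 / (8 * math.pi * n0 * n2)
    print(f"{medium}: P_cr ~ {p_cr:.2g} W")
# -> a few GW in air, a few MW in glass: each hot zone exceeding P_cr
#    collapses into its own filament.
```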
The absence of ionization in these processes forced us to seek other linear and nonlinear mechanisms for the description of the above-mentioned effects and to answer the following basic questions:
1. What is the nature of the diffraction of broad-band (attosecond and phase-modulated femtosecond) pulses?
2. What is the physical process that leads to the asymmetrical spectral broadening of a powerful laser pulse in the early moments of filamentation?
3. What are the mechanisms of merging and energy exchange between the filaments?
Recently, in [1] and in some previous works, we tried to answer the first question, solving the problem analytically and numerically. The result is that broad-band pulses diffract in a new regime, with semi-spherical deformation of the intensity profile.
In [2] we address the second question, providing an experimental and theoretical investigation of the first picoseconds of the formation of a white continuum from a 100 fs laser pulse in 0.5 cm of BK7 glass. We point out that the asymmetric spectral broadening of femtosecond laser pulses towards higher frequencies in isotropic media is due to the nonlinear effect of cascade generation, with a THz spectral shift for solids and a GHz spectral shift for gases. This shift is equal to three times the carrier-to-envelope frequency. The process works simultaneously with four-photon parametric wave mixing.
To answer the third question, we investigated in detail in [3] and [4] the process of nonlinear attraction between the pulses due to cross-phase modulation, and the energy exchange due to degenerate four-photon mixing. The results were compared with the experimental results of other authors and show very good agreement.
In our laboratory there are also first results on the nonlinear rotation of the electric-field vector during filament propagation.
[1] A. M. Dakova, L. M. Kovachev, K. L. Kovachev, D. Y. Dakova, “Fraunhofer type diffraction of phase-modulated broad-band femtosecond pulses”, Journal of Physics Conference Series 594 012023 (2015).
[2] D. Georgieva, L. Kovachev, N. Nedyalkov, “Avalanche parametric conversion in the initial moment of filamentation”, Proc. SPIE, 18th International School on Quantum Electronics: Laser Physics and Applications, Sozopol (26-31 September 2016), accepted.
[3] L. M. Kovachev, D. A. Georgieva and A. M. Dakova, “Influence of the four-photon parametric processes and cross-phase modulation on the relative motion of optical filaments”, Laser Phys. 25 105402 (7pp), (2015).
[4] Daniela A Georgieva and Lubomir M Kovachev, “Energy transfer between two filaments and degenerate four-photon parametric processes”, Laser Physics, 25 035402 (7pp) (2015).
POSTER SESSION / 17
Ti:Sapphire CPA booster amplifier for a 5 PW laser system
Author: Lianghong Yu\textsuperscript{1}
Co-author: Yuxin Leng \textsuperscript{1}
\textsuperscript{1} Shanghai Institute of Optics and Fine Mechanics
Corresponding Authors: firstname.lastname@example.org, email@example.com
In recent years, 10 petawatt (PW) laser systems have become a hot topic in the field of laser technology. Many countries and laboratories are building, or planning to build, a 10 PW laser system [1, 2]. The CPA technique, particularly with Ti:sapphire (Ti:S) systems, is still the main method to achieve PW and 10 PW-level laser pulses owing to its high efficiency and stability [3, 4]. However, transverse amplified spontaneous emission (TASE) and parasitic lasing (PL) within the booster-amplifier volume are the main barriers to higher energy amplification when larger-aperture Ti:S crystals are pumped at higher pump fluence and energy. Here we report on the energy amplification for a 5 PW laser system based on Ti:S CPA, using a new method to restrain the PL. The amplified energy was up to 202 J from a Ti:S crystal with a diameter of 150 mm. The pulse was compressed to 24 fs and the peak power was up to 5.3 PW. After being amplified by a regenerative amplifier and three multi-pass amplifiers, the energy of the stretched pulse was about 7 J and the spectral width was 90 nm, from 750 nm to 840 nm. The pulse was then sent to a Ti:S amplifier with a diameter of 80 mm. The pump energy was provided by frequency-doubled Nd:glass amplifiers at 527 nm. The amplified pulse energy was about 48 J for a pump energy of 100 J. The amplified pulse was expanded to a diameter of 120 mm and then injected into the 150-mm-diameter Ti:S booster amplifier, pumped with 320 J over a 150 mm diameter. To suppress the PL in the booster amplifier, we used Cargille Series M refractive-index liquid doped with an absorber (IR 140) as the cladding material. In addition, we used a temporal-dual-pulse pump as an important method to suppress the PL. After optimizing the time delay between the pump pulses and the signal pulse, the PL was suppressed when the 150-mm-diameter Ti:S was pumped with 320 J. The amplified energy was up to 202 J and the conversion efficiency was about 49%. The amplified spectral width was about 85 nm, from 750 nm to 835 nm.
The amplified pulse from the booster amplifier was expanded to 300 mm by an achromatic spatial filter and sent to a four-grating compressor. After compression, the pulse duration was 24 fs. The total transmission of the spatial filter and the compressor was 64% and the peak power was 5.3 PW.
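A small bookkeeping sketch using only the numbers quoted in this abstract:

```python
# Booster-amplifier energy bookkeeping (all figures as stated above).
E_in, E_pump, E_out = 48.0, 320.0, 202.0     # J: seed, pump, amplified output
extraction = (E_out - E_in) / E_pump
print(f"pump-to-signal extraction ~ {extraction:.0%}")       # ~48%, cf. "about 49%"

throughput, tau = 0.64, 24e-15               # spatial filter + compressor; duration
print(f"peak power ~ {throughput * E_out / tau / 1e15:.1f} PW")  # ~5.4 PW, cf. 5.3 PW
```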
**Extreme Light around the world (2) / 57**
**Generation of multiple isolated attosecond pulses**
**Author:** Kyung Taec Kim\(^1\)
\(^1\) IBS / GIST
**Corresponding Author:** firstname.lastname@example.org
Isolated attosecond pulses are essential tools used in the time-resolved studies of some of the fastest electronic processes in atoms, molecules and solids. These pulses are synthesized from high order harmonics generated in inert gases by intense, femtosecond laser pulses. During this process an attosecond pulse is generated in every half-cycle of the laser pulse, thus forming an attosecond pulse train. A possible way to isolate a single pulse from this pulse train is the lighthouse technique. This relies on the generation of a laser pulse with rotating wavefront, producing multiple isolated attosecond pulses which propagate in different directions. These pulses become spatially separated in the far field if their divergence is smaller than the difference between their propagation angles. We show that the nonlinear propagation of the laser pulse through the generating gas medium can enhance the angular separation of the generated attosecond pulses through dynamic wavefront rotation.
**Extreme Light around the world (2) / 33**
**TBD**
**Extreme Light around the world (2) / 55**
**Relativistic Flying Mirror for Extreme Light Sciences**
**Author:** Masaki Kando\(^1\)
\(^1\) National Institutes for Quantum and Radiological Science and Te
**Corresponding Author:** email@example.com
A relativistic flying mirror is a breaking wake wave excited in tenuous plasma by an intense, short laser pulse. This highly nonlinear wave has a singularity in electron density and a velocity nearly equal to the speed of light. Thus the electrons can work as a partially reflecting mirror moving nearly at the speed of light. This concept was first proposed by Bulanov et al. So far some of the features of the flying mirror have been confirmed in several experimental campaigns, proving the frequency upshifting and reflectivity. We report our recent experimental results and discuss possible applications in extreme light sciences.
Short overview of laser physics and applications research activities of Institute of Electronics – Bulgarian Academy of Sciences
Author: Ekaterina Borisova\textsuperscript{1}
\textsuperscript{1} Institute of Electronics
Corresponding Author: firstname.lastname@example.org
This report presents a short overview of the recent research activities in the field of laser physics and applications of the Institute of Electronics at the Bulgarian Academy of Sciences (IE-BAS).
The Institute was established in 1963 as a non-profit state organization conducting research, education and dissemination of scientific knowledge in the fields of physical electronics, photonics and quantum electronics, and radio sciences. IE-BAS soon evolved into a leading scientific institution in these areas of applied physics and engineering within the Bulgarian Academy of Sciences and in the country.
The IE-BAS was where the first Bulgarian laser, lidar, micro-channel electron-optical converter, optical magnetometers, laser therapeutic and diagnostic systems for biomedical applications were built, followed by the development of several advanced e-beam technologies, novel types of optical gas sensors, pioneering achievements in laser nanostructuring and nanoparticle formation, laser and plasma high technologies, biomedical photonics and applications.
IE-BAS is the host and main organizer of the biannual International School on Quantum Electronics “Laser Physics and Applications”, which has a more than 38-year history and has become a well-known scientific event for the training of young researchers and PhD students, covering different aspects of laser-matter interactions, laser spectroscopy and metrology, laser remote sensing and ecology, lasers in biology and medicine, and laser systems and nonlinear optics.
Through the years, the Institute’s research field and structure have developed dynamically in response to the changes taking place in the main trends in applied physics and technologies: materials science and technologies; physics of nano-sized objects and nanotechnologies, nanoelectronics, photonics, opto-electronics, quantum optics, environmental physics and monitoring, biomedical photonics and medical laser systems.
Nowadays, IE-BAS is a leading research organization in Bulgaria in the field of laser physics and applications. The research in photonics and quantum electronics comprises theoretical and experimental studies of the interaction of short and ultrashort laser pulses with matter; development of novel nano-structuring technologies; laser thin-film deposition and treatment; light-induced absorption and transmission in alkaline vapors; development of complex laser systems for the analysis and modification of semiconducting and superconducting materials; theoretical and experimental investigation of nonlinear optical phenomena; and biomedical photonics.
The socio-economic impact of research infrastructures: a generic evaluation framework and insights from selected case studies
Author: Franck BROTTIER\textsuperscript{1}
\textsuperscript{1} Euroopportunities
Corresponding Author: email@example.com
After a short reminder of the evolution of growth theories, the presentation will focus on the role research infrastructures can play in the framework of such theories. Using this framework, the presentation will describe the different channels through which a research infrastructure contributes to economic growth, and the different parameters to take into account for a research infrastructure to deliver maximum social and economic impact. The presentation will then discuss the specific modalities found for the different research infrastructures the author has worked for or with.
ELI-NP as a crucible for innovation: the ELAP project
Author: Federico Canova\textsuperscript{1}
Co-author: Gérard Mourou \textsuperscript{2}
\textsuperscript{1} ELI-DC
\textsuperscript{2} IZEST
Corresponding Authors: firstname.lastname@example.org, email@example.com
ELAP is the Extreme Light Applications Park of the ELI-NP facility in Magurele (RO).
Extreme Light physics is a novel approach to laser-matter interaction, made possible by the groundbreaking works of Prof. T. Tajima (UCI, CA, USA) and Prof. G. Mourou (IZEST-Ecole Polytechnique, FR). The unique characteristic of the extreme light laser is to produce enormous amounts of energy and pressure, enough to rip matter apart, releasing sub-atomic particles such as protons moving close to the speed of light. The core activities of ELAP are based on the breakthroughs in nuclear physics made possible by extreme light, especially in the field of nuclear medicine, but also in other real-life applications such as nuclear waste disposal.
Since the preparation of the ELI-NP white book [1], the project team has identified the need for an applications park to transform the scientific results into real-life applications. ELAP is the natural outcome of such an ambitious and unique project.
The present project for an Extreme Light Applications Park was defined by the brainstorming activity of the IZEST laboratory team (Ecole Polytechnique, France), the ELI-NP (Magurele, Romania) scientific team and private sector representatives (industry, investment).
Within the EUCALL project, senior scientists from SRS, FEL and HPL facilities will join to identify novel research opportunities, methodologies, and technologies. Strategies will be implemented towards optimum use of the laser light facilities, promotion of innovation, and coordinated user training and experience exchange. These activities actively help innovation projects like ELI-ELAP by supporting the following:
- collect information about scientific opportunities and instrument implementation,
- collect information about innovation opportunities, technical transfer, industrial access to the research facilities, success stories of laboratories spin-offs and start-ups,
- study of a few cases of existing cross-community collaborations.
In conclusion, ELAP answers, in the short term, the need to build applications on the scientific novelties discovered at the ELI-NP facility. In the long term, the creation of an innovation activity in the environment of Magurele will represent a unique advantage for the development of the ELI-NP project and of Romania in general.
The author acknowledges the European Cluster of Advanced Laser Light Sources (EUCALL) which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 654220
[1] The White Book of ELI Nuclear Physics, Bucharest-Magurele, Romania, The ELI-Nuclear Physics working groups
Socio-Economics for Energy: Extreme Laser Pulses for Boron Fusion Power Reactors
Author: Heinrich Hora\textsuperscript{1}
H. Hora*
Department of Theoretical Physics, University of New South Wales, Sydney, Australia
Application of extreme laser pulses of less than a picosecond duration, but with powers between peta- and exawatt, is an example of direct action against climatic change by generating fusion energy from boron without nuclear radiation problems. Fusion of hydrogen nuclei (protons) with the boron-11 isotope (HB11) was long considered extremely difficult or impossible, but it is of high interest because it is environmentally clean, with less radioactivity per generated energy than from burning coal. This changed radically when it was discovered in 2008 that non-thermal ignition of fusion in uncompressed fuel can convert the energy of extremely powerful, very short laser pulses directly into the ultrahigh acceleration of macroscopic plasma blocks. This kind of ignition is similar to heavy-ion fusion [1], but the extreme laser pulses now offer the necessary non-thermal conditions for plasma acceleration techniques. This is related to the science-based results on a unique avalanche multiplication of HB11 reactions in the presentation by Moustaizis et al., with the discovery of reaction gains increased by many orders of magnitude, measured [2] and explained by theory [3]. This paper presents details for designing a fusion reactor using the combination of the accelerated plasma blocks with ultrahigh laser-generated magnetic fields [4]. An advantage for the technology consists in the fact that the cylindrical geometry for trapping of the reaction volume permits the application of plane-wave interaction of the ~petawatt laser pulses as a direct-drive ignition process. This avoids the complications known from indirect drive and more general geometries [5][6]. The HB11 reaction energy carried by the alpha particles (helium nuclei) is converted by electrostatic fields, with nearly no heat losses, into electric power with standard million-volt technology. Economically profitable power generation without nuclear radiation problems appears to be possible, including socio-economic aspects.
*Collaboration with G. Korn, S. Eliezer, N.Noaz, P. Lalousis, P. Moustaizis, G.H. Miley, G.J. Kirchhoff, G. Mourou, R.F.J. Barlow, D.H.H. Hoffmann et al.
[1] Rubbia Carlo, Laser and Particle Beams 11, 391 (1993)
[2] Hora Heinrich, Korn G. et al., Laser and Particle Beams 33, 607 (2015)
[3] Eliezer Shalom, Hora H., Korn G. et al. Physics of Plasmas 23, 050704 (2016)
[4] Hora H. and Kirchhoff G.J. PCT world patent WO 2015/144190 A1 (23 March 2014)
[5] Hora H. Chapter 10 of Laser Plasma Physics, Second Edition, SPIE Press, Bellingham WA, 2016
[6] Hora H. et al Conference Abstract 3rd ICHDP 23-25 SEP 2016, ShenZhen/China
Socio-Economic Impacts / 4
Laser-Driven High Energy Alpha Beam Interaction with Solid p11B to Achieve Fusion Ignition by Alpha Heating
Author: Stavros Moustaizis¹
¹ Technical University of Crete
Corresponding Author: firstname.lastname@example.org
S.D. Moustaizis¹, P. Lalousis², H. Hora³, S. Eliezer⁴ and G. Korn ⁵
¹ Technical University of Crete, Lab of Matter Structure and Laser Physics, Chania, Crete, Greece
² Institute of Electronic Structure and Laser FORTH, Heraklion, Greece
³ Department of Theoretical Physics, University of New South Wales, Sydney 2052, Australia
⁴ Nuclear Fusion institute, Polytechnique University of Madrid, ETSII, Madrid 28006, Spain
⁵ Institute of Physics, ELI-Beamlines, CSO Prague, Czech Republic & Max-Planck-Institute for Quantum Optics, Garching, Germany
We present for the first time numerical results on the fusion ignition process produced by laser-driven high-energy alpha-beam interaction with compressed solid p11B fuel. Relevant results on alpha measurements from nuclear reactions induced by laser-driven proton-beam interaction with 11B plasma [1, 2] justify extensive numerical investigations of the fusion-burning feasibility of p11B fuel using a multi-fluid code [3, 4, 5, 6]. The consideration of a new physical process, termed the alpha avalanche effect [7, 8], contributes to enhancing the alpha heating and consequently the reaction rate of the fusion process. The p11B nuclear fusion reaction is attractive not only because it is an aneutronic reaction but also because it has the advantage of producing three alphas with a total energy of 8.9 MeV. The main difficulty for fusion ignition in solid-density p11B targets is that the cross section for the nuclear fusion reaction is efficient only for energies higher than 250 keV. The proposed concept to overcome this difficulty is to inject an energetic alpha beam into the compressed target and to simulate the temporal evolution of the temperature and the reaction rate of the p11B fuel due to the alpha-heating effect. The initial high-energy alpha beam could be produced by ultra-short, high-intensity laser-beam interaction with thin DT solid targets (see Fig. 3 of Ref. [9]). The high-energy alpha beam interacts with a p11B plasma compressed to 4-10 times the initial solid density and with a low initial temperature. The numerical results show that the maximum of the reaction rate is achieved tens of ns after the injection of the alpha beam. After this time interval the reaction rate decreases due to the depletion of the plasma ion density. The temperature corresponding to the maximum of the reaction rate is about 200 keV. The time corresponding to the maximum reaction rate depends strongly on the density of the injected alpha beam, the initial plasma temperature and the compression factor of the p11B fuel. In the near future, petawatt and exawatt-zettawatt laser systems [10, 11], and especially the IZEST project and the fibre-based laser systems investigated for the ICAN project, will be able to attain intensities up to $10^{25}$ W/cm$^2$ and $10^{29}$ W/cm$^2$ with pulse durations of the order of attoseconds or zeptoseconds. These projects will enable unique applications in laser-driven ion-beam acceleration with high current [12, 13, 14]. The numerical results of this work promote the development of new high-efficiency, high-power fibre laser systems such as ICAN in order to generate high-density alpha beams.
[1] Labaune, C., S. Depierreux, S. Goyon, C. Loisel, G. Yahia & J. Rafelski Nature Communications 4, 2506 (2013).
[2] G. Korn, D. Margarone, A. Picciotto, Boron-Proton Nuclear-Fusion Enhancement induced in Borondopped Silicon Targets by low Contrast Pulsed Laser Lecture at the IZEST conference, Paris, Romanian Embassy, 19 September 2014.
A. Picciotto et al. Physical Review X 4, 031030 (2014).
[3] P. Lalousis, H. Hora and S. Moustaizis, Laser and Particle Beams 32, 499 (2014).
[4] H. Hora, G. Miley, P. Lalousis, S. Moustaizis, K. Clayton and D. Jonas, Efficient Generation of Fusion Flames Using PW-ps Laser Pulses for Ultrahigh Acceleration of Plasma Blocks by Nonlinear (Ponderomotive) Forces, IEEE Transactions on Plasma Science 42, 640-644 (2014).
[5] H. Hora, P. Lalousis, Shalom Eliezer, G. H. Miley, S. Moustaizis, G. Mourou, 10 kilotesla magnetic field compression combined with ultra-fast laser accelerated plasma blocks for initiating fusion flames, abstract of an oral presentation at the Physics Congress, Canberra, Australia; see arXiv:1412.4190 (11 December 2014).
[6] P. Lalousis, S. Moustaizis, H. Hora and G.H. Miley, Kilotesla Magnetic Assisted Fast Laser Ignited Boron-11 Hydrogen Fusion with Nonlinear Force Driven Ultrahigh Accelerated Plasma Blocks, Journal of Fusion Energy 34, 62-67 (2015).
[7] H. Hora et al., Petawatt laser pulses for proton-boron high gain fusion with avalanche reactions excluding problems of nuclear radiation, SPIE Conf. Proceedings No. 9515, paper 9515-44 (2015).
[8] S. Eliezer, H. Hora, G. Korn, N. Nissim and J. M. Martinez Val, “Avalanche proton-boron fusion based on elastic nuclear collisions”, Physics of Plasmas 23, 050704 (2016).
[9] R. Banati, H. Hora, P. Lalousis, S. Moustaizis, “Ultrahigh laser acceleration of plasma blocks with ultrahigh ion density for fusion and hadron therapy”, Jour. Intense Pulsed Lasers & Application in Adv. Physics 4, No. 1, 11-16 (2014).
[10]. T. Tajima and G. Mourou, “Zettawatt-exawatt lasers and their applications in ultrastrong field physics”, Physical Review Special Topics – Accelerators and Beams, Vol 5, 0310301, 2002
[11]. G. Mourou, T. Tajima and S. Bulanov, Reviews of Modern Physics 78, 309 (2006)
[12]. G. Mourou, T. Tajima and Jens Limpert, “The future is fibre accelerators”, Nature Photonics, Vol. 7, April 2013.
[13] T. Esirkepov, M. Borghesi, S. V. Bulanov, G. Mourou and T. Tajima, “Highly Efficient Relativistic-Ion Generation in the Laser-Piston Regime”, Phys. Rev. Lett. 92, 175003 (2004).
[14] T. Tajima, G. Mourou and S. Gales, “Arrangement for generating a proton beam and an installation for transmutation of nuclear wastes”, European Patent Application EP 2 709 429 A1 (2012).
Short Pulses and Quantum Beams / 11
Control of temporal intensity profile for PW laser pulses
Author: Efim Khazanov\textsuperscript{1}
Co-author: Gerard Mourou \textsuperscript{2}
\textsuperscript{1} IAP RAS
\textsuperscript{2} IZEST
Corresponding Authors: email@example.com, firstname.lastname@example.org
We present theoretical and experimental results on the enhancement of the temporal intensity profile of PW laser pulses. Techniques for peak-power increase based on self-phase modulation, second-harmonic generation and cascaded quadratic nonlinearity effects will be discussed. The specifics of their implementation at modern powerful laser facilities are investigated and results will be demonstrated.
Short Pulses and Quantum Beams / 7
Twisted Photons at DESY
Author: Vahagn Gharibyan\textsuperscript{1}
Co-authors: Dirk NOELLE \textsuperscript{1}; Gero Kube \textsuperscript{1}; Kay Wittenburg \textsuperscript{1}; Klaus BALEWSKI \textsuperscript{1}; Klaus FLOETTMANN \textsuperscript{1}; Siegfried Schreiber \textsuperscript{1}
\textsuperscript{1} DESY
Corresponding Authors: email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com
Vortex light with orbital angular momentum (OAM) has successfully been generated in many laboratories. So far, however, the energies of particles with OAM, called twisted photons or electrons, remain below 0.1 keV. Here we report on the first high-energy vortex photon beam, obtained via Compton scattering of topologically charged (OAM = -2) 2.3 eV laser photons on the PETRA 6 GeV electrons. According to angular momentum conservation, the scattered twisted photons have topological charge 2 near their maximal energy of 588 MeV. This opens up an unprecedented possibility for direct quadrupole excitation of nuclei with evenly-charged twisted photons. After modifying the laser entrance pipe, we plan to expand the energy range of twisted photons from 10 MeV to 1.1 GeV. That will allow exploring multipole resonances in nuclei at MeV energies, as well as quark matter inside the nucleons at GeV energies. The latter could pave the way towards a quark gamma laser with twisted-photon pumping and cold nuclear fusion with altered charge distributions in deuterium and tritium. The PETRA twisted-photon setup allows fast flipping of the topological charge between ±2 or ±1 states, which could be used for measurements of the quarks’ orbital angular momentum via the scattering asymmetry of twisted photons; this could solve the long-standing nucleon spin puzzle. Employing twisted Compton scattering at FLASH and E-XFEL will expand the energy range of the twisted particles from keV to a few GeV, along with some possible applications. In perspective, extreme-power lasers will be required for the transition from proof-of-principle vortex-beam experiments to a novel field of science and technology with twisted particles.
Orbital Debris / 18
Deorbiting of Space Debris by Laser Ablation
Author: Toshikazu Ebisuzaki\textsuperscript{1}
\textsuperscript{1} RIKEN
Corresponding Author: firstname.lastname@example.org
In recent years, deorbiting by laser ablation has attracted increasing attention as almost the only effective method to remediate cm-sized space debris. According to Ebisuzaki et al. [1], the deorbiting operation is divided into three steps. First, a super-wide-field telescope detects the reflection of sunlight by a piece of space debris and roughly determines its position and direction of motion. Second, laser beams are directed towards the debris to determine its position, velocity and distance precisely. Finally, a high-intensity laser beam is focused onto the debris surface to induce laser ablation there. The reaction force of the ablation drives the debris to deorbit into the Earth’s atmosphere. In this talk, we propose a step-by-step approach for the technical demonstration of the mission and present the concept of a possible space mission dedicated to deorbiting cm-sized space debris by laser-ablation technology.
[1] Ebisuzaki et al., Demonstration designs for the remediation of space debris from the International Space Station, Acta Astronautica, 112 (2015), 102-113.
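An order-of-magnitude sketch of the energy budget for the final ablation step (all parameter values below are illustrative assumptions, not numbers from the abstract; Δv = C_m E / m is the usual momentum-coupling relation):

```python
# Laser-ablation deorbit budget, order of magnitude only.
C_m = 5e-5        # momentum coupling, N*s per J delivered (assumed, ns-ablation scale)
m_debris = 0.005  # kg, a few-cm fragment (assumed)
dv_needed = 150.0 # m/s, typical perigee-lowering from LEO (assumed)

E_on_target = m_debris * dv_needed / C_m
print(f"delivered laser energy needed ~ {E_on_target / 1e3:.0f} kJ")  # ~15 kJ
```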
Orbital Debris / 2
Update on XCAN, Ecole Polytechnique-Thales Coherent Beam Combination joint laser program
Co-author: jean-christophe chanteloup \textsuperscript{1}
\textsuperscript{1} Ecole Polytechnique
Corresponding Author: email@example.com
Ecole Polytechnique and Thales are engaged into the development of a femtosecond laser system based on the coherent combination of laser beams produced through a network of 61 amplifying optical fibers [1] known as XCAN [2].
Designing, integrating and operating efficiently a laser system based on such an innovative architecture is clearly a challenge, but major key issues have already been studied as part of Thales’ previous activities in this field [3], as well as within the ICAN (International Coherent Amplification Network) project [4] of the European Commission. This consortium included scientists from the communities of particle accelerators (International Committee for Future Accelerators) and ultra-intense lasers (International Committee on Ultra-High Intensity Lasers). Together, they defined the key laser parameters required for a prototype wakefield-based laser particle accelerator. ICAN helped to combine the expertise of high-energy and laser/fibre physicists, while ensuring a close connection with relevant industry experts in this field.
XCAN relies on the coherent beam combination of fibre chirped-pulse amplifiers operating at a 50 kHz repetition rate. Sub-µJ pulses of 300 fs duration will be temporally stretched to 2 to 5 ns and spatially demultiplexed into 61 channels. Parallel amplification will be performed through successive amplifying stages, the main one based on large-mode-area fibres. The output beams will be bundled into one single beam, and a small fraction will be used for collective phase measurement. A feedback loop relying on individual phase-control devices implemented in each channel will ensure maximum, stable combination efficiency. After pulse compression, the coherent addition of all individual phased beams is expected to provide ultrashort pulses of several mJ energy, and will pave the way for large-scale fibre-based coherent amplifying networks in the femtosecond regime.
[1] G. A. Mourou, D. Hulin and A. Galvanauskas, “The road to High Peak Power and High Average Power Laser: Coherent Amplification Network (CAN),” AIP Conference Proceedings, Third International Conference on Superstrong Fields in Plasmas, vol. 827, Dimitri Batani and Maurizio Lontano, 152-163 (2006).
[2] L. Daniault, S. Bellanger, J. Le Dortz, J. Bourderionnet, É. Lallier, C. Larat, M. Antier-Murgey, J.-C. Chanteloup, A. Brignon, C. Simon-Boisson et G. Mourou, "XCAN—A coherent amplification network of femtosecond fiber chirped-pulse amplifiers," The European Physical Journal Special Topics 224, no. 13 (2015): 2609-2613.
[3] J. Bourderionnet, C. Bellanger, J. Primot, A. Brignon, "Collective coherent phase combining of 64 fibers", Opt. Express. 19 (18), (2011).
[4] G. Mourou, B. Brocklesby, T. Tajima, J. Limpert, "The future is laser accelerator," Nature Photonics, 7, 258-261 (2013).
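A back-of-the-envelope sketch of the combination budget (only the 61-channel count and the "several mJ" target come from the abstract; the per-channel energy and combining efficiency below are assumptions):

```python
# Coherent-combination energy budget, illustrative numbers.
n_channels = 61      # from the abstract
e_channel = 100e-6   # J per fibre after the large-mode-area stage (assumed)
eta_comb = 0.8       # coherent combining efficiency (assumed)

e_combined = n_channels * e_channel * eta_comb
print(f"combined pulse energy ~ {e_combined * 1e3:.1f} mJ")  # ~4.9 mJ ("several mJ")
```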
PRESENTATION AVAILABLE UPON DIRECT REQUEST AT AUTHOR'S ATTENTION AT firstname.lastname@example.org
Orbital Debris / 48
Mini-Euso, a pathfinder on the ISS to detect [2 - 10 cm] debris
Author: Philippe Gorodetzyk\textsuperscript{1}
Co-author: Marco Casolino \textsuperscript{2}
\textsuperscript{1} APC lab - Paris Diderot University
\textsuperscript{2} Tor Vergata, Rome, Italy, and RIKEN, Japan
Corresponding Authors: email@example.com, firstname.lastname@example.org
Mini-EUSO is a small telescope to be installed inside the ISS in 2017. It is a pathfinder of the main JEM-EUSO mission, in which a large UV telescope is to be set outside the ISS to capture the most energetic cosmic rays by detecting the [300-400 nm] fluorescence light from nitrogen struck by the shower’s charged particles. Mini-EUSO looks at the Earth through a UV window in the Russian segment, with two 25 cm Fresnel lenses. As a JEM-EUSO pathfinder, it is primarily dedicated to assessing the technology and looking during the night at luminous events such as storms and meteors. It detects single photo-electrons with a large dynamic range [from 0.1 to $10^6$ pe per time gate (2.5 $\mu$s)]. At ISS sunset and sunrise, the Earth is in the dark for 5 min while the ISS is still sunlit. During these 10 min every 90 min, we will observe debris below the ISS (at 300 to 400 km altitude) by their brightness. They will appear as a slowly moving track on the focal surface (48 x 48 pixels, recorded every 2.5 $\mu$s). This is the first step towards observing [2-10 cm] debris, before using a satellite big enough to carry a CAN laser, which would shoot at the debris to ablate it and recoil it towards the Earth. Some 50 such debris could be detected in a year of observation.
Basic Science / 5
Gamma beams generation with high intensity lasers for the study of two photon Breit-Wheeler pair production
Author: Emmanuel d'Humières\textsuperscript{1}
\textsuperscript{1} Université de Bordeaux
Direct production of electron-positron pairs in photon collisions is one of the basic processes in the Universe. The linear Breit-Wheeler (BW) pair creation process ($\gamma + \gamma \to e^+ + e^-$) is the lowest-threshold process in photon-photon interaction, controlling the energy release in Gamma Ray Bursts and Active Galactic Nuclei [1]. It is also responsible for the TeV cutoff in the photon energy spectrum of extra-galactic sources. The linear BW process has never been clearly observed in the laboratory with a significant probability of matter creation [2]. Using MeV photon sources, a new experimental set-up based on numerical simulations with QED effects has recently been proposed [3]. This scheme offers the possibility of conducting a multi-shot experiment with reliable statistics on laser systems with pulse energies at the level of a few joules to tens of joules, and in a low-noise environment without heavy elements. This scheme relies on a collision of relatively low-energy (few MeV), intense photon beams. By colliding two of them in vacuum, one would be able to produce a significant number of electron-positron pairs in a controllable way.
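For reference, the threshold kinematics behind the "few MeV" requirement above follows from the standard two-photon invariant-mass condition (an illustrative aside, not part of the abstract):

\[
E_1 E_2 \,(1-\cos\theta) \;\ge\; 2\,(m_e c^2)^2
\quad\Longrightarrow\quad
E_1 E_2 \;\ge\; (m_e c^2)^2 \simeq 0.26\ \mathrm{MeV}^2 \quad \text{for head-on collisions } (\theta=\pi),
\]

so two counter-propagating photon beams of about 1 MeV each sit comfortably above threshold.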
To prepare future experiments using this scheme, we have performed an optimization study of collimated gamma-source generation with high-intensity lasers, using numerical simulations with QED effects, for different possible ways of creating the MeV photons with solid foils or dense gas jets. At ultra-high intensities, higher than $10^{25}$ W/cm$^2$, most of the energetic photons are generated in the synchrotron-like radiation-dominated regime, but at intermediate intensities, between a few $10^{21}$ and $10^{23}$ W/cm$^2$, a competition between the Bremsstrahlung and synchrotron-like processes arises. For intensities lower than $10^{21}$ W/cm$^2$, Bremsstrahlung dominates. This optimization study has been performed using the parameters of soon-to-be-available laser facilities such as Apollon at Université Paris-Saclay, the ultra-high-intensity upgrade of the INRS laser in Canada, the Texas Petawatt in the US, and PETAL at the LMJ facility in France.
The possibility to study two-photon Breit-Wheeler pair production in the laboratory would make it possible to test new concepts of pair-plasma production and to explore this pair-creation process in the ultra-high-field regime, with important potential applications in astrophysics. Moreover, the optimized gamma sources obtained could also have promising applications as radiography sources.
References: [1] Ruffini R. et al., Physics Reports 487, 1-140 (2010). [2] Bamber C. et al., Phys. Rev. D 60, 092004 (1999). [3] Ribeyre X. et al., Phys. Rev. E 93, 013201 (2016).
Basic Science / 6
Electron-positron pairs beaming in the Breit-Wheeler process
Author: Xavier Ribeyre$^1$
Co-author: Emmanuel d’Humières $^1$
$^1$ Université de Bordeaux
Direct production of electron-positron pairs in photon collisions is one of the basic processes in the Universe. The electron-positron production $\gamma + \gamma \to e^+ + e^-$ (linear Breit-Wheeler process) is the lowest-threshold process in photon-photon interaction, controlling the energy release in Gamma Ray Bursts, Active Galactic Nuclei, black holes and other explosive phenomena [1]. It is also responsible for the TeV cutoff in the photon energy spectrum of extra-galactic sources. The linear Breit-Wheeler process has never been clearly observed in the laboratory with a significant probability of matter creation [2].
Thanks to recent progress in high-power laser sources, it will be possible to create compact sources of intense $\gamma$-ray beams (few MeV) and to use them in new experiments to observe and study the BW process in the laboratory [3]. In this presentation, based on the kinematics of two-photon collisions, we study the properties of the $e^+ - e^-$ beam. In particular, we demonstrate the possibility of beaming the $e^+ - e^-$ pairs in one particular direction, which may strongly facilitate the observation of the BW process [4]. We show that the pair-beaming effect depends on the angle between the photon beams and on the energy of each beam. Moreover, numerical simulations with the photon collision code TriLens [5] are in good agreement with the analytical model. Simulation results obtained with TriLens using optimized gamma beams, in preparation for experiments on future ultra-high-intensity lasers such as Apollon, will be presented. With higher photon beam energies (>100 MeV) the beaming effect can also be observed for muon-pair creation.
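The beaming claim can be made plausible with elementary kinematics (a minimal sketch, not the analytical model of Ref. [4]): the pairs are emitted around the direction of the total photon momentum, i.e., boosted by the centre-of-momentum velocity, which for a head-on collision of photons with energies $E_1 > E_2$ is
$$\beta_{\mathrm{cm}} = \frac{E_1 - E_2}{E_1 + E_2},$$
so strongly unequal beam energies (or oblique collision geometries) throw the $e^+ - e^-$ pairs into a narrow cone along the more energetic beam.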
We acknowledge financial support from the French National Research Agency (ANR) in the framework of "The Investments for the Future" Programme, IdEx Bordeaux - LAPHIA (ANR-10-IDEX-03-02), Project TULIMA. This work is partly supported by the Aquitaine Regional Council (project ARIEL).
[1] Ruffini, R. et al. Physics Reports, 487, 1-140 (2010).
[2] Bamber C. et al. Phys. Rev. D, 60, 092004 (1999).
[3] Ribeyre X. et al. Phys. Rev. E, 93 013201 (2016).
[4] Ribeyre X. et al., PPCF 59, 014024 (2017).
[5] Jansen O. et al., Submitted to JCP, arXiv:1608.01125 (2016).
Basic Science / 52
Potential to search for Dark Matter with multi-wavelength light sources
Author: Kensuke Homma\textsuperscript{1}
\textsuperscript{1} Hiroshima University
Nambu and Goldstone predicted the emergence of a massless boson (the Nambu-Goldstone boson, NGB) as a result of the spontaneous breaking of a global symmetry. Originally, the lightness of the pion was explained by identifying it as the NGB of chiral symmetry breaking. This guiding principle can be applied to any kind of global symmetry. There are theoretically predicted NGBs which could be dark components of the universe if their couplings to matter are very weak. However, their masses cannot be exactly zero, due to complicated quantum corrections, and the theories cannot predict exactly where these masses physically appear. It is therefore important to search for such Dark Matter candidates over a wide range of mass scales. I would like to discuss how we can extend the search window from the sub-eV to the 10 keV scale by introducing laser fields at multiple wavelengths.
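One way to see how the laser wavelength sets the accessible mass window (a back-of-the-envelope relation, not a description of the author's specific apparatus): resonant production of a boson of mass $m$ in a two-photon collision requires the two-photon invariant mass to match it,
$$m = \sqrt{s} = 2\omega\sin(\theta/2),$$
for photons of energy $\omega$ crossing at an angle $\theta$. Quasi-parallel optical photons ($\omega \sim 1$ eV, small $\theta$) probe sub-eV masses, while head-on keV X-ray photons reach the 10 keV scale.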
Basic Science / 65
Ultra-Intense X-Ray Radiation of Relativistic Laser Plasma Inducing Radiation-Dominated Matter Kinetics
Author: Sergey Pikuz
Co-author: Masaki Kando \textsuperscript{1}
\textsuperscript{1} National Institutes for Quantum and Radiological Science and Technology
The energy of petawatt optical laser pulses of pico- and femtosecond duration, at relativistic intensities exceeding $10^{21}$ W/cm$^2$, is efficiently converted to X-ray radiation, which is emitted by the hot electron component in collisionless processes and heats the periphery of the solid-density plasma. In turn,
the intense X-ray radiation effectively ionizes the matter from the inside out, providing a large population of exotic states called hollow ions, i.e., ions with empty inner and populated outer electron shells. Hollow-ion emission from radiation-dominated hot, dense plasmas provides a new opportunity for diagnosing high-intensity X-ray radiation fields. However, constructing adequate non-LTE atomic models remains a challenge, since configuration interaction plays a significant role in the structure of the emission, and multiply excited states with many holes in both valence and inner shells can lead to enormous structural and computational complexity. Currently, the process of hollow-ion production by X-rays is of particular importance due to the advent of multiple high-power X-ray lasers, such as transient-collisional lasers based on plasma pumping by visible lasers, and free-electron lasers. Direct high-resolution spectroscopic measurements demonstrate that the X-ray radiation from the plasma periphery exhibits an unusual non-linear growth of its power ($\propto E^{4-5}$). This non-linear power growth occurs far earlier than the known regime in which radiation reaction dominates particle motion (the radiation-dominated regime, RDR). Nevertheless, the radiation is shown to dominate the kinetics of the plasma periphery, changing in this regime the physical picture of the laser-plasma interaction. Although in the experiments reported here we demonstrated, by observation of KK hollow ions, that X-ray intensities in the keV range exceed $10^{18}$ W/cm$^2$, there is no theoretical limit on the radiation power.
Therefore, such powerful X-ray sources can produce and probe exotic material states with high densities and multiple inner-shell electron excitations, even for higher-Z elements. Relativistic laser-produced plasmas may thus provide a unique ultra-bright polychromatic X-ray source for future studies of matter in extreme conditions and for radiography applications.
**POSTER SESSION / 14**
**High Peak Power Laser System for ELI NP**
**Author:** Christophe Radier\(^1\)
\(^1\) Thales
High peak power lasers for Ultra-High-Intensity (UHI) physics have been developed for almost two decades. The first generation of such lasers was essentially built with Nd:Glass Chirped Pulse Amplification (CPA) systems operating at very low repetition rates (a few shots per day).
The last decade has seen the tremendous development of CPA based on Titanium:Sapphire crystals pumped by the second harmonic of Nd:YAG or Nd:Glass lasers. Several groups have recently reported up to 2 PW, while entirely commercial systems based on Nd:YAG pump-laser technology have achieved PW output at a repetition rate at the Hz level, as well as the shortest pulse ever produced by a PW laser, below 25 fs. Such systems have already produced outstanding results, such as laser-plasma acceleration of electrons up to the record value of 4.25 GeV.
The next generation will be based on lasers delivering more peak power than currently available, including the Extreme Light Infrastructure for Nuclear Physics (ELI-NP) in Romania, which involves two laser beams of 10 PW each (the HPLS, High Power Laser System), awarded in July 2013 to Thales by the Romanian nuclear physics institute IFIN-HH.
The HPLS consists of two beamlines, each of which will deliver a main beam of 10 PW peak power at 1 shot per minute, with intermediate outputs at 100 TW/10 Hz and 1 PW/1 Hz. The two beamlines will be seeded by a common 10 mJ front end.
The 10 PW beamline is based on a hybrid scheme involving a first Ti:Sa-based kHz CPA at the mJ level, an XPW filter for temporal-contrast enhancement, an optically synchronized 532 nm-pumped OPCPA stage delivering 10 mJ at 10 Hz, capable of enlarging the bandwidth and enhancing the temporal contrast, and a second Ti:Sa-based CPA. The design has benefited from a technology-transfer agreement with CNRS, leading to technical discussions and exchanges on the Apollon laser design.
The gain-narrowing and wavelength red-shifting effects in Ti:Sa amplifiers are compensated through the insertion of properly designed spectral filters between amplifying stages. In order to reach around 300 J of energy per pulse before compression, the final amplifier stage, between the 1 PW intermediate output and the 10 PW compressor, will be pumped by 800 J at 527 nm provided by 8 pump lasers (ATLAS-100), each delivering 100 J per pulse at 1 shot per minute.
Two front-end beams have been entirely built and characterized. A pulse energy of more than 10.5 mJ has been demonstrated with less than 55 mJ of pump energy available at the OPCPA BBO crystals, i.e., an overall optical efficiency close to 20% for the OPCPA without any temporal or spatial shaping of the pump beam. The spectral bandwidth, about 100 nm, is more than sufficient to seed the following chain of amplifiers. The measured temporal contrast confirmed an enhancement of at least three orders of magnitude thanks to the OPCPA.
The front-end beam was used to seed the amplifier stages corresponding to the levels required for the 100 TW and 1 PW outputs, with demonstration of the expected performance in terms of spectral bandwidth and energy level. The two beams were sent through the compressor, demonstrating pulse compression to 21 fs and a peak power of 1.3 PW at 1 Hz.
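As a rough consistency check of these figures (approximating peak power as energy over pulse duration, and assuming the 10 PW pulses are compressed to a similar $\sim 22$ fs; neither is a stated specification):
$$1.3\ \mathrm{PW} \times 21\ \mathrm{fs} \approx 27\ \mathrm{J}, \qquad 10\ \mathrm{PW} \times 22\ \mathrm{fs} \approx 220\ \mathrm{J},$$
the latter being consistent with the quoted $\sim 300$ J before compression for a compressor throughput of around 75% (our assumption).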
Fourteen ATLAS-100 pump lasers (out of sixteen) have also been built, and the delivery of more than 100 J per pulse with a near flat-top spatial profile has been confirmed for all units, with a pulse-energy stability better than 1.5% rms over 8 hours of continuous operation.
Initial results obtained with the system integrated up to the 1 PW level at the Thales facilities in Élancourt, France, have confirmed the expectations from the HPLS design for ELI-NP. The final performance is now expected in Romania, where the HPLS installation has started in the new facility dedicated to the ELI-NP project.
POSTER SESSION / 1
HHG Beamline, a unique turnkey system for the generation of a brilliant XUV beam
Author: Fabio Giambruno\textsuperscript{1}
\textsuperscript{1} ARDOP
Over the past years, the ultra-intense laser field has continued to flourish, as demonstrated by a growing number of scientific and technological projects. In particular, Europe's commitment to ultra-high-intensity physics is exemplified by the involvement of several European countries pooling research, network resources and experience to complete different state-of-the-art laser facilities.
The ELI consortium represents the core of the European effort to create unique laser facilities that can explore new regimes of laser-matter interaction as never before. It is divided into three facilities, ELI-Beamlines in the Czech Republic, ELI-NP in Romania and ELI-ALPS in Hungary, each one equipped with unique laser systems and dedicated to a particular type of physics. Most of the laser systems will be used to generate secondary sources such as electron beams, X-ray beams and gamma beams. In particular, the ELI-Beamlines facility has been designed to allow beams of different natures to interact in unique pump-probe experiments. One of the beamlines - a High Harmonic Generation (HHG) beamline - will be designed, delivered and installed by the French company ARDOP as a turnkey system generating a broadband XUV beam.
The HHG beamline has been designed to accept two driving 1 kHz laser beams with pulse durations <20 fs and energies up to 100 mJ, to superpose and focus them in a gas cell, to filter out the residual laser beams, and to characterize the generated XUV beam. The system can generate very broadband radiation in the XUV region (from 5 nm to 120 nm), thanks to its modular design, which allows operation with different focusing geometries (focal lengths from 1 m to 20 m), a gas cell of variable length, and a choice of different rare gases.
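For orientation, the quoted wavelength range can be converted to photon energies with the standard relation (not given in the abstract):
$$E[\mathrm{eV}] \approx \frac{1240}{\lambda[\mathrm{nm}]},$$
so 5-120 nm corresponds to photon energies from roughly 10 eV up to about 250 eV.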
The beamline is composed of four meter-size vessels, a complete IR-rejection system made of grazing-incidence mirrors and thin metallic filters, and a diagnostic system including a focusing flat-field spectrometer, a wavefront sensor and a calibrated photodiode for photon-flux measurements.
The HHG beamline has also been designed to accommodate two parallel driving lasers, generating two parallel XUV beams. To our knowledge, this is the only HHG beamline that can generate such a broad spectrum at such high intensities.
The HHG beamline installation in the Czech Republic should start in September, and the system will be operational by the end of 2016.
Alternative Approaches to Exhaustion of State Remedies in Federal Habeas Corpus
Recommended Citation
Alternative Approaches to Exhaustion of State Remedies in Federal Habeas Corpus, 30 Wash. & Lee L. Rev. 247 (1973), https://scholarlycommons.law.wlu.edu/wlulr/vol30/iss2/4
A prisoner, confined pursuant to a judgment of a state court, who seeks federal habeas corpus relief must allege the deprivation of a right guaranteed by the Federal Constitution or the statutes or treaties of the United States.\textsuperscript{1} In order for a federal court to exercise its remedial power to grant the writ of habeas corpus, the prisoner must also have met the exhaustion of state remedies requirement of 28 U.S.C. § 2254.\textsuperscript{2} This statute provides that the prisoner must either have presented the alleged deprivation of right to the courts of the state in which he is confined, or show in his federal habeas corpus petition that there is no state corrective process available to him or that the available state corrective process would be ineffective because of special circumstances.\textsuperscript{3} The exhaustion requirement is designed not to frustrate the acquisition of relief in the federal courts but rather to afford the state courts an opportunity to correct any errors made in the state criminal process.\textsuperscript{4} In practice, however, the doctrine has often confused federal courts\textsuperscript{5} and has been misapplied by many to the detriment of the state prisoner.\textsuperscript{6}
\textsuperscript{1}28 U.S.C. § 2241(c) (1970). This statute pertains to the power of federal courts to grant the writ of habeas corpus and provides in relevant part:
(c) The writ of habeas corpus shall not extend to a prisoner unless—
(3) He is in custody in violation of the Constitution or laws or treaties of the United States. . . .
\textit{Id.}
Habeas corpus relief for federal prisoners is provided for in 28 U.S.C. § 2255 (1970).
\textsuperscript{2}This requirement was originally judge-made but was eventually codified in 1948. See text accompanying notes 59-67.
\textsuperscript{3}28 U.S.C. § 2254(b) (1970). The statute provides:
(b) An application for a writ of habeas corpus in behalf of a person in custody pursuant to the judgment of a State court shall not be granted unless it appears that the applicant has exhausted the remedies available in the courts of the State, or that there is either an absence of available State corrective process or the existence of circumstances rendering such process ineffective to protect the rights of the prisoner.
\textit{Id.} See note 62 \textit{infra}.
\textsuperscript{4}R. SOKOL, \textsc{Federal Habeas Corpus} 164 (2d ed. 1969).
\textsuperscript{5}\textit{E.g.,} United States \textit{ex rel.} Murdaugh v. Murphy, 183 F. Supp. 440 (N.D.N.Y. 1960). The court commented: "I am never certain that we have any rigidity or definiteness in the procedures of [the application of the exhaustion requirement]." \textit{Id.} at 442.
\textsuperscript{6}\textit{See, e.g.,} Hammond v. North Carolina, 227 F. Supp. 1 (E.D.N.C. 1964); Monroe v.
In *Young v. Maryland*,\(^7\) the Fourth Circuit Court of Appeals divided on the question of whether a state prisoner had sufficiently presented in the state courts certain of his alleged deprivations of right. The majority and dissent in *Young* disagreed about whether the contentions of the petitioner should be categorized as claims or arguments within the structure of section 2254.\(^8\) The majority held that two contentions asserted in the circuit court, which could have been legitimate constitutional claims in a separate habeas corpus petition, were new constitutional claims and thus required a prior presentation in the state courts.\(^9\) While not denying that the contentions could have been independent constitutional claims in some other petition, the dissent maintained that these contentions were merely supporting arguments for the basic constitutional claim of Young's habeas corpus petition and as arguments did not require prior presentation to the state courts.\(^10\) It would seem that the crux of the difference between the two opinions in *Young* is whether a contention which invokes a constitutional provision and could thereby qualify as an independent but "unexhausted" habeas corpus claim must therefore be asserted *exclusively* as a constitutional "claim," or whether in some contexts such a contention can be utilized flexibly by the petitioner in a lesser role as part of a supporting "argument" for his "exhausted" constitutional claim. *Young* is particularly appropriate for demonstrating the uncertainty in the exhaustion area of the law of habeas corpus in that to
---
Director, 227 F. Supp. 295 (D. Md. 1964). In these cases, the courts applied the exhaustion of state remedies requirement as a prerequisite to their jurisdiction to grant the writ. The Supreme Court has clearly stated that the exhaustion requirement is not a jurisdictional limitation on federal courts. Fay v. Noia, 372 U.S. 391, 418 (1963); Bowen v. Johnston, 306 U.S. 19, 27 (1939).
\(^7\)455 F.2d 679 (4th Cir.), cert. denied, 407 U.S. 915 (1972).
\(^8\)The words "claim" and "argument" are not part of the statutory language of either § 2241 or § 2254. However, "custody in violation of the Constitution" is the basis for virtually all habeas corpus cases today and the type of issue that can be raised under this provision runs the entire gamut of constitutional law. R. Sokol, *Federal Habeas Corpus* 38-39 (2d ed. 1969). It is clear from the Supreme Court cases that the phrase "constitutional claim" means an allegation of deprivation of right guaranteed by some constitutional provision. See, e.g., Fay v. Noia, 372 U.S. 391, 394 (1963).
When discussing exhaustion of state remedies in federal habeas corpus, the choice of an appropriate word with which to label an issue presented to a court is difficult, since fundamentally different consequences flow from whether the issue is a "claim" or an "argument." Use of either of these two readily available words would tend to imply conclusions about the nature of that issue and its resulting treatment under the exhaustion doctrine. In order to avoid this threshold semantic problem, the word "contention" will be used as a neutral label when no implications about an issue's treatment under the exhaustion doctrine are intended.
\(^9\)455 F.2d at 681, 686. Note 22 infra.
\(^{10}\)455 F.2d at 686.
support their respective positions both majority and dissent cited *Picard v. Connor,*\textsuperscript{11} a recent Supreme Court opinion dealing with the exhaustion requirement.
The petitioner in *Young* was convicted of burglary and rape in 1960 and received a ten year prison sentence for burglary and a death sentence for rape, although the latter was commuted by executive action to confinement for life.\textsuperscript{12} During post-conviction appeal in the state courts, Young asserted a deprivation of his fourth amendment right\textsuperscript{13} in that after his arrest police had seized evidence from a room which Young rented in his father's home by explaining to his father that they merely wanted to gather some clothing for Young to wear while in jail.\textsuperscript{14} Young maintained on appeal that the admission of the evidence in question, a trenchcoat upon which there were spermatozoa stains, prejudiced his trial in that it tended to prove forcible rape when the only issue had been Young's credibility in insisting that the intercourse had been consensual.\textsuperscript{15} This argument, rooted in the fourth amendment, was found to be without merit by the Supreme Court of Maryland, whereupon Young petitioned the federal district court for habeas corpus relief. The district court found the petitioner's claim meritorious and granted relief. The State of Maryland then appealed the grant of habeas relief to the Fourth Circuit.
In the circuit court, Young asserted for the first time\textsuperscript{16} the contentions upon which the exhaustion disagreement centered. There Young articulated a contention that his trial had been prejudiced because the police had exploited the illegally seized trenchcoat to coerce an admission of the intercourse.\textsuperscript{17} Young further contended that the introduction of the coat into evidence compelled him to take the stand to repeat his denial of
\textsuperscript{11}404 U.S. 270 (1971).
\textsuperscript{12}455 F.2d at 680.
\textsuperscript{13}The fourth amendment states:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched and the persons or things to be seized.
U.S. Const. amend. IV.
\textsuperscript{14}455 F.2d at 682.
\textsuperscript{15}Id. at 680-81.
\textsuperscript{16}Id. at 681. The court commented:
During the argument in this Court, the possibility that the trenchcoat may have been improperly used by police to induce the appellee's confession was suggested. Until that time, there has not been even a remote contention either in the State Court or in the District Court that appellee's confession has been induced by the seizure of the trenchcoat.
\textsuperscript{17}455 F.2d at 685.
forcible rape, a decision which he would not have made had the coat not been admitted into evidence.\textsuperscript{18} The circuit court reversed the grant of habeas relief but its grounds for doing so were not based entirely on lack of prior presentation of these two contentions.
The circuit court went through a two-step process to reach its decision. First, the court determined that it need not deal with the merits of Young's fourth amendment claim because the "admission of the trenchcoat was harmless error beyond a reasonable doubt, and it was accordingly immaterial whether the trenchcoat had been illegally seized."\textsuperscript{19} However, the court added parenthetically that there was some support for finding that the search was constitutional.\textsuperscript{20} The finding of harmless error was based on the fact that Young confessed to the act of intercourse as well as having worn the trenchcoat during the act. The court reasoned that the admission of the trenchcoat into evidence was not prejudicial because the only act it could possibly tend to prove was one which the petitioner admitted.\textsuperscript{21} The court's second step was contained in a brief discussion of the nature of the newly asserted contentions. Judge Russell's opinion for the majority concluded that, since Young's induced admission contention constituted a "claim" which had not even been raised remotely in the state courts, this claim was unexhausted.\textsuperscript{22}
The majority believed the case of \textit{Picard v. Connor}\textsuperscript{23} to be analogous to the exhaustion of remedies question in \textit{Young} and cited \textit{Picard} as authority for its finding that Young had not adequately raised the claim of coerced admission in the state courts. \textit{Picard} is a case in which the Supreme Court apparently attempted to dispel confusion over how sufficiently a state prisoner must present his claims of deprivation to a state court in order to be eligible for federal habeas corpus relief. The precise holding of \textit{Picard} is that the "substance of a federal habeas corpus claim must first be presented to the state courts."\textsuperscript{24} An examination of the facts in \textit{Picard} is necessary for an understanding of the test for which the case seems to stand and its application to the exhaustion controversy in \textit{Young}.
In \textit{Picard}, the Court reversed the First Circuit Court of Appeals which
\begin{itemize}
\item \textsuperscript{18}Id.
\item \textsuperscript{19}Id. at 680.
\item \textsuperscript{20}Id. The court cited no authority for this statement.
\item \textsuperscript{21}The majority did not answer the dissent's argument that there may have been prejudicial results from the inflammatory use of the trenchcoat at trial. See note 41 \textit{infra}.
\item \textsuperscript{22}455 F.2d at 681. For a reason not explained in either the majority or dissenting opinions, the majority recognized only one new "claim," the coercion of the admission, and did not deal with the contention that Young was prejudicially forced to take the witness stand.
\item \textsuperscript{23}404 U.S. 270 (1971).
\item \textsuperscript{24}Id. at 278.
\end{itemize}
had granted relief to a state prisoner on an equal protection ground which the circuit court admitted had not been urged in the state courts or in the respondent's habeas corpus petition. In the state courts, Connor's principal ground for seeking relief had been a contention that the Massachusetts "fictitious name" statute violated the fifth amendment's requirement of a grand jury indictment. This statute authorized his name to be inserted in an indictment after the indictment had been delivered in "John Doe" form by a grand jury. The Supreme Court examined the pre-trial, trial, appellate, and habeas corpus papers without finding a hint of any attack on the indictment as violative of the equal protection clause of the fourteenth amendment. The Court concluded that the issue entered the case solely because the First Circuit injected it and obviously could not have been presented in the state courts.
As both the First Circuit and the Supreme Court pointed out, Picard was not a case in which factual allegations were made in the federal courts which were not made in the state courts. The issue, simply stated, was whether the state courts had a fair opportunity to rule on the equal
---
\textsuperscript{25}Connor v. Picard, 434 F.2d 673 (1st Cir. 1970), rev'd, 404 U.S. 270 (1971).
\textsuperscript{26}434 F.2d at 674. As the First Circuit stated:
Petitioner did not present the constitutional question to the Massachusetts court in the particular focus in which this opinion is directed. We suggested it when the case reached us, and invited the Commonwealth to file a supplemental brief (emphasis added).
Id.
\textsuperscript{27}Mass. Ann. Laws ch. 277, § 19 (1968).
\textsuperscript{28}404 U.S. at 277.
\textsuperscript{29}Id. at 276. Introduction of new facts in a federal court which are relevant to the disposition of a petitioner's constitutional claim will cause a federal court to require a presentation of these facts to a state court. See, e.g., Buffalo Chief v. South Dakota, 425 F.2d 271 (8th Cir. 1970); Schiers v. California, 333 F.2d 173 (9th Cir. 1964).
In order for the circuit court in Young to have ruled on the coerced admission and involuntary testimony contentions, there must have been enough facts in the record for these contentions to be more than mere conclusory allegations. In light of standards set up by the Fourth Circuit for the sufficiency of factual support in the record, Young's two contentions could possibly have been disposed of without reaching their merits because of inadequate factual development.
In Thompson v. Peyton, 406 F.2d 473, 474-75 (4th Cir. 1968), it was held that factual matters necessary for determination of exhaustion questions are sufficiently presented if a state court could rule on them as a matter of law without the necessity of a further evidentiary hearing. The dissent in Young, in an attempt to show that the petitioner had sufficiently developed the facts behind his coerced admission contention, pointed out that the record indicated that Young had been held incommunicado for seventeen hours without admitting intercourse but then admitted the sexual contact less than one hour after the coat was seized, 455 F.2d at 686. In addition, the confession itself contained a reference to the coat. 455 F.2d at 686 & n.4. Under the Thompson standard it is possible that the Fourth Circuit would have held these facts insufficient.
protection claim. The Court held that this claim had not been fairly presented to the state courts even though facts pointing to a denial of equal protection were evident on the record.\textsuperscript{30} Since the equal protection claim was not the substantial equivalent of the fifth amendment claim, it was a completely new constitutional claim requiring presentation in the state courts.
The Court qualified its holding in \textit{Picard} by expressly approving of the results in \textit{United States ex rel. Kemp v. Pate}.\textsuperscript{31} It followed the Seventh Circuit’s reasoning that there are instances in which “‘the ultimate question for disposition’ . . . will be the same despite variations in the legal theory or factual allegations urged in its support.”\textsuperscript{32} The circuit court in \textit{Kemp} used the phrase “ultimate question for disposition” to mean the basic issue which is the reason behind the attempt to acquire habeas corpus relief.\textsuperscript{33} The \textit{Kemp} petitioner’s ultimate question for disposition was a fifth amendment claim of coerced confession. In the state court, Kemp supported his claim by emphasizing physical coercion but in his federal habeas corpus petition, the emphasis changed to psychological coercion and the totality of the circumstances.\textsuperscript{34} The Court in \textit{Picard} concluded from the approved \textit{Kemp} language that it would not have been necessary for the petitioner in \textit{Picard} to have cited “book and verse” from the Federal Constitution in the state courts regarding his equal protection claim to be eligible to present it to the federal courts.\textsuperscript{35} But to benefit from this limitation to the exhaustion doctrine, the contention in the federal court, taking into account all new supporting legal theories advanced there, must have been the substantial equivalent of the contention in the state court. The Court was unable to find such a relationship between the \textit{Picard} petitioner’s fifth amendment and equal protection
\textsuperscript{30} 404 U.S. at 277. The Court commented:
To be sure, respondent presented all facts. Yet the constitutional claim the Court of Appeals found inherent in those facts was never brought to the attention of the state courts.
\textit{Id.}
\textsuperscript{31} 359 F.2d 749 (7th Cir. 1966).
\textsuperscript{32} 404 U.S. at 277.
\textsuperscript{33} The court explained:
The involuntary confession question presented to the district court by the petitioner was submitted in a substantially similar fashion to the Illinois Supreme Court. The evidence in the record before the state court was essentially the same. The ultimate question for disposition—the voluntariness of the petitioner’s confession—was precisely the same (emphasis added).
359 F.2d at 751.
\textsuperscript{34}\textit{Id.} at 750.
\textsuperscript{35}404 U.S. at 277-78.
claims.\textsuperscript{36} Thus it would seem clear that if the ultimate question for disposition remains constant between the state and federal courts, the "substance" of a petitioner's habeas corpus claim will have been presented to the state court sufficiently to meet the \textit{Picard} test.\textsuperscript{37}
In \textit{Young}, the majority cited \textit{Picard} as sufficiently analogous to be support for the proposition that the petitioner's claim of coerced admission was not presented to the state courts.\textsuperscript{38} It would seem that the circuit court did not see this claim as in any way supporting the basic fourth amendment claim made by Young, just as Connor's fourteenth amendment claim was found by the Supreme Court in \textit{Picard} to be a new claim and not support for his fifth amendment claim.\textsuperscript{39} The coerced admission
\textsuperscript{36}\textit{Id.} at 278.
\textsuperscript{37}Although the Court apparently intended the \textit{Kemp} language to be a modification of its holding in \textit{Picard}, it is not entirely clear how dissimilar two contentions can be and yet still be considered supportive of the same ultimate question for disposition under the \textit{Kemp} modification. The two legal theories in \textit{Kemp}, physical and psychological coercion of a confession, are both matters relating to the fifth amendment's privilege against self-incrimination. The Court in \textit{Picard} cited the two legal theories espoused by the petitioner in \textit{Sanders v. United States}, 373 U.S. 1 (1963), as ready examples of the permissible variations in legal theory under the \textit{Kemp} holding. 404 U.S. at 277. \textit{Sanders} involved successive applications for habeas corpus relief by a federal prisoner under § 2255. However, the \textit{Sanders} example does not shed light on how dissimilar two legal theories can be since the legal theories in \textit{Sanders} were identical to those in \textit{Kemp}: physical and psychological coercion. "But a claim of involuntary confession predicated on alleged psychological coercion does not raise a different 'ground' than does one predicated on alleged physical coercion." 373 U.S. at 16.
\textsuperscript{38}455 F.2d at 681. There are two viewpoints implicit in the \textit{Young} opinion. The first, adopted by the majority, is that at some point a contention in a federal court becomes so dissimilar from a contention made in the state courts that it can be characterized as either supporting a new claim or becoming itself a new claim. Applied to the specific facts in \textit{Young}, this viewpoint dictated that the coerced admission contention, on its face a fifth amendment consideration and thus dissimilar from the fourth amendment claim urged in the state courts, was itself a new constitutional claim. A second viewpoint, seemingly espoused by the \textit{Young} dissent, is that as long as the petitioner presents the two contentions within the context of the same basic claim and as long as they can be characterized as supporting arguments for this claim, the apparent dissimilarity on the face of the two contentions is not relevant. Thus a contention which might on its face be a fifth amendment consideration could be legitimately asserted in the federal courts in support of a fourth amendment claim as long as the basic nature of the fourth amendment claim is not somehow altered.
\textsuperscript{39}It is difficult to imagine how a petitioner could present a contention that he was denied the equal protection of the law in such a way that it became a supporting argument for a claim that his indictment was illegal because it failed to comport with the fifth amendment's requirement of a grand jury indictment. However, a fifth amendment supporting argument for a fourth amendment claim is not an improbable combination and in light of several Supreme Court cases, the Fourth Circuit should have been at least somewhat sensitive to their close relationship. In \textit{Boyd v. United States}, 116 U.S. 616 (1886), Mr. Justice Bradley commented:
claim evidently appeared to be an independent constitutional claim requiring prior presentation to the state courts. By divorcing the coerced admission claim from the fourth amendment claim, the circuit court was thus able to dispose of the latter through a harmless error finding, a disposition which would not have been possible had the majority admitted the possibility that a contention could be either an independent claim or an argument supporting a claim, depending on how it is presented.\textsuperscript{40} The finding of harmless error was sufficient to overcome the argument of prejudice from the probative value of the trenchcoat's admission; but it is doubtful that arguments emphasizing the trenchcoat's inflammatory effect\textsuperscript{41} or the prejudice resulting from the coerced admission of the intercourse itself could have been blunted by the harmless error rule.\textsuperscript{42}
Standing independently of the fourth amendment claim, the facts and arguments bearing upon Young's coerced admission contention would fit nicely within the general focus of the fifth amendment.\textsuperscript{43} In fact, in order
\begin{quote}
We have already noticed the intimate relation between the [fourth amendment and fifth amendment]. They throw great light on each other. For the "unreasonable searches and seizures" condemned in the Fourth Amendment are almost always made for the purpose of compelling a man to give evidence against himself, which in criminal cases is condemned in the Fifth Amendment. . . . And we have been unable to perceive that the seizure of a man's private books and papers to be used in evidence against him is substantially different from compelling him to be a witness against himself.
\textit{Id.} at 633. More recently, in \textit{Mapp v. Ohio}, 367 U.S. 643, 656-57 (1961), the Court found that the fourth and fifth amendments enjoy an "intimate relation." \textit{See Bivens v. Six Unknown Federal Narcotics Agents}, 403 U.S. 388, 414 (1971) (Burger, C.J., dissenting).
\textsuperscript{40}It is unclear from Judge Russell's opinion exactly how the majority saw this contention. In the paragraph of the opinion which discusses the contention, it is first treated as if it might be a contingent claim depending upon the validity of the fourth amendment claim: "During argument in this Court, the possibility that the trenchcoat may have been improperly used by the police to \textit{induce} the appellee's confession was suggested." 455 F.2d at 681 (emphasis added). Yet further in the same paragraph, the coerced admission contention is treated as if it were a completely independent fifth amendment claim: "The appellee has never testified he was \textit{coerced} in giving his confession. . . . This Court, on a record barren of any evidence \textit{that the confession was coerced}, may not assume its involuntariness." \textit{Id.} at 681 (emphasis added).
\textsuperscript{41}Judge Sobeloff maintained that the fact that Young's trial was before a judge did not alter the pervasive prejudicial effects of the trenchcoat. He pointed out that the not unlikely impact of the physical production of the coat at trial was to divert the fact finder from a true assessment of its evidentiary value. \textit{Id.} at 686-87.
\textsuperscript{42}In \textit{Chapman v. California}, 386 U.S. 18, 23 (1967), it was stated that any error which "possibly influenced the jury adversely to a litigant" could not be harmless.
\textsuperscript{43}The fifth amendment to the United States Constitution provides in part:
No person . . . shall be compelled in any criminal case to be a witness against himself . . . .
U.S. CONST. amend. V.
to maintain a federal habeas corpus proceeding, a petitioner must be able to portray his claimed deprivation as a deprivation of a right or privilege guaranteed by a statutory or constitutional provision,\textsuperscript{44} such as the fifth amendment's privilege against self-incrimination. Where a federal habeas corpus petitioner has been unable to do this, as where the petitioner alleged a violation of a state statute\textsuperscript{45} or a defect in procedure of less than constitutional stature,\textsuperscript{46} the writ has been held unavailable. It may be that the Fourth Circuit's finding that Young's coerced admission contention was a constitutional claim, wholly independent from his fourth amendment claim, was facilitated by the fact that his contention could have stood on its own as a constitutional claim had it been the sole allegation of a deprivation of a constitutional right made in the petition.
The dissent in \textit{Young} disagreed with the majority in both steps of the majority's two-step decisional process. Judge Sobeloff pointed out initially that the search made by the police, when tested against the well-settled standards in the consent-to-search area of the law,\textsuperscript{47} violated the petitioner's right under the fourth amendment. He then proceeded to show how the illegally seized evidence had been both directly and indirectly prejudicial.\textsuperscript{48} The opinion emphasized that because the coat was the only piece of evidence introduced against Young on the rape charge\textsuperscript{49} and because judges' minds are not impervious to misleading influences such as a dramatic flourishing of the stained trenchcoat at trial,\textsuperscript{50} there was a reasonable possibility that the admission of the trenchcoat into evidence was directly prejudicial and contributed to the conviction. The majority opinion appears to treat the coerced admission claim as independent of
\textsuperscript{44}See note 1 \textit{supra}.
\textsuperscript{45}McCord v. Henderson, 384 F.2d 135 (6th Cir. 1967).
\textsuperscript{46}Garrison v. Johnston, 104 F.2d 128 (9th Cir.), \textit{cert. denied}, 308 U.S. 553 (1939).
\textsuperscript{47}The dissent relied on Reeves v. Warden, 346 F.2d 915 (4th Cir. 1965), which held that evidence seized from a habeas corpus petitioner's room after his mother had consented to the search was improperly admitted at trial because the bureau from which the evidence was seized was set aside exclusively for the petitioner and thus only he could give constitutionally effective permission for the search.
\textsuperscript{48}In Wong Sun v. United States, 371 U.S. 471 (1962), it was held that evidence seized unlawfully could neither be proof against the victim of the search nor against one who might be indirectly prejudiced by the evidence, though not the victim of the search. That the exclusionary rule extends to both the direct and indirect products of an illegal search was settled in Silverthorne Lumber Co. v. United States, 251 U.S. 385 (1920), in which Mr. Justice Holmes commented:
\begin{quote}
The essence of a provision forbidding the acquisition of evidence in a certain way is that not merely evidence so acquired shall be used before the Court but that it shall not be used at all.
\end{quote}
251 U.S. at 392.
\textsuperscript{49}455 F.2d at 684.
\textsuperscript{50}\textit{Id.} at 686-87.
the main fourth amendment claim even though the admission was allegedly coerced by the fruits of the seizure. Rather than characterizing the coerced admission and involuntary testimony contentions made by Young as independent claims, the dissent interpreted these as arguments indicating indirect prejudicial effects of the illegal seizure of the trenchcoat. Judge Sobeloff pointed out that it was not necessary for the petitioner to prove beyond a reasonable doubt that prejudice existed,\textsuperscript{51} as the majority seemed to require. Rather, the burden was on the state to prove that any error was harmless beyond a reasonable doubt, a burden which the state had failed to sustain.\textsuperscript{52}
It would seem that the dissent's most fundamental difference with the majority lies in its interpretation of \textit{Picard} and the resulting application of the \textit{Picard} test to the situation in \textit{Young}. Judge Sobeloff evidently read \textit{Picard} as allowing arguments which might, on their face, fit under a constitutional provision different than the one articulated in the state courts as long as the basic, ultimate claim for disposition was not changed as the result of the assertion of the new arguments in the federal courts.\textsuperscript{53} That Young's new contentions could possibly qualify as constitutional claims should not functionally fix them as "claims" to the exclusion of their use as supporting "arguments" in the total scheme of the petitioner's fourth amendment thrust. Thus the majority was accused of indulging in an overly technical view of the exhaustion requirement\textsuperscript{54} in that it evidently set up a mutually exclusive dichotomy between those contentions which can be characterized as claims and those which can be characterized as arguments supporting claims.
The dissenting opinion stated that the exhaustion of state remedies doctrine requires \textit{only} that the substance of a federal habeas corpus claim must first be presented to a state court.\textsuperscript{55} Judge Sobeloff's use of the word
\textsuperscript{51}\textit{Id.} at 684.
\textsuperscript{52}\textit{Id.}
\textsuperscript{53}In this reading of the \textit{Picard} case, the dissent is joined by the Seventh Circuit Court of Appeals. In Macon v. Lash, 458 F.2d 942 (7th Cir. 1972), the Seventh Circuit held that a claim in the state courts, that the petitioner was denied his right to appeal because his court-appointed counsel filed a futile motion for an extension of time instead of filing a timely motion for a new trial, was not changed simply because he framed his claim in the federal courts as one of incompetence of counsel. As the court posed the issue:
Our question then is whether petitioner's emphasis on the incompetence of counsel in the federal court is a mere variation of the legal theory presented to the Indiana Supreme Court or should be regarded as a different claim. In view of the holding in \textit{Picard}, the question is not free of doubt, but in our judgment we are dealing with a variation of the same claim rather than a different legal ground.
\textit{Id.} at 948.
\textsuperscript{54}455 F.2d at 686.
\textsuperscript{55}\textit{Id.} at 685 & n.2a.
"only" to preface the *Picard* holding would seem to indicate that he interpreted the word "substance" to mean "basic thrust" of the ultimate claim for disposition. This basic thrust, without the polish of all the legal arguments which may be used in a federal habeas corpus proceeding, is all the state court should need in order to apply controlling legal principles.\textsuperscript{56} Accordingly, the substance of Young's habeas corpus claim, that he was the victim of an unconstitutional search and seizure, was not changed by the addition of new arguments in the federal courts. The dissent had no difficulty distinguishing *Young* from *Picard*, in which the fifth amendment indictment question posed to the state courts was transformed into an entirely distinct equal protection question in the federal courts.\textsuperscript{57} Instead, the whole tenor of Judge Sobeloff's opinion implies that *Young* fits within the *Kemp* limitation of *Picard*.
It is not entirely clear from the procedural facts of *Young* and *Kemp*, however, that these two cases actually belong in the same category. In *Kemp*, the petitioner's ultimate question for disposition was a fifth amendment claim of coerced confession. The two legal theories which Kemp asserted bore directly upon the question of a possible constitutional violation in procuring the confession. But once this coercion was shown, no question of prejudice or harmless error could have arisen since admission of a coerced confession into evidence is prejudicial \textit{per se}.\textsuperscript{58} In *Young*, on the other hand, the petitioner needed to show both a constitutional violation in the search and sufficient prejudice resulting from the admission of the tainted evidence to render his trial unfair. The new contentions in *Young*, which the majority determined to be new claims, were directed at strengthening the petitioner's showing of total prejudice and not at the basic constitutional violation as were those in *Kemp*. Therefore, *Young* would seem \textit{sui generis} on its facts and not subject to categorization with *Kemp* since in the latter case the distinction between the constitutional violation and the effects flowing from that violation was of no importance.
It may be that the dissent felt it unnecessary to point out that the new contentions raised in the federal courts by Young and Kemp applied to different steps in proof. Taking a broader view of the *Picard* and *Kemp* language than is implicit in this sort of distinction, Young may have satisfied the requirement that he present the substance of his claim to the state courts even though his arguments as to the effect of the alleged constitutional violation on the fairness of his trial were not as completely articulated at the state level as they were at the federal level. Although
\textsuperscript{56}\textit{Cf. Sullivan v. Scafati}, 428 F.2d 1023 (1st Cir. 1970), \textit{cert. denied}, 400 U.S. 1001 (1971); Wilbur v. Maine, 421 F.2d 1327 (1st Cir. 1970).
\textsuperscript{57}455 F.2d at 685 & n.2a.
\textsuperscript{58}\textit{See, e.g., Miranda v. Arizona}, 384 U.S. 436 (1966).
not explicitly stated in his opinion, it can be assumed that Judge Sobeloff saw no practical or important difference between new legal theories bearing on an alleged constitutional violation and new legal theories bearing on the effect of an alleged constitutional violation so long as the ultimate question for disposition remained unaltered in either case. If this assumption is correct, the dissent has, in effect, pointed out that the important distinction for a habeas corpus petitioner to make is the one between those arguments which change the nature of his claim and those arguments which do not, regardless of the stage of proof at which they are made or any other collateral characteristic.
The history and policy behind the exhaustion of state remedies doctrine lend support to the dissent's position in *Young*, which seems to be a more flexible interpretation of section 2254 than is evidenced by the majority opinion. The doctrine had its beginning in *Ex parte Royall*,\(^{59}\) a case which held that federal courts had the power to grant a writ of habeas corpus to a state prisoner even in advance of a trial. The Court cautioned, however, that this power should be tempered by discretion as to the time and mode of its exercise. Considerations of comity between the state and federal court system would be paramount in exercising this discretion,\(^{60}\) but federal court abstention would be subordinated to any special circumstances requiring immediate action. What was originally a flexible principle encompassing discretion *not* to grant the writ gradually became hardened into a rule that, in the absence of exceptional circumstances, the lower federal courts *must* refuse to grant a writ of habeas corpus unless state remedies have been exhausted.\(^{61}\) The codification of the exhaustion doctrine in section 2254 then followed,\(^{62}\) coming at a time when the doctrine was being rigidly applied. It was not until the trilogy of cases\(^{63}\) beginning with *Fay v. Noia*\(^{64}\) in 1963 that the Supreme Court
---
\(^{59}\)117 U.S. 241 (1886). The exhaustion of state remedies doctrine was further elaborated upon in *Whitten v. Tomlinson*, 160 U.S. 231 (1895), and *Davis v. Burke*, 179 U.S. 399 (1900).
\(^{60}\)In *United States ex rel. Drury v. Lewis*, 200 U.S. 1, 7 (1906), the Supreme Court commented on the reason for this deference to comity:
> It is an exceedingly delicate jurisdiction given to the Federal courts by which a person under indictment in a state court and subject to its laws may, by the decision of a single judge of the Federal court, upon a writ of *habeas corpus*, be taken out of the custody of the officers of the State and finally discharged therefrom. . . .
*Id.*
\(^{61}\) *Ex parte Hawk*, 321 U.S. 114 (1944).
\(^{62}\)The original statute, Act of June 25, 1948, Ch. 153, § 2254, 62 Stat. 967, was enacted four years after *Ex parte Hawk*, 321 U.S. 114 (1944), but was amended to its present form in 1958.
\(^{63}\) *Fay v. Noia*, 372 U.S. 391 (1963) (federal courts have power under the federal habeas corpus statutes to grant relief despite the applicant's failure to have pursued a state remedy
reaffirmed the flexibility of the exhaustion doctrine and federal courts began giving full consideration to policies of timely vindication\textsuperscript{65} and judicial efficiency\textsuperscript{66} as well as to the oft-invoked policy of comity.\textsuperscript{67}
A flexible interpretation of section 2254 is also consistent with decisions stating that habeas corpus was never intended to be a narrow, formalistic remedy.\textsuperscript{68} Mr. Justice Douglas, in his dissent in \textit{Picard}, recognized that nicety of analysis is not a valuable or functional concept in the area of exhaustion of state remedies.\textsuperscript{69} By concerning itself with how a legal argument appeared on its face rather than looking to the argument's actual substantive effect on the total scheme of petitioner's claimed deprivation of right, the majority in \textit{Young} seems to indulge in
not available to him at the time he applies), Townsend v. Sain, 372 U.S. 293 (1963) (where facts are in dispute, federal courts in habeas corpus cases must hold evidentiary hearing in a state court), and Sanders v. United States, 373 U.S. 1 (1963) (controlling weight must not be given to denial of prior habeas applications if not adjudicated on the merits or if a new ground is presented) are said to be the liberalizing trilogy. Laubach, \textit{Exhaustion of State Remedies as a Prerequisite to Federal Habeas Corpus: A Summary}, 1966-1971, 7 Gonzaga L. Rev. 34 (1971).
\textsuperscript{64}372 U.S. 391 (1963).
\textsuperscript{65}See Dixon v. Florida, 388 F.2d 424 (5th Cir. 1968); United States ex rel. Davis v. Henderson, 330 F. Supp. 797 (W.D. La. 1971).
\textsuperscript{66}See United States ex rel. Newman v. Brierly, 420 F.2d 781 (3rd Cir. 1970); Phelper v. Decker, 401 F.2d 232 (5th Cir. 1968); Kennedy v. Sigler, 397 F.2d 556 (8th Cir. 1968); United States ex rel. Levy v. McMann, 394 F.2d 402 (2d Cir. 1968).
\textsuperscript{67}Given the Fourth Circuit's handling of exhaustion of state remedies issues in cases spanning the past five years, it is somewhat surprising that it would take a narrow view of the requirement in its disposition of \textit{Young}. The Fourth Circuit has been characterized as having a moderate to liberal approach to exhaustion matters. Laubach, \textit{supra} note 63, at 48. A review of some Fourth Circuit cases bears out this characterization.
In Hewett v. North Carolina, 415 F.2d 1316 (4th Cir. 1969), the Fourth Circuit determined that raising a new issue in the final paragraph of an addendum to petitioner's application should not, from the standpoint of comity, require an exhaustion of the new claim before a federal court could address itself to the fully exhausted claim. Other federal courts have been reluctant to hear fewer than all the issues asserted in a habeas corpus petition. See Morgan v. Beto, 313 F. Supp. 1265 (S.D. Tex. 1970); Howard v. Craven, 306 F. Supp. 730 (C.D. Cal. 1969).
In Sheftic v. Boles, 377 F.2d 423 (4th Cir.), \textit{cert. denied}, 389 U.S. 986 (1967), the court held that the real benefit derived from allowing prisoners, who had presented the same claims in a state court three years earlier, to present their claims in the federal courts would outweigh any tangential benefits which might accrue to the cause of federalism by sending the petitioners back to the state courts. Similarly, in Rowe v. Peyton, 383 F.2d 709 (4th Cir. 1967), the court was willing to find exhaustion in the state courts even though the state argued that the issue had not been lucidly presented there. In dicta, the court brushed aside the state's objections by saying that judges must not depend on the arguments of lawyers for the limits of their perception and analysis but must know and understand many things not intelligently presented by lawyers in a particular case.
\textsuperscript{69}Jones v. Cunningham, 371 U.S. 236 (1963).
\textsuperscript{70}404 U.S. at 281.
the very type of academic analysis against which Mr. Justice Douglas warned.
An inclination to read *Picard* as a device with which to foreclose somewhat the availability of habeas corpus relief is understandable in light of the barrage of habeas corpus applications that federal courts in general, and the Fourth Circuit in particular, have received in recent years. However, making the vindication of constitutional guarantees more difficult is, at best, a questionable means of easing the burden. In reality, both state and federal courts are responsible for the administration of the criminal justice system in the United States. Thus, at least in terms of judicial economy, it should make no difference whether a federal court or a state court ultimately decides a prisoner's claim. To justify, by appeal to federal judicial economy, returning to the state courts a prisoner who has made a colorable attempt to exhaust state remedies would seem to ignore the fact that reviewing the petition of a prisoner such as Young and subsequently sending him back to the state courts itself wastes judicial energy.
What would seem to be the most unfortunate aspect of a formalistic application of the exhaustion of state remedies doctrine is that the petitioner with a meritorious claim remains confined, at least during the time needed for the claimed deprivation of right to be heard in a state court. The decision in *Young* may have the effect of making it more difficult for a state prisoner to achieve a hearing on a claimed deprivation of right in a forum which, in terms of orientation toward federal constitutional matters, is preferable to a state court. A prisoner who seeks habeas corpus relief in the Fourth Circuit and refashions his legal arguments in the federal courts runs the risk that his arguments may be found to be unexhausted constitutional claims. It would seem that a speedy vindication of constitutional deprivations in the criminal justice process cannot countenance the impediment of a formalistic view of section 2254. Otherwise the federal courts would "make a trap out of the exhaustion doctrine which promises to exhaust the litigant and his resources, not the remedies." Thus it is submitted that between the majority and dissenting viewpoints in *Young* the flexibility of the latter is far more desirable.
A. Neal Barkus
---
70 Of the 10,798 appeals filed in the United States courts of appeals during the fiscal year ending June 30, 1971, the largest single source was federal question habeas corpus petitions amounting to a total of 1,261. *Annual Report of the Director of the Administrative Office of the United States Courts* 252-53 (1971).
71 Of the total civil cases pending in the United States district courts on June 30, 1971, only the district courts in the Fifth Circuit had a greater number of habeas corpus petitions from state prisoners than did the district courts in the Fourth Circuit. *Id.* at 272-73.
72 An obvious alternative solution is to increase the number of federal judgeships. *See* Washington Post, Oct. 28, 1972 at A 13, col. 7.
73 404 U.S. at 281.
ICDS 2012
The Sixth International Conference on Digital Society
ISBN: 978-1-61208-176-2
January 30 - February 4, 2012
Valencia, Spain
ICDS 2012 Editors
Jaime Lloret Mauri, Polytechnic University of Valencia, Spain
Gregorio Martinez, University of Murcia, Spain
Lasse Berntzen, Vestfold University College - Tønsberg, Norway
Åsa Smedberg, Stockholm University/The Royal Institute of Technology, Sweden
The sixth edition of The International Conference on Digital Society (ICDS 2012) was held in Valencia, Spain, on January 30th – February 4th, 2012.
Nowadays, most economic activities and business models are driven by the unprecedented evolution of theories and technologies. The impregnation of these achievements into our society is present everywhere, and it is only a question of user education and business-model optimization on the way towards a digital society.
Progress in cognitive science, knowledge acquisition, representation, and processing helped to deal with imprecise, uncertain or incomplete information. Management of geographical and temporal information becomes a challenge, in terms of volume, speed, semantic, decision, and delivery.
Information technologies allow optimization in searching and interpreting data, yet the digital society imposes special constraints in terms of on-demand availability, ethics, and legal aspects, as well as user privacy and safety.
The event was very competitive in its selection process and very well perceived by the international scientific and industrial communities. As such, it attracted excellent contributions and active participation from all over the world. We were very pleased to receive a large number of top-quality contributions.
The accepted papers covered a large spectrum of topics related to advanced networking, applications, social networking, and systems technologies in a digital society. We believe that the ICDS 2012 contributions offered a large panel of solutions to key problems in all areas of digital needs of today’s society.
We take here the opportunity to warmly thank all the members of the ICDS 2012 technical program committee as well as the numerous reviewers. The creation of such a broad and high quality conference program would not have been possible without their involvement. We also kindly thank all the authors that dedicated much of their time and efforts to contribute to the ICDS 2012. We truly believe that thanks to all these efforts, the final conference program consists of top quality contributions.
This event could also not have been a reality without the support of many individuals, organizations and sponsors. In addition, we also gratefully thank the members of the ICDS 2012 organizing committee for their help in handling the logistics and for their work in making this professional meeting a success.
We hope the ICDS 2012 was a successful international forum for the exchange of ideas and results between academia and industry and to promote further progress on the topics of the conference.
We also hope the attendees enjoyed the beautiful surroundings of Valencia, Spain.
**ICDS 2012 Chairs**
**ICDS 2012 General Chair**
Jaime Lloret Mauri, Polytechnic University of Valencia, Spain
Gregorio Martinez, University of Murcia, Spain
**ICDS 2012 Advisory Committee**
Lasse Berntzen, Vestfold University College - Tønsberg, Norway
Åsa Smedberg, DSV, Stockholm University/KTH, Sweden
Freimut Bodendorf, University of Erlangen, Germany
Adolfo Villafiorita, Fondazione Bruno Kessler, Italy
A.V. Senthil Kumar, Hindusthan College of Arts and Science, India
Charalampos Konstantopoulos, University of Piraeus, Greece
ICDS 2012 Technical Program Committee
Gil Ad Ariely, California State University (CSU), USA / Interdisciplinary Center(IDC) Herzliya, Israel
Adolfo Albaladejo Blázquez, Universidad de Alicante, Spain
Salvador Alcaraz Carrasco, Universidad Miguel Hernández, Spain
Shadi Aljawarneh, Isra University - Amman, Jordan
Giner Alor Hernández, Instituto Tecnológico de Orizaba-Veracruz, México
Aini Aman, Universiti Kebangsaan Malaysia, Malaysia
Pasquale Ardimento, University of Bari, Italy
Marcelo E. Atenas, Universidad Politecnica de Valencia, Spain
Charles K. Ayo, Covenant University, Nigeria
Gilbert Babin, HEC Montréal, Canada
Kambiz Badie, Iran Telecom Research Center & University of Tehran, Iran
Lasse Berntzen, Vestfold University College - Tønsberg, Norway
Aljoša Jerman Blažič, SETCCE - Ljubljana, Slovenia
Marco Block-Berlitz, Mediadesign Hochschule- Berlin, Germany
Nicola Boffoli, University of Bari, Italy
Mahmoud Boufaida, Mentouri University of Constantine, Algeria
Mahmoud Brahimi, University of Msila, Algeria
Diana Bri, Universidad Politecnica de Valencia, Spain
Luís M. Camarinha-Matos, New University of Lisbon, Portugal
Vlatko Ceric, University of Zagreb, Croatia
Walter Castelnuovo, University of Insubria, Italy
Yul Chu, University of Texas Pan American, USA
David Day, Sheffield Hallam University, UK
Gert-Jan de Vreede, University of Nebraska at Omaha, USA
Prokopios Drogkaris, University of the Aegean - Karlovasi, Greece
Mohamed Dafir El Kettani, ENSIAS - University Mohammed V-Souissi – Rabat, Morocco
Matthias Finger, Swiss Federal Institute of Technology, Switzerland
Alea M. Fairchild, Vrije University Brussel & Hogeschool University Brussel, Belgium
Karla Felix Navarro, University of Technology, Sydney
Robert Forster, Edgemount Solutions, USA
Roberto Fragale, Universidade Federal Fluminense (UFF) & Fundação Getúlio Vargas (FGV-RJ), Brazil
Shauneen Furlong, Territorial Communications Ltd.-Ottawa, Canada / University of Liverpool, UK
Jean-Gabriel Ganascia, University Pierre et Marie Curie, France
Miguel García, Universidad Politecnica de Valencia, Spain
Genady Grabarnik, CUNY - New York, USA
Panos Hahamis, University of Westminster - London, UK
Gy R. Hashim, Universiti Teknologi Mara, Malaysia
Mikko Heikkinen, Aalto University, Finland
Hany Abdelghaffar Ismail, German University in Cairo (GUC), Egypt
Marko Jäntti, University of Eastern Finland, Finland
Maria João Simões, University of Beira Interior, Portugal
Mohammad Kajbaf, INFOAMN, Iran
Atsushi Kanai, Hosei University, Japan
Georgios Kapogiannis, The University of Salford, UK
Károly Kondorosi, Budapest University of Technology and Economics (BME), Hungary
Christian Kop, University of Klagenfurt, Austria
Andrew Kusiak, The University of Iowa, USA
Antti Lahtela, Regional State Administrative Agency for Eastern Finland, Finland
Peter Mikulecky, University of Hradec Králové, Czech Republic
John Morison, Queen's University - Belfast, UK
Darren Mundy, University of Hull, UK
Khaled Nagi, Alexandria University, Egypt
SangKyun Noh, BNERS, Korea
M. Kemal Öktem, Hacettepe University - Ankara, Turkey
Daniel E. O’Leary, University of Southern California, USA
Gerard Parr, University of Ulster, UK
Carolina Pascual Villalobos, Universidad de Alicante, Spain
Jyrki Penttinen, Nokia Siemens Networks, Spain
Mick Phythian, De Montfort University - Leicester, UK
Augustin Prodan, Iuliu Hatieganu University - Cluj-Napoca, Romania
Juha Puustjärvi, University of Helsinki, Finland
T. Ramayah, Universiti Sains Malaysia - Penang, Malaysia
Christopher Rentrop, HTWG Konstanz, Germany
Karim Mohammed Rezaul, Glyndwr University - Wrexham, UK
Jarogniew Rykowski, Poznan University of Economics, Poland
Francesc Saigi Rubió, Open University of Catalonia (UOC), Spain
Farzad Sanati, University of Technology - Sydney, Australia
Alain Sandoz, University of Neuchâtel, Switzerland
Antoine Schlechter, Centre de Recherche Public - Gabriel Lippmann, Luxembourg
Rainer Schmidt, Aalen University, Germany
Andreas Schmietendorf, Berlin School of Economics and Law (HWR Berlin), FB II, Germany
Thorsten Schöler, University of Applied Sciences Augsburg, Germany
Hossein Sharif, University of Portsmouth, UK
Larisa Shwartz, IBM T. J. Watson Research Center, USA
Dimitrios Serpanos, ISI/R.C. Athena & University of Patras, Greece
Dharmendra Shadija, Sheffield Hallam University, UK
Jamal Shahin, Vrije Universiteit Brussel, Belgium & University of Amsterdam, The Netherlands
Pushpendra B. Singh, MindTree - Bangalore, India
Åsa Smedberg, Stockholm University, Sweden
Sasikumaran Sreedharan, King Khalid University, Saudi Arabia
Maryam Tayefeh Mahmoudi, Research Institute for ICT, Iran
Sampo Teräs, Aalto University, Finland
Steffen Thiel, Furtwangen University of Applied Sciences, Germany
Ashley Thomas, Dell Secureworks, USA
Ioan Toma, STI, Austria
Jesus Tomas, Universidad Politecnica de Valencia, Spain
Jengnan Tzeng, National Chengchi University - Taipei, Taiwan
Nikos Vrakas, University of Piraeus, Greece
Komminist Weldemariam, Fondazione Bruno Kessler (FBK-Irst), Italy
Alex Wiesmaier, AGT Germany, Germany
Qishi Wu, University of Memphis, USA
Xiaoli (Lucy) Yang, Purdue University - Calumet, USA
Zhengxu Zhao, Shijiazhuang Tiedao University, P. R. of China
Rongbo Zhu, South-Central University for Nationalities - Wuhan, P. R. China
Dimitrios Zissis, University of the Aegean, Greece
Copyright Information
For your reference, this is the text governing the copyright release for material published by IARIA.
The copyright release is a transfer of publication rights, which allows IARIA and its partners to drive the dissemination of the published material. This allows IARIA to give articles increased visibility via distribution, inclusion in libraries, and arrangements for submission to indexes.
I, the undersigned, declare that the article is original, and that I represent the authors of this article in the copyright release matters. If this work has been done as work-for-hire, I have obtained all necessary clearances to execute a copyright release. I hereby irrevocably transfer exclusive copyright for this material to IARIA. I give IARIA permission to reproduce the work in any media format such as, but not limited to, print, digital, or electronic. I give IARIA permission to distribute the materials without restriction to any institutions or individuals. I give IARIA permission to submit the work for inclusion in article repositories as IARIA sees fit.
I, the undersigned, declare that to the best of my knowledge, the article does not contain libelous or otherwise unlawful content, does not invade the right of privacy, and does not infringe on any proprietary right.
Following the copyright release, any circulated version of the article must bear the copyright notice and any header and footer information that IARIA applies to the published article.
IARIA grants royalty-free permission to the authors to disseminate the work, under the above provisions, for any academic, commercial, or industrial use. IARIA grants royalty-free permission to any individuals or institutions to make the article available electronically, online, or in print.
IARIA acknowledges that rights to any algorithm, process, procedure, apparatus, or articles of manufacture remain with the authors and their employers.
I, the undersigned, understand that IARIA will not be liable, in contract, tort (including, without limitation, negligence), pre-contract or other representations (other than fraudulent misrepresentations) or otherwise in connection with the publication of my work.
Exception to the above is made for work-for-hire performed while employed by the government. In that case, copyright to the material remains with the said government. The rightful owners (authors and government entity) grant unlimited and unrestricted permission to IARIA, IARIA's contractors, and IARIA's partners to further distribute the work.
| Title | Page |
|----------------------------------------------------------------------|------|
| Polish E-Government at Local Level: Heavy Road to Citizens’ Empowerment | 1 |
| Leszek Porebski | |
| Community Detection based on Structural and Attribute Similarities | 7 |
| The Anh Dang and Emmanuel Viennet | |
| The Evolution of the e-ID card in Belgium: Data Privacy and Multi-Application Usage | 13 |
| Alea Fairchild and Bruno de Vuyst | |
| Analyzing Social Roles using Enriched Social Network on On-Line Sub-Communities. | 17 |
| Mathilde Forestier, Julien Velcin, and Djamel A. Zighed | |
| Unique Domain-specific Identification for E-Government Applications | 23 |
| Peter Schartner | |
| Sociological Reflections on E-government | 29 |
| Maria Joao Simoes | |
| SSEDIC: Building a Thematic Network for European eID | 35 |
| Victoriano Giralt, Hugo Kerschot, and Jon Shamah | |
| Designing National Identity: An Organisational Perspective on Requirements for National Identity Management Systems | 40 |
| Adrian Rahaman and Martina Angela Sasse | |
| Towards the Automatic Management of Vaccination Process in Jordan | 50 |
| Edward Jaser and Islam Ahmad | |
| Three Dimensional Printing: An Introduction for Information Professionals | 54 |
| Julie Marcoux and Kenneth-Roy Bonin | |
| Unsupervised Personality Recognition for Social Network Sites | 59 |
| Fabio Celli | |
| Cellular Automata: Simulations Using Matlab | 63 |
| Stavros Athanassopoulos, Christos Kaklamanis, Gerasimos Kalfoutzos, and Evi Papaioannou | |
| Fast Polynomial Approximation Acceleration on the GPU | 69 |
| Lumir Janosek and Martin Nemec | |
| Generating Context-aware Recommendations using Banking Data in a Mobile Recommender System | 73 |
| *Daniel Gallego Vico, Gabriel Huecas, and Joaquin Salvachua Rodriguez* | |
| Web Personalization: Implications and Challenges | 79 |
| *Ahmad Kardan and Amirhossein Roshanzamir* | |
| New Service Development Method for Prosumer Environments | 86 |
| *Ramon Alcarria, Tomas Robles, Augusto Morales, and Sergio Gonzalez-Miranda* | |
| Digital Investigations for Enterprise Information Architectures | 92 |
| *Syed Naqvi, Gautier Dallons, and Christophe Ponsard* | |
| Shadow IT - Management and Control of Unofficial IT | 98 |
| *Christopher Rentrop and Stephan Zimmermann* | |
| A Secure and Distributed Infrastructure for Health Record Access | 103 |
| *Victoriano Giralt* | |
| Active Mechanisms for Cloud Environments | 109 |
| *Irina Astrova, Arne Koschel, Stella Gatziu Grivas, Marc Schaaf, Ilja Hellwich, Sven Kasten, Nedim Vaizovic, and Christoph Wiens* | |
| Information Technology Planning For Collaborative Product Development Through Fuzzy QFD | 115 |
| *Jbid Arsenyan and Gulcin Buyukozkan* | |
| Indoor IEEE 802.11g Radio Coverage Study | 121 |
| *Sandra Sendra, Laura Ferrando, Jaime Lloret, and Alejandro Canovas* | |
| Security Issues of WiMAX Networks with High Altitude Platforms | 127 |
| *Ilija Basicevic and Miroslav Popovic* | |
| Alteration Method of Schedule Information on Public Cloud | 132 |
| *Tatsuya Miyagami, Atsushi Kanai, Noriaki Saito, Shigeaki Tanimoto, and Hiroyuki Sato* | |
| Identifying Potentially Useful Email Header Features for Email Spam Filtering | 140 |
| *Omar Al-Jarrah, Ismail Khater, and Basheer Al-Duwairi* | |
| Fault Tolerant Distributed Embedded Architecture and Verification | 146 |
| *Chandrasekaran Subramaniam, Prasanna Vetrivel, Srinath Badri, and Sriram Badri* | |
| Determining Authentication Strength for Smart Card-based Authentication Use Cases | 153 |
| *Ramaswamy Chandramouli* | |
Polish E-Government at Local Level
Heavy Road to Citizens’ Empowerment
Leszek Porebski
Department of Political Science and Contemporary History
AGH University of Science and Technology
Krakow, Poland
email@example.com
Abstract—The paper evaluates the development of e-government in Polish local governments, within the framework of the role played by the individual in political processes. Presented here are the results of empirical research carried out in the period 2005-2009. The study comprised an assessment of the official websites of Polish counties, the secondary level of the local government system. Sites of 314 counties were analyzed with the application of a quantitative method based on the Website Attribute Evaluation System. The change of citizens’ position with respect to public institutions was assessed against the background of the four basic functions performed by local government websites: information, promotion, consultation and service delivery. Research results indicate that the local level of Polish e-government is at a preliminary stage of development and that the impact of new technologies on the model of local democracy is limited.
Keywords-e-government; e-democracy; websites content; local democracy; local government.
I. INTRODUCTION
The use of Information and Communication Technologies (ICT) by public institutions remains one of the most popular issues among scholars dealing with the social implications of the information revolution. Nevertheless, the local level still attracts much less attention than the activity of parliaments, governments or governmental agencies. While the vast majority of both individual research projects and international benchmarking studies focus mostly on the consequences of ICT use at the macro-scale of political and administrative processes [1], [2], [3], from the perspective of democratic theory it is the study of local democracy that offers an excellent insight into political transformation. Surveying what occurs in local communities can be a first step toward identifying the overall direction in which democracy is heading.
The present paper presents an assessment of the role played by local government websites in redefining the model of contemporary democracy. The focus of the analysis is on the position taken by the individual in his relations with the state, represented by public institutions. Without a doubt, ICT modify patterns of interaction between various political actors. Nonetheless, the consequences of new technologies for the status of the citizen in relation to the state have been vigorously debated for several years. ICT enthusiasts note the decidedly positive impact of technologies on political life, such as the empowerment of the individual in the realm of political communication, political participation and decision making [4], [5]. More skeptical observers, however, point out that cyberspace in fact mirrors the real-life “politics as usual” game, with the same actors dominating the scene [6]. Others claim that it is too early to prejudge the ultimate effect of ICT use in political processes [7], [8].
Is the role of the individual, especially in local democracy, enhanced by new technologies? To what extent do the websites of local government institutions stimulate civic activism and participatory attitudes? What type of democracy is formed by the way local authorities use ICT? These issues are addressed in the paper on the basis of an empirical assessment of the content of Polish local government websites.
II. BASIC TERMS
E-government is a notion which is commonly used and at the same time lacks an agreed, precise meaning. It can be defined as the use of technology in the management and delivery of public services [9], or the employment of ICT to provide electronic services to citizens, businesses and organizations [10]. The same term is, however, at times described in a much broader perspective. According to Carbo and Williams [11], the role of e-government is also to involve citizens in the democratic process and in decision making in a convenient, customer-oriented and cost-effective way. Consequently, e-government cannot be reduced to the process of electronic service distribution. It is much more than a merely technological phenomenon. The essence of e-government is the reconstruction of mutual interactions between citizens and service providers [12].
In this paper, the broad concept of e-government is assumed. It comprises a few basic dimensions, the most important of which are: delivery of public services, provision of information, strengthening of public debate, and stimulation of citizens’ participation and involvement in the decision-making process. Such a wide-ranging approach to e-government makes it very close to the notion of e-democracy. In fact, e-government can be considered the aspect of e-democracy associated with the activity of various types of public institutions. This obviously implies that the performance of e-government considerably determines whether the overall goals of ICT use in politics can be accomplished, regardless of how they are articulated.
III. Subject and Scope of the Research
The research project presented in this paper was dedicated to the analysis of official websites of Polish counties. In the three-tier system of local government in Poland (introduced in January 1999) municipalities are the primary units, counties are units on the secondary level and provinces make up the third tier of the system. There are 2478 municipalities, 379 counties and 16 provinces altogether.
There are two different types of counties in Poland: urban and territorial. Cities with populations over one hundred thousand residents constitute the 65 urban counties. The territory of the county is in this case limited to the area of the single city – the county seat. Nevertheless, these cities are legally endowed with the rights of counties. Counties of the second category – territorial counties – are composed of several rural and urban municipalities. The largest city in the area is usually the seat of the county and performs the role of the educational, economic and cultural center of the region as well. Regions represented by territorial counties are very often bound by strong ties rooted in shared history and common traditions. The sense of local identity is thus often preserved both by residents of the county and by its local government authorities.
The elected organ of the county is the council, while the executive branch is represented by the county board. The chair of the board is in charge of both the day-to-day work of the county administration and the execution of the policy adopted by the council. There are several statutory tasks of the county, the most important of which include: health care, social welfare, public transport and public road maintenance, culture and tourism, education, and building supervision.
In the research reviewed here, only websites of territorial counties were selected for the analysis. Urban counties were excluded from the study to ensure the internal cohesion of the sample and to allow for generalized conclusions. Cities with populations close to or even greater than half a million inhabitants are very different from the majority of territorial counties. The latter are typically rural and sparsely populated units, often – as mentioned above – founded around common history and enduring social ties. Therefore, the assumption that both types of counties are equal (and including them within the same sample) would distort the results of the study.
Consequently, websites of all the 314 territorial counties in Poland were analyzed within the framework of the project. Only official websites, formally maintained by the county offices, were included. The research was carried out for five years, from 2005 to 2009, between April and May of each year.
IV. Method of the Research
The major goal of the research project presented in this paper was a comprehensive assessment of the content of the counties’ websites. The questionnaire constructed for the study was a quantitative one, based on the overall idea proposed by the Cyberspace Policy Research Group, known as the Website Attribute Evaluation System (WAES). The WAES is used both in the analysis of websites [13] and as a point of reference for researchers of website performance [14], [15]. WAES is a binary tool: it analyses the content of a website against specific detailed criteria (types of information, services, web tools). A given component of the content either exists or is absent, and a score of either “0” or “1” is assigned to the criterion accordingly.
The questionnaire applied in the analysis of websites of Polish counties was founded on the same principle. In the 2005 edition of the research it included 55 detailed criteria. After minor modifications introduced in 2006 (a few criteria were substituted with new ones) the number of criteria was reduced to 54. This final version of the questionnaire was used in the research conducted from 2006 to 2009.
Prior to the beginning of the actual research, in November 2004, a preliminary, qualitative survey of several local government websites was performed. Four major aspects of website content were identified on the basis of its results. These are the major functions performed by websites in the everyday activity of local government institutions: information, promotion, consultation and service delivery. In the questionnaire, several specific criteria were assigned to each function, as illustrated by the sketch below.
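To make the binary scoring concrete, the following Python sketch tallies a WAES-style questionnaire. The criteria names, their grouping, and the sample audit are hypothetical placeholders, not the actual 54-item instrument used in the study.

```python
# WAES-style binary scoring (a minimal sketch; the criteria below are
# hypothetical stand-ins for the actual questionnaire items).

CRITERIA = {
    "information":      ["council_composition", "office_hours", "links_to_ngos"],
    "promotion":        ["tourist_offer", "foreign_language_version"],
    "consultation":     ["office_email", "online_poll", "discussion_forum"],
    "service_delivery": ["downloadable_forms", "online_application"],
}

def score_site(audit):
    """audit maps criterion -> True/False; returns per-function and total scores."""
    scores = {}
    for function, criteria in CRITERIA.items():
        # Binary scoring: each fulfilled criterion contributes exactly one point.
        scores[function] = sum(1 for c in criteria if audit.get(c, False))
    scores["total"] = sum(scores.values())
    return scores

# Example audit of one (hypothetical) county website.
audit = {"council_composition": True, "office_hours": True,
         "office_email": True, "downloadable_forms": True}
print(score_site(audit))
# -> {'information': 2, 'promotion': 0, 'consultation': 1,
#     'service_delivery': 1, 'total': 4}
```

Per-function percentages such as those reported later can then be obtained by dividing each function’s score by the number of criteria assigned to it.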
The information function is associated with the access of Internet users to various types of data. Local governments publish both basic personal information (composition of the county council and the board) and information pertaining to their work (office hours, announcements), as well as other information helpful for customers of the county administration. They include: the division of powers among various departments of the county administration and information on handling specific matters. A website of local government can be also regarded as a hub in the network connecting many different kinds of public institutions, local civic initiatives, NGOs, etc. The simplest way to facilitate this process is to place links to such organizations on the website. Accessibility of such links is also a part of information function.
Promotion is the only function performed by the website aimed mainly at non-residents of the county. That aspect of online presence includes the presentation of touristic and cultural qualities of the region (directed to individual visitors) as well as commercial assets (e.g., offers to potential investors). An important dimension of promoting the region is also the availability of website content in foreign languages.
Consultation is the most directly “political” dimension of local government websites content. It includes services and tools that stimulate public debate on local issues and enhance communication with citizens as well as civic participation.
Detailed criteria include the availability of email addresses to local government representatives, online polls, discussion forum or chat.
The last function, electronic delivery of public services, can be regarded as synonymous to the narrowly defined concept of e-government. It refers to interactions between local government administration and the individual (considered as a beneficiary of various services). The questionnaire applied in the research has not assessed the electronic availability of specific services. Instead, stages of online sophistication were measured. They include: downloading forms, the ability to apply online, online transactions with the office as well as the possibility of tracking the individual matter handling.
Beside the survey of four major functions the questionnaire included also a few criteria assessing the availability of additional services. They were: accessibility of the web site for persons with disabilities and presence of various types of multimedia content (pictures, audio and video materials).
All major functions performed by local government websites can be regarded as founding elements of the broadly defined domain of e-government. Therefore, the research of Polish counties websites was in fact an attempt to evaluate the standing of local dimension of e-government development during the first decade of the 21st century. It was also the indirect indicator of the current status of e-democracy in local communities.
V. THE CONTENT OF LOCAL GOVERNMENT WEBSITES. RESEARCH RESULTS
A. Overall results and the distribution of scores
As mentioned above, the maximum score that could be obtained in the research was 55 points in 2005 and 54 points in the 2006-2009 period. Fig. 1 presents the average scores achieved by county websites in the consecutive years of the analysis. The results indicate that, except for 2006, when a slight decline in the total score was noticed, there was gradual and steady growth in the overall sophistication of the websites. The greatest progress appeared between 2006 and 2007, followed by a decline in the pace of growth.
This development can be considered disappointing, especially when compared with the rapid growth of overall ICT accessibility during the same period all over the world, including Poland. Local governments improved their web offer, but they have hardly kept abreast of the overall progress in technology.
Fig. 2 depicts the distribution of scores in the first (2005) and the last (2009) year of the research project execution. In 2005 about one third of all the websites (33.7%) scored between 16 and 20 points. More than one hundred counties can be found in this interval, making it the most numerous category of scores.
A quite similar frequency of scores can be found, however, in the interval between 11 and 15 points, a category of scores clearly below the mean value. Altogether, as much as 85% of all the websites scored between 11 and 25 points. The single minimum result was 3 points, while the maximum score was 31 points (obtained by two websites). This shows that the gap between leaders and laggards was considerable.
Five years later, in 2009, the distribution of scores clearly leans towards results surpassing the mean value. At this time the most common category of scores is the interval between 21 and 25 points (more than both the mean and the median value).
Simultaneously, the frequency of scores between 11 and 15 points is almost two times lower than in 2005. In 2009 both the minimum and the maximum results have increased. They were: 7 and 34 points, respectively. Thus the whole sample remained very much differentiated, with the variation between the best and the worst websites similar to that of 2005.
B. Basic functions performance
The adequate measure of the advancement in the use of Internet capabilities is the presentation of websites score as the percentage of the total number of points, which could have been obtained. These data, concerning both the total score and specific functions, are presented in Table I.
The starting point of the project – 2005 – is the moment when less than half of the criteria taken into account in the research questionnaire were fulfilled. This refers both to the overall score and to each particular function. The performance of information and promotion is much more advanced than that of consultation and service delivery. Nevertheless, even with respect to the provision of information, the leading aspect of web use, the score is slightly below fifty percent. In 2005, electronic service delivery in particular was at a preliminary stage of development. Its score strongly lagged behind all other functions, at half the level of the total score.
Successive years brought about an improvement of scores, but the ordering of the specific functions remained stable. The most advanced dimension of website content is access to information, followed by promotion and consultation. Service delivery remained the least developed throughout. In 2009, the total score was about one fifth higher than in the first year of the research, and approximately the same level of increase can be observed with respect to information provision. The lowest increase occurred in the case of promotion; in fact, it was the only function that barely grew during the five years of the research. The aspects of website content which registered the most significant progress are consultation and, in particular, service delivery. In the case of the latter, spectacular growth took place during the last two years of the analysis: in 2007 the score of this function was lower than in 2005, while over the next two years it improved by almost seventy percent.
Data comparison between 2005 and 2009 proves gradual transformation of the pattern of websites use by Polish local government.
In the first year of the research passive and one-way forms of web communication (information and promotion) were dominating in the offer of self-government institutions for the Internet users. In the successive years, mostly interactive features of websites were improved, making the content of websites more balanced and open to more active participation of citizens. It represents the typical model of e-government and e-democracy development [16].
Nevertheless, the functions which are essential from the perspective of democratic theory – consultation and service delivery – are still evidently delayed. Unless the pace of the described changes accelerates, websites of Polish local governments will continue to function as electronic bulletin boards rather than as tools of real political and civic interaction.
C. Availability of resources encouraging the development of e-democracy
In the framework of the analysis of Polish local government websites, three functions play a critical role in supporting a citizens-oriented model of e-government: information, consultation and service delivery. The accessibility of particular types of information, web tools and services is therefore an adequate measure of the “democratic maturity” of the assessed websites.
Table II presents data on selected criteria of the information function. As mentioned in the previous part of this text, the provision of information is performed at a relatively satisfactory level.
Basic data on the county office are available on virtually every one of the analyzed websites (however, in 2009 precise instructions on handling particular matters could be found on only every second assessed site – 49.4%). Moreover, the local government site is gradually becoming an information center for the local community. In the consecutive years of the research there was visible growth in the availability of resources useful in the everyday life of residents. This refers to the local newsletter (with information on movie programs, cultural events, etc.) and links to the websites of municipalities in the county as well as to the sites of local NGOs. In 2009 these resources were available on the vast majority of the surveyed websites.
A completely different picture emerges when consultation resources are analyzed (see Table III). These are services dedicated directly to the encouragement of civic activity and public debate. Therefore, their presence on local government sites is the indicator of local authorities’ readiness to face the real e-participation. Research results suggest that representatives of Polish local governments are not exactly enthusiastic about this prospect.
TABLE II. Availability of Selected Information Resources (% of websites)

| Content | 2005 | 2007 | 2009 |
|-------------------------------------|------|------|------|
| Organization of the office | 89.2 | 89.2 | 92.0 |
| Local newsletter | 54.8 | 58.0 | 70.4 |
| Links to government websites | 33.1 | 33.8 | 29.3 |
| Links to websites of municipalities | 74.2 | 87.6 | 87.6 |
| Links to websites of local NGOs | 51.6 | 57.9 | 63.7 |
TABLE III. Availability of Selected Consultation Resources (% of websites)

| Content | 2005 | 2007 | 2009 |
|---------------------------------------------------------------|------|------|------|
| Email address to the office | 90.8 | 95.5 | 95.5 |
| Online discussion forum | 27.1 | 16.2 | 13.7 |
| Online poll on local issues | 15.6 | 19.1 | 18.8 |
| Chat with county officials | 4.5 | 2.2 | 2.6 |
| Interactive service for direct contact with county officials | 8.0 | 14.3 | 16.9 |
The only resource commonly available on the assessed websites is the office email address (it is worth stressing, however, that even in 2009 a few counties had not published their own email addresses on their websites). Nevertheless, making an email address accessible is not the same as responsiveness. In 2009, only one fifth of websites (21.7%) replied to electronic messages sent to local authorities during the research, which indicates that email is still not perceived as a regular means of communication with citizens.
All the other, much more sophisticated resources can be found on only a relatively small share of the analyzed sites. This refers to discussion forums, online polls, chats and various types of interactive services facilitating contact with representatives of local authorities. In 2009, none of these resources was available on more than one fifth of the evaluated sites; chat in particular was hardly available at all (only 2.6% of sites enabled the use of that service). Of the tools stimulating public debate, the discussion forum was the most popular in 2005: every fourth assessed site (27.1%) provided the opportunity to debate local issues. The following years brought a remarkable decline in the accessibility of that service (13.7% in 2009). This tendency goes together with the growing popularity of online tools assisting in direct contact with county officials. The two services, however, represent different forms of online communication. Discussion forums are open to multilateral communication, protect anonymity and provide an arena for real deliberation of public problems. Conversely, services facilitating contact with local representatives enable only bilateral interaction, in which an individual Internet user can ask a question or present his or her views to a particular county official. In addition, in 2009 more than half of the latter services (54.7%) required revealing the personal data of the user. The growing popularity of this mode of local debate reveals an obvious intention of local authorities to manage the course of online dialogue with citizens. What can be observed, then, is the emergence of a supervised model of e-democracy.
The distribution of public services is another aspect of online activity which is of fundamental importance in the context of e-government development. With regard to Polish local governments, electronic delivery of services – as mentioned before – performs at a disappointingly low level (see Table IV). A relatively accessible option is the ability to download various forms from the county site: close to two thirds of local governments (59.6%) provided this opportunity in 2009. By contrast, online transactions are still hardly possible on the analyzed sites; only seven out of 314 counties (2.2%) made that service available.
TABLE IV. Availability of Selected Service Delivery Resources (% of websites)

| Content | 2005 | 2007 | 2009 |
|--------------------------|------|------|------|
| Downloading forms | 46.2 | 55.1 | 59.6 |
| Ability to apply online | 2.9 | 2.6 | 28.7 |
| Online transactions\(^a\) | - | 0.3 | 2.2 |

\(^a\) The criterion was added to the questionnaire in 2006.
Thus, despite considerable improvement in recent years, public service delivery remains the major challenge for local governments. Their websites will either add electronic distribution of services to their online offer, or they will stay behind the main current of the information revolution.
VI. Conclusion
The use of ICT can support various forms of democracy. E-voting obviously strengthens the representative, procedural model of government, while virtual local communities reinforce libertarian aspects of democracy [17]. In the theoretical framework of the research presented in this paper, the ultimate model of democracy formed by ICT was not decided in advance. Instead, the general term “citizens’ empowerment” was introduced as a possible model of the growing role of individuals in political processes.
It is worth stressing that the same analytical framework can be applied to the study of various levels of government. Further research is needed, however, to assess whether the phenomena and processes which characterize local democracy are visible in states and democratic political systems as well.
The data presented above indicate that, with regard to Polish local government websites, we can barely observe any reinforcement of the individual’s status in interactions with public institutions. Thanks to websites, citizens certainly have much better access to information; in that domain, the costs they bear to pursue their democratic rights are visibly reduced. It is, however, the only dimension of e-government that really works. Well-informed individuals who are ready to get involved in local public life face the challenge of a very poor offer provided by local authorities. In the case of Polish counties, increasing e-participation is not among the top priorities of local elites. Obviously, there are numerous places where online local debate can proceed; however, the websites of local government (the natural spot to confront residents’ views with those of their representatives) are not at the forefront of encouraging civic engagement.
Electronic delivery of services, which makes the operations of public institutions more transparent and customer-oriented, is yet another aspect of the possible empowerment of citizens. This element of e-government has recently undergone perhaps the most spectacular transformation enabling the improvement of a citizen’s position in his or her relations with the state. Nevertheless, the benefits of ICT use in service distribution bypass users of Polish local government sites. The inability to perform transactions with the office and very limited access to online applications prove that electronic service delivery is still at a preliminary stage of development. In his or her relations with local administration, a resident of the county is still treated more as a passive petitioner than as a customer whose satisfaction is critical in evaluating the performance of the office.
The overall image of local e-government in Poland, based on the data concerning county websites, does not support the thesis of an observable reinforcement of the role individuals play in political processes. Thus far, ICT seem to have a very limited impact on the nature of local democracy. They rather strengthen the existing rules of the game, with the dominant position of political institutions, sluggish public debate and low intensity of political participation. Members of local communities are still in search of effective means to empower their political position. It seems that, at the moment, local government websites remain less than helpful in this endeavor.
ACKNOWLEDGMENT
This work was funded by the Polish Ministry of Science and Higher Education (research project NN116 331338).
REFERENCES
[1] D. Janssen, S. Rotthier and K. Snijkers, “If you measure it they will score: an assessment of the international eGovernment benchmarking,” Information Polity, Vol. 9, 2004, pp. 121-130.
[2] F. Salem, “Benchmarking the e-government bulldozer: beyond measuring the tread marks,” Measuring Business Excellence, Vol. 11, Issue 4, 2007, pp. 9-22.
[3] F. Bannister, “The curse of the benchmark: an assessment of the validity and value of e-government comparisons,” International Review of Administrative Sciences, Vol. 73, June 2007, pp. 171-188.
[4] L. Grossman, The Electronic Republic, New York: Viking, 1995.
[5] N. Negroponte, Being Digital, New York: Vintage, 1995.
[6] M. Margolis and D. Resnick, Politics as Usual: The Cyberspace ‘Revolution’, London: Sage, 2000.
[7] P. Norris, A Virtuous Circle: Political Communications in Postindustrial Societies, Cambridge: Cambridge University Press, 2000.
[8] L. Porebski, “Three faces of electronic democracy,” Proc. Xth European Conference on Information Systems (ECIS 2002), Gdansk, June 2002, pp. 1218-1227.
[9] K. Edmiston, “State and local e-government: prospects and challenges,” American Review of Public Administration, Vol. 33, March 2003, pp. 20-45.
[10] L. Berntzen and M. Olsen, “Benchmarking E-Government: a comparative review of three international benchmarking studies,” Proc. Third International Conference on Digital society, 2009, pp. 77-82.
[11] T. Carbo and J. Williams, “Models and metrics for evaluating local electronic government systems and services,” Electronic Journal of E-Government, Vol. 2, 2004, pp. 95-104.
[12] A. Evangelidis, J. Akomode, A. Taleb-Bendiab and M. Taylor, “Risk assessment & success factors for e-government in a UK establishment,” Proc. The First International Conference on Electronic Government, 2002, pp. 395-401.
[13] P. Ferber, F. Foltz and R. Pugliese, “The Politics of state legislature web sites: making e-government more participatory,” Bulletin of Science, Technology & Society, Vol. 23, June 2003, pp. 157-167.
[14] P. Leith, and J. Morison, “Communication and dialogue: what government websites might tell us about citizenship and governance,” International Review of Law, Computers and Technology, Vol. 18, Issue 1, 2004, pp. 25-35.
[15] J. H. Lim and S. Y. Tang, “Urban e-government initiatives and environmental decision performance in Korea,” Journal of Public Administration Research and Theory, Vol. 18, January 2008, pp. 109-138.
[16] L. Porebski, “Evaluating the development of eGovernment systems: the case of Polish local government Web Sites,” Proc. the 11th European Conference on eGovernment (ECEG 2011), June 2011, pp. 475-481.
[17] J. Van Dijk, The Network Society: Social Aspects of New Media, London: Sage, 1999.
Community Detection based on Structural and Attribute Similarities
The Anh Dang, Emmanuel Viennet
L2TI - Institut Galilée - Université Paris-Nord
99, avenue Jean-Baptiste Clément - 93430 Villetaneuse - France
{theanh.dang, firstname.lastname}@example.org
Abstract—The study of social networks has gained much interest from the research community in recent years. One important challenge is to search for communities in social networks. A community is defined as a group of users such that they interact with each other more frequently than with those outside the group. Being able to identify the community structure can facilitate many tasks such as recommendation of friends, network analysis and visualization. In real-world networks, in addition to topological structure (i.e., links), content information is also available. Existing community detection methods are usually based on the structural features and do not take into account the attributes of nodes. In this paper, we propose two algorithms that use both structural and attribute information to extract communities. Our methods partition a graph with attributes into communities so that the nodes in the same community are densely connected as well as homogeneous. Experimental results demonstrate that our methods provide more meaningful communities than conventional methods that consider only relationship information.
Keywords-social network; community detection; clustering;
I. INTRODUCTION
Social networks of various kinds demonstrate a feature called community structure. Individuals in a network tend to form closely-knit groups, called communities or clusters in different contexts. Community detection is the task of detecting these cohesive groups in a social network [1] [2]. In many real-world networks, in addition to topological structure, content information is also available: data is associated with the nodes in the form of text, images, etc. For example, in a social network each user has information about age, profession, interests, etc. When content data is available, it might be relevant to extract groups of nodes that are not only connected in the social graph but also share similar attributes.
Many existing community detection techniques only focus on the topological structure of the graph. On the other hand, data clustering has been studied for a long time but most algorithms (e.g., k-means, EM) do not deal with relational data. The work of incorporating structural and attribute data has not been thoroughly studied yet in the context of large social graphs. This is the motivation of our work. Our key contributions are summarized next. In this paper, we study the relationship between semantic similarity of users and the topology of social networks (homophily concept). We propose two approaches to extract communities on several real-world datasets. Based on our evaluations, we conclude that our methods are able to discover more relevant communities.
II. RELATED WORK
Detecting communities in a social network is still an open problem in social network analysis. In the literature, many community detection methods have been proposed. According to [1], these approaches can be divided into four categories: node-centric, group-centric, network-centric and hierarchy-centric. Some popular methods are modularity maximization [3] [4], the Girvan-Newman algorithm [5], the Louvain algorithm [6], clique percolation [7] and link communities [8]; [2] and [9] provide a thorough review of the topic. However, these methods ignore the attributes of the nodes. Below are some studies that incorporate node attributes in the clustering process. Steinhaeuser et al. [10] proposed an edge weighting method NAS (Node Attribute Similarity) that takes node attributes into account; a community detection method based on random walks is then built on top of it. The complexity of the algorithm is $O(n^2 \log n)$ (for random walks) or $O(n)$ (for scalable random walks), where $n$ is the number of nodes. Zhou et al. [11] defined a unified distance measure to combine structural and attribute similarities. Attribute nodes and edges are added to the original graph to connect nodes which share attribute values, and a neighborhood random walk model is used to measure node closeness on the augmented graph. A clustering algorithm, SA-Cluster, is proposed, following the K-Medoids method. The time complexity of the algorithm is $O(n^3)$.
Coupling relationship and content information for community discovery in social networks is an emerging research area, because current methods either do not focus on social graphs or are not efficient for large-scale datasets.
III. PROBLEM STATEMENT
An attributed graph is denoted as $G = (V, E, X)$, where $V$ is the set of nodes, $E$ is the set of edges, and $X = X^1, ..., X^d$ is the set of $d$ attributes associated with the nodes in $V$. Each vertex $v_i$ is associated with an attribute vector $(x^1_i, ..., x^d_i)$. The goal of this work is to partition an attributed graph into $K$ disjoint groups (i.e., communities) $G_i = (V_i, E_i, X)$, where $V = \bigcup_{i=1}^{K} V_i$ and $V_i \cap V_j = \emptyset$ for all $i \neq j$. Nodes in the same community are expected to be highly connected and to have similar attributes.
Before clustering, a similarity measure must be determined. Our algorithms do not depend on the details of the measurement. Let \( simA(i,j) \) be the similarity between a pair of nodes \((i,j)\) in an attributed graph \( G = (V,E,X) \). The measure should reflect the degree of closeness of the nodes in terms of their attribute values. An attribute can be classified as continuous, discrete or textual.
If the attributes are discrete, a commonly used similarity measure is based on the simple matching criterion. The similarity between two nodes in an attributed graph is determined by examining each of the \( d \) attributes and counting the number of attribute values they have in common.
For continuous attributes, the most commonly used metric is based on the Euclidean distance.
\[
simA(i,j) = \frac{1}{1 + \sqrt{\sum_d (x^d_i - x^d_j)^2}}
\]
If the attributes are textual, we first need to transform them into numeric values. A text document can be represented as a bag of words, each word being a separate variable with a numeric weight. The most popular weighting scheme is tf-idf (term frequency-inverse document frequency). Each document is then represented as a vector of weights. To measure the similarity between two document vectors, cosine similarity is the most widely used metric.
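The three variants of $simA$ described above can be sketched in a few lines of Python; the function names are ours, and the tf-idf variant is a bare-bones illustration that assumes both documents belong to the supplied corpus.

```python
import math
from collections import Counter

def sim_discrete(x, y):
    """Simple matching: fraction of the d attributes with identical values."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def sim_continuous(x, y):
    """Eq. (1): simA = 1 / (1 + Euclidean distance)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return 1.0 / (1.0 + dist)

def sim_text(doc_i, doc_j, corpus):
    """Cosine similarity of tf-idf vectors. doc_i and doc_j are assumed
    to be members of corpus (a list of whitespace-tokenizable strings)."""
    n = len(corpus)
    df = Counter(w for doc in corpus for w in set(doc.split()))
    def tfidf(doc):
        tf = Counter(doc.split())
        return {w: tf[w] * math.log(n / df[w]) for w in tf}
    vi, vj = tfidf(doc_i), tfidf(doc_j)
    dot = sum(vi[w] * vj.get(w, 0.0) for w in vi)
    norm = (math.sqrt(sum(v * v for v in vi.values()))
            * math.sqrt(sum(v * v for v in vj.values())))
    return dot / norm if norm else 0.0
```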
### IV. Community Detection Algorithms
In this section, we present two methods to discover communities in an attributed graph, given a similarity measure.
#### A. Algorithm SAC1
Our first approach is based on a modification of Newman’s well-known modularity function. Given a graph of $n$ nodes and $m$ edges, $G_{i,j}$ represents the link $(i,j)$ and $d_i$ is the degree of node $i$. If the graph is partitioned into $K$ clusters, Newman’s modularity [3] can be written as
\[
Q_{Newman} = \sum_{l=1}^{K} \sum_{i \in C_l, j \in C_l} S(i,j)
\]
where the link strength \( S(i,j) \) between two nodes \( i \) and \( j \) is measured by comparing the true network interaction \( G_{ij} \) with the expected number of connections \((d_i \cdot d_j)/2m\)
\[
S(i,j) = \frac{1}{2m} \cdot \left( G_{i,j} - \frac{d_i \cdot d_j}{2m} \right)
\]
Newman’s modularity does not include the attribute similarity between nodes. We define the “modularity attribute” \( Q_{Attr} \) of a partition as
\[
Q_{Attr} = \sum_{C} \sum_{i,j \in C} simA(i,j)
\]
where \( simA \) is the attribute similarity function.
Next, we introduce a composite modularity as a weighted combination of modularity structure (1) and modularity attribute (2)
\[
Q = \sum_{C} \sum_{i,j \in C} \left(\alpha \cdot S(i,j) + (1-\alpha) \cdot simA(i,j)\right) \tag{3}
\]
\( \alpha \) is the weighting factor, \( 0 \leq \alpha \leq 1 \).
The next step is to find an approximate optimization of \( Q \) (direct optimization is an NP-hard problem [12]). We follow an approach directly inspired by the Louvain algorithm [6]. The algorithm starts with each node in its own community. Nodes are then visited in random order, and the algorithm considers moving each node from its current community to the community of one of its neighbors. If a positive gain is found, the node is placed in the community with the maximum gain; otherwise, it stays in its original community. This step is applied repeatedly until no further improvement is achieved.
When moving node \( x \) to community \( C \), the composite modularity gain is calculated as
\[
\Delta Q = \alpha \cdot \Delta Q_{Newman} + (1-\alpha) \cdot \Delta Q_{Attr}
\]
in which
- Gain of modularity structure \( \Delta Q_{Newman} \):
\[
\Delta Q_{Newman} = \sum_{i,j \in C \cup x} S(i,j) - \sum_{i,j \in C} S(i,j)
\]
\[
= \frac{1}{2m} \left( \sum_{i \in C} G_{i,x} - \frac{d_x}{2m} \sum_{i \in C} d_i \right)
\]
- Gain of modularity attribute \( \Delta Q_{Attr} \):
\[
\Delta Q_{Attr} = \sum_{i,j \in C \cup x} simA(i,j) - \sum_{i,j \in C} simA(i,j)
\]
\[
= \sum_{i \in C} simA(x,i)
\]
The first phase is completed when no move of a node yields a positive gain. Following Louvain, we can then reapply this phase by grouping the nodes of each community into a new community-node. The weights between new nodes are given by the sum of the weights of the links between nodes in the corresponding communities [6]. To determine the attribute similarity between two communities, we propose two approaches: the first sums up the similarities of their members; the second uses the similarity of their centroids.
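To make the local-moving phase concrete, here is a minimal sketch in Python (our illustration, not the authors' implementation). It assumes a dense adjacency matrix `G` and a precomputed attribute-similarity matrix `simA`, both $n \times n$ numpy arrays:

```python
import numpy as np

def gain(G, simA, deg, m, members, x, alpha):
    # Composite gain of placing node x into the community whose member
    # indices are `members`: alpha * dQ_Newman + (1 - alpha) * dQ_Attr.
    dq_newman = (G[x, members].sum()
                 - deg[x] * deg[members].sum() / (2 * m)) / (2 * m)
    dq_attr = simA[x, members].sum()
    return alpha * dq_newman + (1 - alpha) * dq_attr

def local_moving(G, simA, alpha, seed=0):
    # One SAC1 phase: repeatedly move each node to the neighboring
    # community with the largest positive composite gain.
    rng = np.random.default_rng(seed)
    n = G.shape[0]
    deg = G.sum(axis=1)
    m = G.sum() / 2.0
    comm = np.arange(n)            # each node starts in its own community
    improved = True
    while improved:
        improved = False
        for x in rng.permutation(n):
            old = comm[x]
            comm[x] = -1           # temporarily remove x from its community
            candidates = {comm[j] for j in np.flatnonzero(G[x]) if comm[j] >= 0}
            candidates.add(old)
            best = max(candidates,
                       key=lambda c: gain(G, simA, deg, m,
                                          np.flatnonzero(comm == c), x, alpha))
            comm[x] = best
            if best != old:
                improved = True
    return comm
```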
B. Algorithm SAC2
Our first algorithm, SAC1, repeatedly checks all nodes, leading to \( O(n^2) \) complexity. To reduce the computational cost, we propose another approach that only makes use of a node’s nearest neighbors.
Algorithm 1 Structure-Attribute Clustering Algorithm SAC1
Input: An attributed graph $G = (V, E, X)$ and a similarity matrix
Output: A set of communities
Phase 1: Initialize each node to its own community
repeat
for $i \in V$ do
for $j \in V$ do
Remove $i$ from its community, place to $j$’s community
Compute the composite modularity gain $\Delta Q$
end for
Choose $j$ with maximum positive gain (if exists) and move $i$ to $j$’s community
Otherwise $i$ stays in its community
end for
until No further improvement in modularity
Phase 2
- Each community is considered as a new node
- Reapply Phase 1
Algorithm 2 Structure-Attribute Clustering Algorithm SAC2
Input: An attributed graph $G = (V, E, X)$
Output: A set of communities
Phase 1: Construct k-NN Graph $G_k$
Phase 2: Apply a detection method to find structural communities in $G_k$. The result corresponds to the communities in $G$
Given an attributed graph $G = (V, E, X)$, we define a k-nearest neighbor (k-NN) graph $G_k = (V, E_k)$ as a directed graph in which each node has exactly $k$ edges, connecting it to its $k$ most similar neighbors in $G$. The similarity measure between two nodes $i$ and $j$ is defined as
$$S(i, j) = \alpha \cdot G_{i,j} + (1 - \alpha) \cdot simA(i, j)$$
where $simA(i, j)$ is the attribute similarity function and $G_{i,j}$ represents the link $(i, j)$. Note that we can replace $G_{i,j}$ by other similarity measures such as Jaccard similarity, cosine similarity, etc.; [13] discusses several similarity metrics based on local information. As in the previous algorithm, $\alpha$ is a weighting factor.
We first apply the measure $S$ to construct the nearest neighbor graph. In $G_k$, an edge thus represents the similarity between two nodes of the original graph $G$, in terms of both structure and attributes.
The naive approach to building a k-NN graph uses $O(n^2)$ time and $O(nk)$ space. However, substantial effort has been devoted to speeding up the process, such as parallel algorithms ([14], [15]) and approximation algorithms ([16], [17]). More recently, [18] introduced $NN-Descent$, an algorithm for approximate k-NN construction with an arbitrary similarity measure; the method is scalable, with an empirical cost of $O(n^{1.14})$.
We thus propose a simple algorithm with two phases: constructing a k-NN graph $G_k$, and finding structural communities in $G_k$ to obtain the final clustering. In Phase 2, various methods can be employed to find communities; in our experiments, we choose Louvain as the detection method because of its scalability. We set $k$ equal to the average degree of the nodes in the graph $G$.
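A minimal sketch of SAC2 under these choices, assuming networkx ≥ 2.8 (for `louvain_communities`) and a user-supplied attribute-similarity function `simA`. For clarity the naive $O(n^2)$ k-NN construction is used rather than NN-Descent, and the directed k-NN graph is approximated by an undirected one:

```python
import heapq
import networkx as nx

def sac2(G, simA, alpha, k=None):
    # Phase 1: build a k-NN graph on S(i,j) = alpha*A[i,j] + (1-alpha)*simA(i,j).
    # Phase 2: run Louvain on the k-NN graph.
    nodes = list(G.nodes())
    if k is None:
        # The paper sets k to the average degree of G.
        k = max(1, round(2 * G.number_of_edges() / len(nodes)))
    A = nx.to_numpy_array(G, nodelist=nodes)
    Gk = nx.Graph()
    Gk.add_nodes_from(nodes)
    for i, u in enumerate(nodes):
        scores = [(alpha * A[i, j] + (1 - alpha) * simA(u, v), j)
                  for j, v in enumerate(nodes) if j != i]
        for s, j in heapq.nlargest(k, scores):   # naive O(n^2) k-NN step
            Gk.add_edge(u, nodes[j], weight=s)
    return nx.community.louvain_communities(Gk, weight="weight", seed=0)
```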
V. EXPERIMENTAL STUDY
A. Experimental Datasets
We perform experiments to evaluate our algorithm on several real social networks:
**Political Blogs Dataset**: A directed network of hyperlinks between weblogs on US politics, recorded in 2005 by Adamic and Glance [19]. This dataset contains 1,490 weblogs and 19,090 hyperlinks between them. Each blog in the dataset has an attribute describing its political leaning as either liberal or conservative.
**Facebook Friendship Datasets**: The datasets contain the Facebook networks (from a date in Sept. 2005) from these colleges: Caltech, Princeton, Georgetown and UNC Chapel Hill [20]. The links represent the friendship on Facebook. Each user has the following attributes: ID, a student/faculty status flag, gender, major, second major/minor (if applicable), dormitory(house), year and high school.
**DBLP Dataset**: A co-authorship network with 10,000 authors, captured from the DBLP Bibliography data in four research areas: database (DB), data mining (DM), information retrieval (IR) and artificial intelligence (AI). Each author has two attributes: prolific and primary topic. Details of this dataset can be found in [11].
One of the most fundamental characteristics of social networks is homophily [21]. The principle of homophily states that actors in a social network tend to be similar to their connected neighbors, or “friends” (i.e., to share some common attributes). To exhibit this feature, for each attribute $a$ in a dataset (e.g., political view, dormitory, year), we compute the probability that two friends are similar and compare it to the corresponding probability for a random pair of nodes
$$P_{sl} = P(Similar \mid Link) = \frac{|\{(i, j) \in E : a_i = a_j\}|}{|E|}$$

$$P_s = P(Similar) = \frac{|\{(i, j) \in V \times V,\ i \neq j : a_i = a_j\}|}{|V| \cdot (|V| - 1)}$$
Table I shows that, for a given attribute, the similarity between friends is significantly higher than between random pairs. In Political Blogs, 90% of connected blogs are similar, compared to 49% of random pairs. In the Caltech network, similarity in dormitory is significant between friends (42%
Table I: Homophily measurement in experimental datasets
| Graph | #Nodes | #Edges | Attribute | $P_{sl}$ | $P_s$ |
|----------------|---------|----------|-----------|----------|-------|
| Political Blogs| 1,490 | 16,716 | Leaning | 0.90 | 0.49 |
| Caltech | 796 | 16,656 | Dorm | 0.42 | 0.12 |
| Princeton | 6,596 | 293,320 | Year | 0.53 | 0.13 |
| Georgetown | 9,414 | 425,638 | Year | 0.58 | 0.13 |
| UNC | 18,163 | 766,800 | Year | 0.43 | 0.15 |
| DBLP | 10,000 | 28,110 | Topic | 0.35 | 0.01 |
compared to 12%). In the Princeton, Georgetown and UNC graphs, friends are more likely to share the same class year. In DBLP, authors are most likely not connected if they do not share the same primary topic.
The analysis of homophily demonstrates the correlation between structural and attribute information in real social networks. Node attributes could therefore provide valuable information to facilitate community discovery.
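Both probabilities are straightforward to compute; a minimal sketch (ours, not the authors' code) using networkx, assuming every node carries the attribute:

```python
from collections import Counter
import networkx as nx

def homophily(G, attr):
    # P_sl: probability that two linked nodes share the attribute value.
    vals = nx.get_node_attributes(G, attr)
    p_sl = sum(vals[u] == vals[v] for u, v in G.edges()) / G.number_of_edges()
    # P_s: same probability for a random pair of distinct nodes,
    # counted via the multiplicity of each attribute value.
    n = G.number_of_nodes()
    counts = Counter(vals.values())
    p_s = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return p_sl, p_s
```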
B. Evaluation Measures
We extract communities from the above datasets using six different methods:
- Attribute-based clustering: K-means method is used to group nodes based on the similarity in attributes (link information is ignored).
- Random walks: Method proposed by Steinhaeuser et al. [10], based on random walks and hierarchical clustering. The walk length is set to the number of nodes.
- Louvain algorithm on unweighted graph.
- Fast greedy: Method proposed by Clauset et al. [22] based on the greedy optimization of modularity. The graph is weighted by node attribute similarities.
- Our proposed algorithms SAC1 and SAC2.
To evaluate the quality of these methods, we compare the number of communities, the size of communities, the modularity structure, the modularity attribute, and two additional measures: density $D$ and entropy $E$
$$D = \sum_{c=1}^{K} \frac{m_c}{m}$$
where $m_c$ is the number of edges in community $c$, $m$ is the number of edges in $G$, and $K$ is the number of communities. $D$ reflects the proportion of intra-community links over the total number of links; high density denotes a good separation of communities.
$$E = \sum_{c=1}^{K} \frac{n_c}{n} \cdot entropy(c)$$
$$entropy(c) = -\sum_i p_{ic} \log(p_{ic})$$
where $n_c$ is the number of nodes in community $c$, $n$ is the number of nodes in $G$, and $p_{ic}$ is the fraction of nodes in $c$ with attribute value $i$. Communities with low entropy are more homogeneous with respect to the attribute.
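A minimal sketch of both measures (our illustration), assuming a networkx graph `G` and `communities` given as a list of node sets:

```python
import math
from collections import Counter

def density(G, communities):
    # Fraction of edges of G that fall inside some community.
    member = {u: i for i, c in enumerate(communities) for u in c}
    intra = sum(1 for u, v in G.edges() if member[u] == member[v])
    return intra / G.number_of_edges()

def avg_entropy(G, communities, attr):
    # Size-weighted average entropy of the attribute within communities.
    n, total = G.number_of_nodes(), 0.0
    for c in communities:
        counts = Counter(G.nodes[u][attr] for u in c)
        probs = [cnt / len(c) for cnt in counts.values()]
        total += (len(c) / n) * -sum(p * math.log(p) for p in probs)
    return total
```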
C. Comparison of SAC1 and SAC2
Because our approaches use the parameter $\alpha$ as a weighting factor between structural and attribute similarities, we first examine community quality for different values of $\alpha$. Figure 1 plots the modularity structure (Eq. (1)), modularity attribute (Eq. (2)) and composite modularity (Eq. (3)) of SAC1’s communities on four graphs, for $\alpha \in [0, 1]$. The x-axis represents the values of $\alpha$ and the y-axis the modularity values. Modularity structure increases while modularity attribute decreases, since the algorithm favors structural similarities more as $\alpha$ grows. For SAC2 (not shown here), the modularities follow similar patterns.
Figure 1: SAC1 modularity structure, modularity attribute and modularity composite for $\alpha \in [0, 1]$
Table II reports the average entropy and density of SAC1 and SAC2 on the datasets. The average entropy of SAC2 is lower than SAC1’s, whereas the density of SAC1 is higher than SAC2’s: SAC2’s communities are more homogeneous, while SAC1’s communities are more densely connected.
Table II: Average entropy and density of SAC1 and SAC2
| Graph | Entropy (SAC1) | Entropy (SAC2) | Density (SAC1) | Density (SAC2) |
|-----------------|------|------|------|------|
| Political Blogs | 0.06 | 0.10 | 0.91 | 0.90 |
| Caltech | 0.75 | 0.33 | 0.50 | 0.46 |
| Princeton | 1.04 | 0.41 | 0.64 | 0.55 |
| Georgetown | 0.91 | 0.41 | 0.68 | 0.60 |
| UNC | 1.76 | 0.51 | 0.64 | 0.45 |
| DBLP | 3.01 | 1.24 | 0.82 | 0.52 |
D. Comparison against other methods
1) Number of communities and size distribution: We observe that SAC1 and SAC2 produce fewer communities than the other methods. Figure 2 shows the number of communities found by Louvain and SAC1; the x-axis represents the values of $\alpha$, the y-axis the corresponding number of communities, and the rightmost bar gives the number of communities found by Louvain. It is clear that Louvain produces more communities; the result is similar for SAC2 (Table III). However, many of the communities found by Louvain are very small. For instance, in Political Blogs, although 276 communities are found, the two biggest communities already contain 80 percent of the nodes, and the remaining communities have at most 5 nodes. In contrast, our algorithms correctly identified the two communities in this graph, which correspond to the two political views: liberal and conservative. For large networks, Louvain often produces a few mega-sized communities and numerous small ones, whereas our methods achieve a more balanced distribution of community sizes.

Table III: Number of communities in SAC2($\alpha = 0.5$), Louvain and Fast greedy
| Graph | SAC2 | Louvain | Fast greedy |
|-----------|------|---------|-------------|
| Political Blogs | 2 | 277 | 277 |
| Caltech | 7 | 10 | 9 |
| Princeton | 7 | 20 | 24 |
| Georgetown | 9 | 12 | 42 |
| UNC | 7 | 19 | 31 |
| DBLP | 47 | 566 | 864 |
2) Community quality: Tables IV and V compare the clustering entropy and density (with $\alpha = 0.5$) on two datasets. SAC1 and SAC2 produce communities with lower entropy (higher attribute similarity) than Louvain’s and Fast greedy’s. For example, on the Caltech graph, the entropy of SAC1 and SAC2 is 0.75 and 0.33 respectively, while the entropy of Louvain and Fast greedy is 1.65 and 1.71. On the other hand, the density of our methods is a little lower than that of these two methods, but higher than that of attribute-based clustering and random walks. The results are similar on the other datasets.
Table IV: Entropy and Density of Caltech’s communities
| Method | Entropy | Density |
|---------------|---------|---------|
| Attribute-based | 0 | 0.42 |
| Random walks | 0 | 0.35 |
| Louvain | 1.65 | 0.57 |
| Fast greedy | 1.71 | 0.56 |
| SAC1 | 0.75 | 0.50 |
| SAC2 | 0.33 | 0.46 |
Table V: Entropy and Density of Princeton’s communities
| Method | Entropy | Density |
|---------------|---------|---------|
| Attribute-based | 0 | 0.53 |
| Random walks | 0 | 0.47 |
| Louvain | 1.71 | 0.62 |
| Fast greedy | 1.80 | 0.74 |
| SAC1 | 0.84 | 0.62 |
| SAC2 | 0.41 | 0.55 |
VI. DISCUSSIONS
Both of our methods are parameterized by the weighting factor $\alpha$, so a natural question is how to choose its value. Note that the results are quite stable with respect to $\alpha$. With no domain knowledge, it is difficult to determine the value of $\alpha$ a priori; in social networks, however, we expect the links to carry more information than the attribute values. Based on this idea, we propose a strategy to approximate $\alpha$, illustrated below:
**init:**
- $\alpha = 1$
- Set an interval $i$ (e.g., $i = 0.1$ in our experiments)
**repeat**
- Compute the optimized clustering corresponding to $\alpha$
- Let $Q_{Newman}(\alpha)$ and $Q_{Attr}(\alpha)$ be the modularity structure and modularity attribute of the partition
- Let $\alpha' = \alpha - i$
- Let $\Delta = (Q_{Newman}(\alpha') - Q_{Newman}(\alpha)) + (Q_{Attr}(\alpha') - Q_{Attr}(\alpha))$
- $\alpha = \alpha'$
**until** $\Delta \leq 0$
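A sketch of this strategy in Python (ours), assuming a hypothetical routine `cluster(alpha)`, e.g., SAC1, that returns the modularity structure and modularity attribute of the optimized partition for a given $\alpha$:

```python
def choose_alpha(cluster, step=0.1):
    # Decrease alpha from 1 while the combined modularity keeps improving.
    alpha = 1.0
    q_n, q_a = cluster(alpha)
    while alpha - step >= 0:
        q_n2, q_a2 = cluster(alpha - step)
        delta = (q_n2 - q_n) + (q_a2 - q_a)
        if delta <= 0:          # no combined improvement: stop
            break
        alpha, q_n, q_a = alpha - step, q_n2, q_a2
    return alpha
```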
Table VI reports the values of $\alpha$ found for the SAC1 algorithm using this strategy. The communities found are reasonably good in terms of modularity values.
Table VI: Optimum $\alpha$ found for the graphs
| Graph | $\alpha$ | $Q_{Newman}$ | $Q_{Attr}$ |
|-----------|----------|--------------|------------|
| Political Blogs | 0.5 | 0.41 | 0.99 |
| Caltech | 0.6 | 0.31 | 0.99 |
| Princeton | 0.7 | 0.42 | 0.96 |
| Georgetown | 0.5 | 0.43 | 0.98 |
| UNC | 0.6 | 0.33 | 0.91 |
| DBLP | 0.5 | 0.27 | 0.83 |
VII. CONCLUSION AND PERSPECTIVES
In this paper, we studied community detection in attributed graphs. We proposed two methods that couple topological structure with attribute information in the detection process. Experimental results on real social networks demonstrate that our methods achieve flexibility in combining structural and attribute similarities, and hence are able to produce more meaningful communities. As future work, we plan to enhance our methods further, e.g., by reducing the algorithms’ complexity and exploring different similarity functions. We will apply our methods in different scenarios, for example with textual data or missing attribute values. We also aim to understand the roles of link and content information in the formation of online communities, in order to devise adapted discovery strategies and to model the dynamics of these networks.
ACKNOWLEDGMENT
This work was partially supported by the projects ANR Ex DEUSS and DGCIS CEDRES.
REFERENCES
[1] L. Tang and H. Liu, *Community Detection and Mining in Social Media (Synthesis Lectures on Data Mining and Knowledge Discovery)*. Morgan-Claypool, 2010, ch. 3.
[2] S. Fortunato, “Community detection in graphs,” *Physics Reports*, vol. 486, pp. 75–174, 2010.
[3] A. Clauset, M. E. J. Newman, and C. Moore, “Finding community structure in very large networks,” *Physical Review E*, vol. 70, p. 066111, 2004.
[4] K. Wakita and T. Tsurumi, “Finding community structure in mega-scale social networks,” arXiv preprint, 2007.
[5] M. E. J. Newman and M. Girvan, “Finding and evaluating community structure in networks,” *Phys. Rev. E*, vol. 69, p. 026113, 2004.
[6] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, “Fast unfolding of communities in large networks,” *Journal of Statistical Mechanics: Theory and Experiment*, vol. 2008, no. 10, p. P10008 (12pp), 2008.
[7] G. Palla, I. Derenyi, I. Farkas, and T. Vicsek, “Uncovering the overlapping community structure of complex networks in nature and society,” *Nature*, vol. 435, no. 7043, pp. 814–818, Jun. 2005.
[8] Y.-Y. Ahn, J. P. Bagrow, and S. Lehmann, “Link communities reveal multiscale complexity in networks,” *Nature*, vol. 466, no. 7307, pp. 761–764, Jun. 2010.
[9] J. Leskovec, K. J. Lang, and M. W. Mahoney, “Empirical comparison of algorithms for network community detection,” *CoRR*, vol. abs/1004.3539, 2010.
[10] K. Steinhaeuser and N. V. Chawla, “Identifying and evaluating community structure in complex networks,” *Pattern Recognition Letters*, Nov. 2009.
[11] Y. Zhou, H. Cheng, and J. X. Yu, “Graph clustering based on structural/attribute similarities,” *Proc. VLDB Endow.*, vol. 2, pp. 718–729, August 2009.
[12] U. Brandes, D. Delling, M. Gaertler, R. Goerke, M. Hoefer, Z. Nikoloski, and D. Wagner, “Maximizing modularity is hard,” arXiv preprint, 2006.
[13] T. Zhou, L. Lu, and Y.-C. Zhang, “Predicting missing links via local information,” *European Physical Journal B*, vol. 71, no. 4, pp. 623–630, 2009.
[14] M. Connor and P. Kumar, “Fast construction of k-nearest neighbor graphs for point clouds,” *IEEE Transactions on Visualization and Computer Graphics*, vol. 16, no. 4, pp. 599–608, 2009.
[15] M. D. Lieberman, J. Sankaranarayanan, and H. Samet, “A fast similarity join algorithm using graphics processing units,” *2008 IEEE 24th International Conference on Data Engineering*, vol. 25, no. April, pp. 1111–1120, 2008.
[16] J. Chen, H. Fang, and Y. Saad, “Fast approximate knn graph construction for high dimensional data via recursive lanczos bisection,” *Journal of Machine Learning Research*, vol. 10, no. 2009, pp. 1989–2012, 2009.
[17] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu, “An optimal algorithm for approximate nearest neighbor searching in fixed dimensions,” in *ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS*, 1994, pp. 573–582.
[18] W. Dong, C. Moses, and K. Li, “Efficient k-nearest neighbor graph construction for generic similarity measures,” in *Proceedings of the 20th international conference on World wide web*. ser. WWW ’11, 2011.
[19] L. A. Adamic and N. Glance, “The political blogosphere and the 2004 U.S. election: divided they blog,” in *Proceedings of the 3rd International Workshop on Link Discovery*, ser. LinkKDD ’05, 2005.
[20] A. L. Traud, E. D. Kelsic, P. J. Mucha, and M. A. Porter, “Comparing community structure to characteristics in online collegiate social networks,” *SIAM Review*, in press (arXiv:0809.0960), 2010.
[21] P. F. Lazarsfeld and R. K. Merton, “Friendship as a social process: A substantive and methodological analysis,” in *Freedom and Control in Modern Society*. Van Nostrand, 1954.
[22] A. Clauset, M. E. J. Newman, and C. Moore, “Finding community structure in very large networks,” *Physical Review E*, vol. 70, no. 6, pp. 1–6, 2004.
The Evolution of the e-ID card in Belgium: Data Privacy and Multi-Application Usage
Alea Fairchild
Vrije Univ. Brussel (VUB)
Brussels, Belgium
Bruno de Vuyst
Vrije Univ. Brussel (VUB)
Brussels, Belgium
Abstract: Since mandating in 2004 that all Belgian citizens carry electronic identification cards (e-ID), Belgium has been at the forefront of trends in electronic identification. As an e-ID card has become a necessity for service provisioning, the government has also begun distributing e-ID cards to non-Belgians and to children under the age of 12. Until quite recently, the e-ID card held only basic citizenship information. This paper examines the evolution of the e-ID card and discusses the privacy issues of multi-application data on one card, as the recent announcement of data for additional applications reopens the discussion of data linkage and data privacy for a card whose usage is mandatory.
Keywords- e-ID; privacy; transparency; applications.
I. EVOLUTION OF BELGIAN E-ID
As this paper focuses on the evolution of the e-ID in Belgium, we will not go into the concept of citizen acceptance of identity cards (paper or digital), as a number of papers cover this from social and political aspects [1] [2] [3]. Acceptance has never been the issue, unlike in the UK [4], as Belgium has mandated the use of identity cards since the creation of a National Register of natural persons in 1983 [5]. The register issues a unique identifier for each Belgian citizen in the form YYMMDDNNNCC, where YYMMDD is the citizen’s date of birth, NNN is a serial number (even for females and odd for males), and CC is a checksum, so that errors can be detected when the number is processed automatically. The register also keeps track of current and past addresses and keeps a record of all the citizen’s identity-related documents: passport, driving license and other relevant data. Citizens and residents therefore cannot opt out, but must carry the e-ID for identification and for services to be provided [4]. Choice is thus not part of our discussion.
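As an illustration of this number format, the sketch below validates the checksum. This is our illustration based on the commonly documented modulo-97 rule for Belgian national numbers; the rule itself is an assumption, as the paper only states that CC is a checksum:

```python
def valid_national_number(number: str) -> bool:
    """Check YYMMDDNNNCC: CC should equal 97 - (YYMMDDNNN mod 97).
    For births in or after 2000, a leading '2' is (reportedly)
    prepended to the nine-digit body before taking the modulus."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) != 11:
        return False
    body, check = digits[:9], int(digits[9:])
    return check in (97 - int(body) % 97, 97 - int("2" + body) % 97)

def holder_is_female(digits: str) -> bool:
    # NNN is even for females and odd for males (see above).
    return int(digits[6:9]) % 2 == 0
```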
In terms of acceptance, Belgians were already used to showing national ID cards to obtain services, but these paper-based ID cards were used on an event-oriented basis, and the respective events were not tracked longitudinally. For example, a citizen may photocopy his ID card for his bank in order to open a bank account, but that particular event is not recorded digitally in a public data facility where someone could exploit it longitudinally [6].
For reasons of both efficiency and service provisioning, Belgium decided in early 2000 to be an early adopter and to trial a digital version of the paper-based national ID card. In spring 2003, a pilot project across 11 municipalities trialled the e-ID and its implementation. In spring 2004, the Belgian government decided, after approval by its legislative body, to mandate the e-ID for the whole country. A timeline of the e-ID implementation can be seen in Figure 1.
Although at the e-ID launch there was only one application for which the card could be used, ‘Tax-on-Web’ [7], it was envisioned that it would be the basis for future service provisioning at the several layers (federal, regional, local) of Belgian governance. However, this initially worried Belgians because of ‘digital trails’: how the numerous events of usage of their ID card are used, and by whom.
This paper examines the transparency of what data is held on the card, since who has the right to use and view that data has been a central concern in the changeover from paper-based to digital ID cards. We discuss what data is held on the e-ID card and what applications are currently available for the e-ID in Belgium. We end with an explanation of privacy and transparency in Belgian e-ID cards, and of how multi-application usage may figure in the near future of this card.
II. E-ID IDENTITY DATA
The initial paper-based document included the following pieces of information on the citizen: name (family name, up to two given names, and the initial of a third name), address, title, nationality, place and date of birth, gender, and a photo of its holder.
The e-ID card is visually similar to the previous identity card and shows the same information as the paper-based document, except for the address. It carries handwritten signatures of its holder and of the civil servant who issued the card. It also mentions the validity dates of the card (the card is valid for five years), the card number, the national number of its holder, and the place of delivery of the card [8].
All this information is also stored on the chip in a so-called “identity file”. The identity file is around 200 bytes long, and is signed by the National Register (RRN). In addition to the identity file, there is also an address file (about 150 bytes). This address file is kept independently as
the address of its holder may change within the validity period of the card. The RRN signs the address file together with the identity file to guarantee the link between these two files; the corresponding signature is stored as the address file's signature. As a biometric feature, Belgium decided to use a photo (3 KB, JPEG format). This photo is (indirectly) signed by the RRN, as its hash is part of the holder's identity file [8]. An example of a Belgian citizen's card can be seen in Figure 2. Cards for kids and for foreigners look different.
Kids-ID cards contain a unique safety feature for contacting parents in case of emergency. It allows third parties to reach a list of preset phone numbers by way of a unique phone number and the child’s RRN, both visible on the kids-ID card. If a child is injured, the parents can then easily be contacted. The child’s parents determine the preset list of phone numbers via a secure online database [9].
The usage of the e-ID card has seen initial teething pains, with police cars needing to be equipped with readers in their glove compartments so that people stopped for a possible violation can have their e-ID card read. When the card was first issued, citizens had to carry an extra piece of paper with it, as a police officer without a reader could not see the address of the citizen, which is one of the fundamental pieces of information requested from an ID card [6].
III. DATA HANDLING AND E-ID BENEFITS
The basic identity data are now digitally included in a microchip on the identity card, with a reader mechanism that allows a person to identify himself digitally and to place an electronic signature using the card and a password. In this way, storage and usage of citizens’ data becomes a bit more user-centric.
According to the National Register [10], the benefits of an e-ID card are:
- Self-identification on the Internet;
- 24/7 availability of particular documents via the Internet;
- The ability for the card holder to consult the information held about him in the National Register (Rijksregister) of natural persons, and to learn which authorities, institutions and persons have consulted or updated that information during the last six months (except for the municipal and judicial authorities entrusted with the investigation and prosecution of criminal offences);
- A protected electronic connection for online information exchange with the authorities or with private enterprises;
- A protected way of carrying out commercial transactions via the Internet, both as buyer and as seller (online buying and selling);
- Filling in numerous forms via the web: tax declarations, applications for a study grant, or excerpts from the register;
- Gaining entry, through self-identification, to various places: container park, company building, library, sports hall, etc.;
- Signing e-mails or sending registered e-mails.
The development team at FEDICT (the federal ICT office) has developed add-ons for Mozilla and other browsers to enable citizens’ use of the e-ID. The card is already supported on several operating systems, including Linux (openSUSE).
IV. MULTI-APPLICATION E-ID CARDS
Having an application that is mission critical to the citizen is one driver to get users to want to switch from paper to digital form. By promoting government applications such as “tax on web”, registered mail, social security registration of new personnel, online consultation of government data, as well as the distribution to twelve-year olds of a free smart card reader when they get their e-ID card, the home penetration with readers was expected to increase in the short term [9].
These new applications offered through use of the e-ID are expanding, and the government feels that they will create surplus value for the citizen and for the authority concerned. But it is not clear that citizens feel the same way, especially with data linked across different applications on the same card.
In August 2011, it was reported in the press [11] that the amount of personal information stored on the compulsory Belgian ID card is being extended: in the future, personal social security information will also be stored on the e-ID card, and the SIS (social security) card is being discontinued. Within the next year, pharmacies and doctors will start using e-ID cards instead of the SIS card to obtain personal social security information about their patients and customers. The two systems will operate in parallel for a while, and the current SIS card will disappear by the end of 2013. The social security authorities and health insurance bodies are already paving the way for the switch-over [11]. At the time of the initial e-ID launch, although it was technically feasible to integrate the SIS data, at the political level this was considered too great an infringement on personal privacy, and the integration was blocked [9]. The question at this time is what has changed: the willingness of the public, or the need for cost efficiency? As the public cannot opt out, the sensitivity of using the e-ID for multiple data applications needs to be of concern to the government.
V. PRIVACY AND TRANSPARENCY
Unlike the Austrian e-ID, which from the onset has attempted to be privacy-friendly through the use of unlinkability schemes, the Belgian e-ID card has not addressed any aspects of privacy such as unlinkability, or anonymity, as discussed by Pfitzmann et al. [13]. At present no other national e-ID card design scheme in Europe puts emphasis on privacy beyond data protection and retention [4].
Where event-based transaction data is retained in identified form, it can result in a collection of data that reveals a great deal about individuals and their behaviour. Such ‘data trails’ may be used to trace back over a person’s past, or analysed to build an abstract model of the person, a ‘digital persona’. This digital persona may then be used by government agencies as a means of social control, for example.
Since the 1980s, basic mechanisms for privacy-enhancing identity management under control of the user have been proposed [12] [13] [14]. Control by the user requires that he firstly knows about actual and potential processing of his personal data and secondly that he in principle can decide case-by-case on data disclosure to specific parties, possibly in the limits given by law and society. The most effective, yet not always realistic way to protect one’s privacy is data minimisation, i.e., to disclose as little personal data as possible.
From a privacy point of view, the main issue in addition to the data-minimisation principle is the purpose-binding principle – data should only be collected and used for a specific purpose.
In Belgium, the citizen can ask for and use his data, check that they are correct, and see who has used them, since government workers must also use their own e-ID cards to provide a service. FEDICT, the national IT organization, maintains a website that allows citizens to track their national ID number and who has been using it for what purpose. With his e-ID card, the citizen can open the data cabinet in which his data are safely stored with his identification key, verify the data, ask for corrections if necessary, use his data, and see who entered the data cabinet and at what time. This level of transparency of the process, and of privacy authentication, has been important in the enforced uptake.
A data privacy error was made in Belgium by including the structured register number in the certificates stored on the electronic ID card. This is something that must be avoided: the number leaks too much personal information about the citizen, in this case the holder’s age, since the register number encodes the date of birth.
The only biometric included on Belgian e-ID cards is the holder’s photo, which is about three kilobytes in size and not suited for automatic recognition of the cardholder. Correct implementation of biometric features is a very complicated issue, and may be neither realistic nor cost-effective. The Belgian e-ID card costs about €12.50, including the chip, maintenance of the infrastructure, and two certificates per cardholder with a validity of five years [8].
On 17 October 2008, the authorities switched over to producing e-ID cards on the new Belgium Root certificate. As each e-ID card is initialized with a genuine copy of the Belgium Root CA certificate, the e-ID card can be used as a “trusted source”: users can verify the chain of trust within the Belgian PKI system by loading the Belgium Root CA certificate from their smart card. Apart from revoking the use of an e-ID card’s keys when it is stolen, card holders also have the possibility of having the electronic signature capability of an e-ID card revoked, even before using the card [8].
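The chain-of-trust check described here follows the general X.509 pattern; below is a hedged illustration using the Python `cryptography` package (ours, not FEDICT's middleware; RSA with PKCS#1 v1.5 signatures is assumed):

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def issued_by(child: x509.Certificate, issuer: x509.Certificate) -> bool:
    # Verify that `issuer` signed `child` (RSA / PKCS#1 v1.5 assumed).
    try:
        issuer.public_key().verify(
            child.signature,
            child.tbs_certificate_bytes,
            padding.PKCS1v15(),
            child.signature_hash_algorithm,
        )
        return True
    except Exception:
        return False

# Usage sketch: walk the chain read from the card up to the Belgium
# Root CA certificate, which the verifier trusts out of band.
# assert issued_by(citizen_cert, citizen_ca_cert)
# assert issued_by(citizen_ca_cert, belgium_root_cert)
```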
VI. CONCLUSIONS ON THE FUTURE OF MULTI-APPLICATION DATA
The data from the SIS card will add information on the kind of health insurance the citizen holds (and shows that the citizen is insured, which is required in Belgium). Health records are not stored on the e-ID card, but the e-ID card is the link to the Crossroad Banks of Belgium, which are internal governmental information brokers for social security status, business information and car registration.
The question remains how cross linking of data may be used in a manner not fit for purpose, and what kind of legislation or audit trails will be utilized to protect citizen data privacy going forward as the government pushes to add multiple application usage to the e-ID card.
By promoting government applications, registered mail, social security registration of new personnel and online consultation of government data, together with the distribution of a free smart card reader to twelve-year-olds when they get their e-ID card, home penetration of readers is expected to increase in the short term.
However, privacy in a technological sense has not yet been built into the current version of the e-ID card. Belgian reliance on the e-ID as a form of authentication and access means that no one can opt out of the scheme, which makes security, transparency and privacy paramount for longer-term interoperability within the EU. As new applications are added to the card, Belgians may become more wary of what can and cannot be linked together on it.
REFERENCES
[1] Walker, G. de Q. "Information as Power: Constitutional Implications of the Identity Numbering and ID Card Proposal", *CIS Policy Report*, (1986) Vol. 2 No. 1, Centre for Independent Studies, St Leonards, NSW, February.
[2] Clarke, Roger. "Human Identification in Information Systems Management Challenges and Public Policy Issues", *Information Technology & People*, 1994, Vol. 7(4), p. 6-37.
[3] UK Performance and Innovation Unit (PIU), 'Privacy and Data-sharing: The way forward for public services', published by the PIU in April 2002.
[4] Arora, S. National e-ID card schemes: A European overview, Information Security Technical Report (2008) Vol. 13, p. 46-53.
[5] Belgisch Staatsblad, 28 March 2003, Retrieved 3 Dec 2011 at: http://eid.belgium.be/nl/binaries/WT1_tcm147-9980.pdf
[6] Fairchild, A and de Vuyst, B. Privacy, Transparency and Identity: The Implementation of the e-ID card in Belgium, Proceedings of the 9th European Conference on Electronic Government (ECEG 2009), University of Westminster, London, UK, June 29-30.
[7] Tax-on-Web, Federal portal, Retrieved 3 Dec 2011 at: http://www.tax-on-web.be
[8] De Cock, Danny; Wolf, Christopher; and Preneel, Bart (2007) The Belgian Electronic Identity Card (Overview), Working Paper of ESAT/COSIC, KU Leuven.
[9] Marien and Van Audenhove. The Belgian e-ID and its complex path to implementation and innovational change, *Identity in the Information Society*, 2010, Vol. 3(1), p. 27-41.
[10] Rijksregister, Retrieved 26 Nov 2011 from URL: http://www.lbz.rzn.fgov.be/index.php?id=141&L=1
[11] FlandersNews.be. More personal info on electronic ID card soon, (20 August 2011). deredactie.be. Retrieved September 3, 2011, from http://www.deredactie.be/cm/vrtnieuws.english/news/1.1091792
[12] Chaum D. Security without identification: transaction systems to make big brother obsolete. Communications of the ACM, vol.28; Oct. 1985. no. 10, p. 1030–1044.
[13] Pfitzmann B, Waidner M, Pfitzmann A. Secure and anonymous electronic commerce: providing legal certainty in open digital systems without compromising anonymity. IBM research report RZ 3232 (#93278) 05/22/00 computer science/mathematics. Zurich: IBM Research Division.
[14] Leenes R, Schallabock J, Hansen M, editors. PRIME white paper. Retrieved 3 Dec 2011 from: https://www.prime-project.eu/prime_products/whitepaper/. 2007.
---
**Figure 1.** Timeline—Introduction and development of eID in Belgium [9]
**Figure 2.** Belgian eID card’s visual aspects [8]
Analyzing Social Roles using Enriched Social Network on On-Line Sub-Communities
Mathilde Forestier, Julien Velcin, Djamel A. Zighed
Eric Laboratory, University of Lyon
Lyon, France
Abstract—Analyzing social roles inside on-line communities has become a major challenge. The on-line communities formed around exchange platforms (e.g., forums) are an ever-growing source of data for analyzing user behavior. This paper proposes an exploratory analysis of the community of a news website based on its sub-communities. We assume that people who participate in forum debates on news websites focus their participation on one or a very few topics (also called contexts), thereby forming sub-communities. These sub-communities help us to find the contextual celebrities: the pertinent users in the sub-communities. We base our analysis on a dataset composed of 11,443 users writing more than 35,000 posts on 57 different forums grouped in 3 topics, and on social networks enriched with relations extracted from the content of the users’ posts.
Keywords—Social role; Social network; On-line community.
I. INTRODUCTION
During the Roman era, the forum was the public place of the city, i.e., the social, political and economic center. The forum allowed people to communicate, exchange, debate and socialize. Forums still exist nowadays in a different way: thanks to the forums on the Web 2.0, users communicate interactively on a common interest.
People who participate in these forums (also called users) form an on-line community. Schoberth et al. [1] use this term “to describe the communication and social interaction that is seen in the Internet and web-based list servers, bulletin boards, Usenet newsgroups and chats”. We can complete the definition with the one given by Hymes [2] for the speech community, which represents “a group of people who share rules for the conduct and the interpretation of speech, and rules for the interpretation of at least one linguistic variety”. So, people who participate in forums form an on-line speech community. People in this on-line speech community, as in real life [3], play a social role, as defined in [4]: “beside having personalities, by being part of the social group, people occupy positions in the social structures of groups that allow them to do and say certain things, as well as constrain them from saying or doing other thing”. Golder and Donath follow Goffman’s theory [3], in which a role represents the “rights and duties attached to a given status”. Also in a Goffmanian position, Gleave et al. [5] specify that a social role can only be apprehended in interaction, i.e., people play a role depending on others.
In this paper, we focus on understanding sub-communities (within a whole community) in order to find good clues for comprehending the contextual celebrity social role. We define a sub-community as a sub-part of a whole community associated with a topic (also called context). In other words, a sub-community consists of all the users who participate in a specific topic on a news website (e.g., politic, media, etc.). We assume that users participate in one or a very few topics of interest to them. The contextual celebrity is then a user particularly invested in a specific kind of topic relative to the whole on-line speech community, and recognized as a pertinent contributor by the other members of his sub-community.
The contribution of this paper is thus to explore an on-line community through the analysis of the sub-communities that belong to it. The general idea is to confirm that users participate depending on a context, and to find clues for detecting the contextual celebrity in each kind of sub-community. Note that, in this paper, we use the terms topic and context interchangeably.
This paper is organized as follows: first, we discuss related work and position our work with respect to it. Then, we describe the dataset we use to analyze social roles inside sub-communities. We continue by briefly presenting the construction of our enriched social network, which uses both the structure and the content of the data. Finally, we explore the on-line speech community, its sub-communities, and the concept of the contextual celebrity social role.
A. Related Work
The analysis of social roles was highlighted by Goffman in [3]. According to his theory, a human being adopts a “pre-established pattern of action, which is unfolded during a performance and which may be presented or played through on other occasions”: individuals play a role during interaction. This notion gained new relevance with the appearance of Web 2.0 and the emergence of new media of exchange. Some researchers used databases of email exchanges and probabilistic models such as blockmodels to identify social roles in firms [6][7][8]. Other researchers looked at predefined roles such as the expert [9] (who is the most expert?) or the influencers in a social network [10][11] (who has the power to convince people in the social network?). From another perspective, computer scientists and sociologists have taken a great interest in analyzing social roles in forum debates. Their work aims to extract social roles, seen as predefined behaviors in the on-line speech community, using social network analysis and the user’s participation behavior. This double analysis makes it possible to capture the place of the user inside the community based on his involvement and his reputation. Golder and Donath made an ethnological study and identified six kinds of social roles: the celebrity, the newbie, the lurker, the flamer, the troll and the ranter (see [4] for the definitions). These social roles can be positive (e.g., the celebrity) or negative (e.g., the flamer or the troll). This ethnological approach considers that a content analysis of the posts brings a lot of information. In our work, we use a content approach to extract our social network with the aim of defining social roles. We will see in Section I-C how we enrich our social network with new relations extracted from the content of the discussion.
Other social roles have been discovered in this on-line speech community, such as the answer people and the discussion people [12]. In a political discussion context, Himelboim et al. [13] looked for the discussion catalyst: this kind of user influences the information that enters a newsgroup and affects the evolution of the discussion within it. Kelly et al. [14] found three social roles in this kind of discussion: the friends, the foes and the fringes. The authors highlighted that people prefer to speak to users of a political affiliation other than their own; the great majority of users in political discussions look for virulent debate on society and ways of life. Furthermore, the authors found the fringe social role, which refers to a marginal group of people who raise interesting questions for qualitative study. Fisher et al. [15] took the context of participation into account more broadly to analyze social roles. According to them, a user’s participation differs depending on whether he participates in a help forum or a flame forum. Their idea suggests that Usenet contains forums specialized for flames, for help, etc. In a news website, however, the configuration of participation is quite different: there are no specialized forums as in Usenet, but there are topics in which users are more interested to debate. Very close to our work, Angeletou et al. [16] and Chan et al. [17] explain some online sub-communities by their composition of user roles, but each sub-community there represents one community: there is no overlapping, no confrontation between the sub-communities. In this paper, the context of participation is represented by the topic to which a forum belongs, e.g., politic, media, living, etc. We thus propose a new way to understand social roles depending on the context in on-line sub-communities. Finally, we refer the reader to Gleave et al. [5] and Forestier et al. [18] for a broader state of the art and analysis of social roles.
B. Dataset introspection
In this section, we present the data used to analyze the sub-communities and the contextual celebrity. We base our analysis on the forums of the HuffingtonPost (www.huffingtonpost.com) news website. We extracted 57 forums dealing with three topics, i.e., contexts: politic, living and media, with 19 forums for each topic. The whole dataset contains 11,443 users and 35,175 posts. Table I presents basic statistics on each topic. Because the sub-communities overlap, the sum of the users over the three sub-communities is greater than 11,443. Note that the on-line speech community comprises all the users, and we are looking for the contextual celebrity in the sub-communities (communities depending on a context).
| | Politic | Living | Media |
|----------------|---------|--------|-------|
| # of users | 4547 | 3667 | 5973 |
| # of posts | 12725 | 8274 | 14176 |
| Average number of posts per user | 2.8 | 2.3 | 2.4 |
| % of users who exclusively participate in this kind of topic | 58% | 68% | 65.5% |
| % of users having one post on all users who participate in this topic | 50% | 58% | 54% |
| % of users having between [1,5] posts | 39% | 34% | 38% |
| % of users having between [6,11] posts | 7% | 5.7% | 5.5% |
| % of users having between [12,16] posts | 2% | <2% | <2% |
| % of users having between [17,∞] posts | <2% | <1% | 1% |
| Total | 100% | 100% | 100% |
Table I shows that in all sub-communities there exists a hard core of specific users, i.e., users who participate in only one topic. Furthermore, the ratio between the number of posts and the number of users is roughly the same in the three sub-communities: users in the living topic (respectively the media topic) post an average of 2.3 messages (respectively 2.4), and in the politic forums the ratio is a little higher, i.e., 2.8 posts per user. Most users concentrate their participation on one topic, and in each sub-community at least half of the users write just one post in one topic. This behavior seems really interesting, and even though it is not the object of this paper, studying the behavior of these users across the sub-communities and over time would be an interesting perspective.
The three sub-communities follow the same rule of participation: most users post fewer than six messages, and there is a real gap between people who write fewer than six messages and those who write more. In each sub-community, an average of 6% of the users post between five and ten messages. Finally, very few users post more than ten messages in a topic; the *contextual celebrity* is most likely to be found among them.
Figure 1 shows the overlapping of the sub-communities. The bold numbers are the attributes of the politic sub-community, the underlined numbers those of the living sub-community, and the numbers in italic those of the media sub-community; numbers that are underlined, bold and italic at once are percentages for the whole community. The living sub-community (underlined in Figure 1) represents 30% of the community. Inside the living sub-community, 68% of the users participate only in this topic (corresponding to 22% of the whole community); the statistics are much the same in the two other sub-communities. Very few users participate in the living topic and another topic: 9% of the users in the living sub-community also participate in the politic topic (in bold), and 13% in both the living and media topics (in italic); this minority represents only 3% and 4% of the whole community. Note that less than 30% of the media sub-community wrote at least one post in politic; these two sub-communities seem closer to each other than the politic and living sub-communities. Finally, only 3% of the population participates in all three topics (representing 9% of the users of the living, 8% of the politic and 6% of the media sub-community). In conclusion, only 21% of the users participate in more than one topic.
C. Enriched social network
Web forums have the particularity of structuring the debate: using this structure, users can reply to the specific post they want to reply to. This structure is used to extract social networks in existing work on social roles [9][12][13][15]. However, reading the forums shows that people interact not only through the structure (reply-to) but also through quotations. We find two kinds of quotations: the text quotation and the name quotation [19]. These quotations allow a user to reply to several other users within one post, and people who read the forum automatically understand when an author is quoted (by name, or by the quotation of a previous post). The idea is that when a person quotes the name of another, he adopts certain community codes and considers himself entitled to refer to the person by his pseudonym; a newbie (i.e., a new user), for example, never feels the right to call other users by their pseudonyms. Quoting the name thus implies the user’s integration in the on-line speech community. The text quotation relation brings important information to the analysis: the more a user quotes another, the more the two users are linked. Furthermore, the text quotation frequently implies a precise conversation between the two users, i.e., if I quote a part of your post, I really reply to you, and in most cases I respond to your discourse with an opinion. To make a finer interaction analysis, we wanted the analysis to take these quotations into account automatically. We therefore created an enriched social network in which users can be linked by three relations:
- The structural relation: a user replies to another one using the structure of the forum;
- The name quotation: a user quotes the name of another user in his post;
- The text quotation: a user quotes a part of a previous post in his post.
Finally, we have three separate but complementary social networks, i.e., one social network for each kind of relation. Each of these social networks gives some clues for understanding user behavior. The social network constructed with the name quotation relation gives information about the user’s reputation: is he known by his sub-community? Is he often quoted? Does he often quote others? The social network constructed with the text quotation relation gives other clues: does the user like to debate? Does he bring interesting information to the debate?
Our model reaches quite a good score in terms of precision (the ratio between the number of quotations found by both the evaluators and the system, and the number of quotations found by the system). We refer the reader to [19] for more information about the social network extraction.
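A minimal sketch of how the three relations could be extracted from parsed posts (our illustration; the authors' actual extraction is described in [19]). It assumes each post carries its author, its raw text, and the id of the post it structurally replies to:

```python
import re
from collections import namedtuple

Post = namedtuple("Post", "pid author text reply_to")

def extract_relations(posts):
    # Return three edge lists: structural, name-quotation, text-quotation.
    by_id = {p.pid: p for p in posts}
    structural, name_q, text_q = [], [], []
    for p in posts:
        if p.reply_to in by_id:                  # reply-to structure
            structural.append((p.author, by_id[p.reply_to].author))
        for other in posts:
            if other.pid == p.pid or other.author == p.author:
                continue
            # Name quotation: the other user's pseudonym appears in the post.
            if re.search(r"\b%s\b" % re.escape(other.author), p.text):
                name_q.append((p.author, other.author))
            # Text quotation: a long-enough verbatim snippet of another post.
            snippet = other.text[:40]
            if len(snippet) > 20 and snippet in p.text:
                text_q.append((p.author, other.author))
    return structural, name_q, text_q
```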
Figure 2 shows the three separate social networks. We used the JUNG Java toolkit for SNA to build the graphs. The social networks in Figure 2 are built only with users having written more than 15 posts in the whole dataset. The gray scale of the nodes represents the kind of forum a user participates in; black nodes make the connection between subgraphs of gray users (users who participate in only one kind of forum). We can also see that the name quotation (b) is used more than the text quotation relation (c) by users who participate across the whole website. Referring to Table II, the name quotations are double the text quotations (994 name quotations against 476 text quotations). The media sub-community uses name quotations more than the two other sub-communities; we can interpret this as a closer sub-community, i.e., the users of the media sub-community know better the others who participate in it. Surprisingly, it is not the users who post the most in a topic who quote or are quoted the most. This result matters for the detection of the *contextual celebrity*: users who are quoted by name and by text and who post many messages have a better chance of being recognized by the others and of having a good reputation [20] in their sub-communities.
II. A NEW SYSTEM TO ANALYZE SUB-COMMUNITIES AND SOCIAL ROLES
As we saw in Section I-A, social role analysis has become an important research topic: nowadays it seems really important to understand who is who in on-line speech communities. But, as we saw, most researchers use Usenet to extract social roles, whereas forums on news websites are becoming increasingly generic while Usenet is quite specific. Furthermore, news websites let users address several kinds of forums, e.g., politic, societal, etc., and the social role depends on the context [3][5]. The goal here is to retrieve social roles depending on the context, i.e., the kind of forum treated. Finally, the three relations between users (see Section I-C) allow a finer perception of the interaction; these relations will help us extract the social roles better.
Figure 3 shows the process of our system from the website to the analysis. First of all, we collect the forums from the website using a parser (the parser is specific to each website). The parser retrieves the forum topic, the users’ pseudonyms, the posts and the structural relation, i.e., which post replies to which one, and who replies to whom. Then, the system analyzes the content of the posts in order to extract the name and text quotation relations. All the data is stored in a database. Finally, using the enriched social network (with the relations extracted from the content of posts) and the user’s participation behavior, we analyze the social roles based on the context, i.e., the topic of the forum.
We will present in the next section the way we choose to analyze the social roles taking into account the context and
Table II: Text quotations (TQ) and name quotations (NQ) per sub-community

| | Politic | Living | Media |
|----------------------|---------|--------|-------|
| # of TQ | 177 | 146 | 153 |
| # of users having used TQ | 151 | 119 | 118 |
| # of users having more than 15 posts and are quoted by text | 17 | 15 | 13 |
| # of users having more than 15 posts and quoting by text | 19 | 0 | 19 |
| # of NQ | 350 | 183 | 461 |
| # of users having used NQ | 256 | 128 | 375 |
| # of users having more than 15 posts and are quoted by their name | 33 | 15 | 23 |
| # of users having more than 15 posts and quoting by the name | 28 | 7 | 28 |
the kind of relations between users.
III. Sub-communities and Social Role Analysis
To analyze the sub-communities and to find the *contextual celebrity(ies)*, we perform a principal component analysis (PCA) [21]. This unsupervised method creates a new description space for the data. We use Tanagra [22] to compute the PCA.
A. Criteria of analysis
We created several criteria to analyze the on-line community from a contextual perspective. These criteria are based on the individual’s behavior and on the analysis of the social network. We calculate, for each individual who participates in the forums:
- Number of politic / living / media forums the user participates in;
- Number of posts in politic / living / media forums;
- In-degree under the structural relation, per topic;
- Out-degree under the structural relation, per topic.
Each user is thus described by 12 criteria measuring the user's interest in the topics and his place inside the sub-communities. Participation is captured by the user's participation-behavior metrics, and his place inside the sub-community by his position in the social network, via the in-degree and out-degree under the structural relation. These criteria allow us to build an unsupervised method to explore the on-line speech community.
B. Principal Component Analysis
Principal Component Analysis (PCA) transforms the criteria of analysis (see Section III-A) into new variables that are mutually independent. The aim of this method is to create a new space whose dimensions are not correlated with one another. It also reduces the description of the information to a limited number of components, fewer than the initial number of criteria of analysis. PCA is interesting here for several reasons. First, we want to explore the sub-communities in an unsupervised way: the social role of a user depends on the user's interest in one topic and his place inside the sub-community, and we expect PCA to find three groups (one per topic) that are uncorrelated with one another. An unsupervised method is required because our dataset has no labels to learn from; we have to discover and interpret the knowledge in the data. Finally, this method, more than a century old, has proven its performance and is still used today.
The first three axes found by the PCA capture 75% of the information contained in the data. The fourth axis adds only 5% of supplementary information, so we keep the first three axes. Note that capturing 75% of the information is quite a good score for real data.
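For concreteness, a minimal sketch of this dimensionality-reduction step is shown below. The paper computes the PCA with Tanagra; scikit-learn is used here as a stand-in, and the data matrix is a random placeholder for the real table of 12 criteria per user.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder data: one row per user, 12 behavioural criteria
# (participation counts and in-/out-degrees per topic).
rng = np.random.default_rng(0)
X = rng.random((1000, 12))

pca = PCA(n_components=4)
scores = pca.fit_transform(X)                  # user coordinates on the new axes
print(pca.explained_variance_ratio_.cumsum())  # on the real data, the first
                                               # three axes reach about 0.75
```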
The first axis is described on its positive side by the politic topic (number of posts, in- and out-degree under the structural relation) and on its negative side by the living topic. The second axis is constructed on its positive side with the politic forum. The third axis is constructed with all the criteria concerning the media topic. This construction shows that the on-line speech community is divided into sub-communities according to the topic. Nevertheless, the sub-communities are not completely separate (the PCA still shows some residual correlation between them), and some users belong to several sub-communities.
Figure 4 represents the correlation scatter plot built with the first two axes of the PCA. The three forum topics are visibly separated: the top left of the graphic holds the forums about living, the bottom right the forums dealing with media, and the top right the forums dealing with politics. This graphic shows that individuals behave in certain ways depending on the kind of forum they participate in. Figure 4 also shows that the angle between the living topic and the media topic is about $180^\circ$ with respect to the center of gravity (see the straight line in Figure 4 between the two groups). This means there is a negative correlation between the two groups: the more users participate in forums dealing with the media topic, the less they participate in the living topic, and vice versa. The politic topic, in contrast, lies almost at a right angle to both the media and living forums, indicating statistical independence: the PCA finds no correlation between them, so participation in the politic topic does not seem to influence participation in the media and/or living topics.
Finally, the PCA confirms that users mostly participate in one kind of topic (i.e., in one context). To find the *contextual celebrities*, we propose to look for users who maximize all the criteria on one topic and who participate little or not at all in the others. As future work, we plan to use multicriteria aggregation so as to find not just one *contextual celebrity* per topic but a list of contextual celebrities for each sub-community.

Figure 4. Correlation scatter plot
of contextual celebrities for each sub-community.
IV. CONCLUSION AND FUTURE WORK
This paper presents a new exploratory approach to understanding on-line communities through their sub-communities, and gives good clues for comprehending the contextual celebrity within them. Many people interact on news websites; this medium has become increasingly widespread, and the dimensionality of the data makes it difficult to comprehend. We use Principal Component Analysis (PCA) to understand how people interact, starting from the hypothesis that people focus their participation on one or very few topics (i.e., contexts) rather than on the website as a whole. The PCA confirmed this hypothesis: the exploratory method finds three groups defined by the kind of context the users participate in.
The contextual celebrity, i.e., a user who participates in one kind of topic and is recognized by his sub-community as a pertinent user, has to maximize the criteria on one topic. Furthermore, using an enriched social network allows a finer perception of the real interactions between users and brings interesting information for characterizing the community, the sub-communities and the contextual celebrity himself.
As future work, we want to extract the contextual celebrity and evaluate the model using a temporal evaluation. We are also interested in analyzing the users who participate very little (a single post) in one topic. Who are these people? Why do they participate so little?
REFERENCES
[1] T. Schoberth, J. Preece, and A. Heinzl, “Online communities: a longitudinal analysis of communication activities,” in Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 2003. IEEE, pp. 1–10.
[2] D. Hymes, Foundations in sociolinguistics: An ethnographic approach. Psychology Press, 2003.
[3] E. Goffman, The presentation of self in everyday life. Harmondsworth, 1978.
[4] S. Golder and J. Donath, “Social roles in electronic communities,” Internet Research, vol. 5, pp. 19–22, 2004.
[5] E. Gleave, H. Welser, T. Lento, and M. Smith, “A conceptual and operational definition of ‘social role’ in online community,” in System Sciences, 2009. HICSS’09. 42nd Hawaii International Conference on. IEEE, 2009, pp. 1–11.
[6] F. Lorrain and H. White, “Structural equivalence of individuals in social networks,” Social networks: a developing paradigm, vol. 1, p. 67, 1977.
[7] A. McCallum, X. Wang, and A. Corrada-Emmanuel, “Topic and role discovery in social networks with experiments on enron and academic email,” Journal of Artificial Intelligence Research, vol. 30, no. 1, pp. 249–272, 2007.
[8] A. Wolfe and D. Jensen, “Playing multiple roles: Discovering overlapping roles in social networks,” in Proceedings of the 21st International Conference on Machine Learning, Workshop on Statistical Relational Learning and its Connections to Other Fields., 2004.
[9] J. Zhang, M. Ackerman, and L. Adamic, “Expertise networks in online communities: Structure and algorithms,” in Proceedings of the 16th International conference on World Wide Web, 2007, pp. 221–230.
[10] N. Agarwal, H. Liu, L. Tang, and P. S. Yu, “Identifying the influential bloggers in a community,” in WSDM ’08: Proceedings of the international conference on Web search and web data mining. New York, NY, USA: ACM, 2008, pp. 207–218.
[11] P. Domingos, “Mining social networks for viral marketing,” IEEE Intelligent Systems, vol. 20, no. 1, pp. 80–82, 2005.
[12] H. Welser, E. Gleave, D. Fisher, and M. Smith, “Visualizing the signatures of social roles in online discussion groups,” Journal of Social Structure, vol. 8, no. 2, 2007.
[13] I. Himelboim, E. Gleave, and M. Smith, “Discussion catalysts in online political discussions: Content importers and conversation starters,” Journal of Computer-Mediated Communication, vol. 14, no. 4, pp. 771–789, 2009.
[14] J. Kelly, D. Fisher, and M. Smith, “Friends, foes, and fringe: norms and structure in political discussion networks,” in Proceedings of the 2006 international conference on Digital government research, May, 2006, pp. 21–24.
[15] D. Fisher, M. Smith, and H. Welser, “You are who you talk to: Detecting roles in usenet newsgroups,” in Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006, pp. 59b–59b.
[16] S. Angeletou, M. Rowe, and H. Alani, “Modelling and analysis of user behaviour in online communities,” The Semantic Web–ISWC 2011, pp. 35–50, 2011.
[17] J. Chan, E. Daly, and C. Hayes, “Decomposing discussion forums and boards using user roles,” in AAAI Conference on Weblogs and Social Media, 2010, pp. 215–218.
[18] M. Forestier, A. Stavrianou, J. Velcin, and D. A. Zighed, “Roles in social networks: Methodologies and research issues,” Journal of Web Intelligence and Agent Systems, to appear, 2011.
[19] M. Forestier, J. Velcin, and D. Zighed, “Extracting social networks to understand interaction,” Proceedings of the International Conference on Advances in Social Network Analysis and Mining (ASONAM 2011), pp. 213–219, 2011.
[20] J. Donath, “Identity and deception in the virtual community,” Communities in cyberspace, pp. 29–59, 1999.
[21] K. Pearson, “On lines and planes of closest fit to systems of points in space,” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. 2, no. 11, pp. 559–572, 1901.
[22] R. Rakotomalala, “Tanagra : un logiciel gratuit pour l’enseignement et la recherche,” Actes de l’ÉGC, vol. 2, no. 3, pp. 697–702, 2005.
Unique Domain-specific Citizen Identification for E-Government Applications
Peter Schartner
Institute of Applied Informatics
System Security Group
Klagenfurt University
9020 Klagenfurt, Austria
Email: firstname.lastname@example.org
Abstract—When discussing the security of e-government applications, one of the most crucial aspects is the identification of the users (aka citizens). On the one hand, the authorities and the users want to be sure beyond doubt that a certain action or record is related to the correct individual. On the other hand, users do not want their actions or data in different domains (like health care, taxes, the register of residents, legal authorities) to be linked to each other by the authorities. In this paper, we propose an efficient mechanism which guarantees both unique identification and inter-domain privacy protection. The proposed scheme is first of all a replacement for the domain-specific citizen identifier defined by the Austrian authorities, but it may be used as well in other scenarios that depend on unlinkable and unique identifiers.
Index Terms—e-government; system-wide unique identifier; domain-specific identifier; pseudonyms; anonymity; UUIDs; GUIDs.
I. INTRODUCTION
Concerning e-government, accountability of actions or records is one of the most important requirements. On the one hand, authorities would like to know which user (citizen) has taken a certain action or which user is the owner of a certain record. On the other hand, users do not want their actions or records to be mixed up with those of other users. So both groups need and want accountability, which strongly depends on unique identification of the related instances.
Despite the need for unique identification of citizens, data protection acts (or similar legal requirements) commonly prohibit the direct use of unique identifiers (like passport serial numbers or social insurance numbers) outside the scope of these identifiers. Additionally, users demand privacy protection, i.e., they do not want their actions (or records) to be linked across different domains. For example, data related to health care should not be linkable to data of the social insurance and vice versa. So for both reasons, legal regulations and privacy protection, we need some sort of digital pseudonym which uniquely identifies a citizen but hampers linking across domains.
The remainder of this paper is structured as follows. First we briefly discuss related work concerning the generation of unlinkable (and unique) identifiers. After analyzing the drawbacks of the different schemes, we introduce the concept of so-called collision-free numbers, which are used to generate system-wide unique domain-specific citizen identifiers. The paper closes with some modifications of the proposed scheme and open problems, which are the scope of future research.
II. RELATED WORK
In this section, we briefly discuss internet/industrial standards and some straightforward techniques for the generation of unlinkable unique identifiers. Besides these, we discuss the approach of the Austrian authorities in more detail, as flaws in this approach brought up the idea of designing a replacement. Basically, all generation processes described “try” to provide two properties for the identifiers at the same time:
- **Uniqueness**: No two (or more) citizens should be assigned the same identifier. If this happens, this could result in records or actions of different persons becoming inseparably mixed up.
- **Privacy**: Identifiers used in different domains should not be linkable to each other. In some scenarios even the linking between the person and its identifier should be impossible, which results in complete anonymity. In principle this results in the requirement that identifiers “should look” random.
A. UUIDs and GUIDs
A widely adopted approach for system-wide unique system parameters are universally unique identifiers (UUIDs, see [1]) and globally unique identifiers (GUIDs, see [2]; Microsoft’s implementation of UUIDs). There exist several variants of UUIDs/GUIDs, but these variants either use the MAC address to guarantee uniqueness or they employ hash-functions or purely pseudo-random values. Except for the first one, which violates the privacy requirement (the MAC address may be linkable to the user), none of them can guarantee uniqueness (since cryptographic hash-functions always come with the risk of duplicates).
B. National Citizen Identifier
In Austria, each individual is assigned a unique so called base number ($B$ – Basiszahl), which is either the individual’s
number in the central register of residents, or $B$ is the number in the so called supplementary register, if the person is not subject to registration. Since the Austrian data protection act prohibits the direct use of the base number $B$, the derivation scheme for unique unlinkable domain-specific identifiers consists of two major phases (see Figure 1):
1) Disguising the base number $B$ by use of an injective transformation, which results in the so called base identifier ($bID$).
2) Deriving the domain-specific citizen identifier ($dcID$) by use of the base identifier ($bID$) and the domain identifier ($dID$).
**Phase 1: Disguising the base number** consists of the following steps:
1) Input: base number $B$ (12 decimal digits)
2) Binary encoding of $B$ (5 bytes)
3) Extension of $B$ to fill two 3DES blocks (16 bytes = 128 bits) by use of the following format:
$$b = B || seed || B || B,$$
where $||$ denotes the concatenation of bit strings and $seed$ is a secret constant (8 bits), known only to the authority that holds the register of residents.
4) Encryption of the binary representation of $b$ by use of 3DES [3] in CBC mode [4], [5] (no padding needed since the input is a multiple of the block size):
$$c = 3DES_k(b),$$
where the secret key $k$ is known only to the authority that holds the register of residents.
5) For the ease of further usage, the result is Base64-encoded [6] to form the base identifier:
$$bID = \text{Base64}(c).$$
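A minimal sketch of Phase 1 follows, using the pycryptodome package. The real $seed$ and key $k$ are secret, so any values used here are placeholders, and since the steps above do not state an initialization vector for CBC mode, a zero IV is assumed.

```python
import base64
from Crypto.Cipher import DES3

def derive_bID(B: int, seed: bytes, key: bytes) -> str:
    """Phase 1: disguise the base number B into the base identifier bID."""
    assert 0 <= B < 10**12 and len(seed) == 1       # 12 decimal digits, 8-bit seed
    B_bin = B.to_bytes(5, "big")                    # step 2: 5-byte binary encoding
    b = B_bin + seed + B_bin + B_bin                # step 3: 16 bytes = two blocks
    cipher = DES3.new(key, DES3.MODE_CBC, iv=bytes(8))  # assumed zero IV
    c = cipher.encrypt(b)                           # step 4: 3DES-CBC, no padding
    return base64.b64encode(c).decode()             # step 5: Base64 -> bID

# Example with placeholder secrets (key must be a valid 16- or 24-byte 3DES key):
# key = DES3.adjust_key_parity(os.urandom(16))
# print(derive_bID(123456789012, b"\x42", key))
```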

*Fig. 1. Original derivation of the domain specific citizen identifier dcID*
**Analysis:** The system-wide unique base number $B$ is encrypted by 3DES (a block cipher) using a fixed key and seed. Hence this is an injective function and the output, the base identifier $bID$, is system-wide unique as well. From the security point of view it has to be mentioned that, should the secret key $k$ become publicly known, all base identifiers can be decrypted, and actions identified by use of the base identifier can be linked to persons via the base number $B$. Additionally, each individual is assigned exactly one base identifier. Hence, actions or records identified by use of the base identifier may not be directly linkable to persons, but they are at least linkable to each other. If one of the linked actions or records provides information about its initiator or holder, all other linked actions or data sets can be linked to this specific person.
To overcome the problem of inter-domain linking discussed above, the Austrian authorities proposed to use a derivation scheme, which generates a so called domain-specific citizen identifier ($dcID$) based on the individual’s base identifier ($bID$) and a domain identifier ($dID$). In order to avoid duplicates, the domain-specific citizen identifiers should be unique with high probability.
**Phase 2: The derivation of the domain-specific citizen identifier dcID** from the base identifier $bID$ and the domain identifier $dID$ consists of the following steps:
1) Input: Base identifier $bID$ (Base64-encoded) and domain identifier $dID$ (according to the corresponding regulation [7] two to five ISO/IEC 8859-1 [8] upper case characters)
2) Concatenation ($||$) of base identifier $bID$, a fixed prefix and the domain identifier $dID$ to form the string $s$:
$$s = (bID \,||\, \text{“+”} \,||\, \textit{URN-prefix} \,||\, dID),$$
where $URN-prefix$ is the ISO/IEC 8859-1 string “urn:publicid:gv.at:cid+”.
3) Calculation of the SHA-1 hash [9] of $s$, which results in a 160 bit value $h$:
$$h = \text{SHA-1}(s)$$
4) Finally, $h$ (as a binary string) may be directly used as domain-specific citizen identifier $dcID$ or may be Base64-encoded before transmission or printout.
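Phase 2 can be reproduced with standard tooling; a minimal sketch using Python's hashlib follows (the Base64 step is the optional encoding from step 4).

```python
import base64
import hashlib

URN_PREFIX = "urn:publicid:gv.at:cid+"

def derive_dcID(bID: str, dID: str) -> str:
    """Phase 2: derive the domain-specific citizen identifier dcID."""
    s = bID + "+" + URN_PREFIX + dID                    # step 2: concatenation
    h = hashlib.sha1(s.encode("iso-8859-1")).digest()   # step 3: 160-bit SHA-1
    return base64.b64encode(h).decode()                 # step 4: optional Base64

# dID is a two- to five-letter upper-case domain code, e.g. derive_dcID(bID, "XX")
```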
**Analysis:** Since the domain-specific citizen identifiers are derived by use of a hash-function, there is a risk of duplicates regardless of the fact that the inputs to the hash-function are system-wide unique. Hence there is the risk of inseparably mixed records of different individuals, e.g., in e-government databases.
C. Other Approaches
There exist at least three straightforward approaches for generating random *and* system-wide unique parameters:
- **Centralized generation and check** obviously avoids duplicates but is quite inefficient concerning storage (all previously generated parameters have to be stored for later comparison) and communication (each instance, which needs a parameter has to wait for the centralized generator to send it). Additionally, the centralized generator has full control over the generating process and knows all parameters.
- With **Local generation and (centralized) check**, only the generation itself is done locally, but the comparison against all previously generated parameters has to involve all other generators or a centralized service. Again, efficiency and security are quite questionable.
- **Local generation based on pseudo-random number generators** (PRNGs, see [10] for details) can avoid centralized storage and comparison and is efficient in terms of memory and communications. But in order to avoid duplicates, all PRNGs have to use a common key or common secret parameters; so if one of them is compromised, all of them become insecure. Additionally, the generated parameters are no longer random but pseudorandom, and this approach is not suitable for software implementation, because in software the system-wide key (or secret parameter) cannot be protected sufficiently.
A more sophisticated approach is so-called **location- and time-based generation**, which simply uses the location and time provided by a GPS receiver to derive a unique seed for the generation process. The idea behind this concept is that two generation processes cannot take place at the same place *and* at the same time. Besides the fact that the GPS signal will not be available at all locations, the corresponding paper does not specify how (pseudo-)randomness and uniqueness are maintained (see [11] for details).
D. Summary of Related Work
Summarizing the related work, we see that none of the schemes fulfills both requirements at the same time: system-wide uniqueness and privacy protection (full or inter-domain unlinkability).
III. PRELIMINARIES
After briefly revisiting the basic cryptographic algorithms used in this paper, we present the core building block of unique domain-specific identifiers: so-called collision-free number generators (CFNGs, introduced in [12], [13]).
A. Cryptography
We assume that the reader is familiar with **Symmetric Encryption** (like DES [14], 3DES [3], or AES [15]) and **Hash-functions** (SHA-1 [9] or RIPEMD160 [16]), and refer to [10] for further details.
In order to keep the output of symmetric encryption as short as possible, we will employ **Ciphertext Stealing**. Let $l_B$ be the block-length of a symmetric encryption function $E$. Let $u$ be a plaintext, where $l_B < l_u \leq 2l_B$. If $u$ is encrypted straightforwardly by padding $u$ up to $2l_B$ bits and then encrypting two blocks, the length of the corresponding ciphertext $c$ is $l_c = 2l_B$. Using the CBC mode [4], [5] with ciphertext stealing [17], $c$ can be generated such that $l_c = l_u$. This works as follows: First $u$ is cut into the blocks $u_1$ and $u_2$, where $l_{u_1} = l_B$ and $l_{u_2} = l_u - l_B$. Then $u_1$ is encrypted by use of $E$ and a properly chosen key $k$, resulting in a block $c_1 || c_2$, where $l_{c_1} = l_u - l_B$ and $l_{c_2} = l_B - l_{c_1}$. Then the block $c_2 || u_2$ is encrypted by use of $E$ and the same key $k$, resulting in the block $c_3$. This works, since $l_{c_2} + l_{u_2} = l_B$. The ciphertext of $u$ is then $c_1 || c_3$ and contains sufficient information to compute $u$, if $k$ is available. The length of $c$ is $l_c = l_{c_1} + l_{c_3} = l_u$. An example for a 64-bit block cipher (like DES) encrypting a 79-bit input can be found in Figure 2.

Fig. 2. CBC mode with ciphertext stealing
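The following sketch mirrors the verbal description above on byte boundaries; the CBC XOR chaining is elided for brevity (see [17] for the full mode), and `encrypt_block` stands for any one-block cipher operation under a fixed key.

```python
def cts_encrypt(u: bytes, block_len: int, encrypt_block) -> bytes:
    """Ciphertext stealing for block_len < len(u) <= 2 * block_len (in bytes)."""
    assert block_len < len(u) <= 2 * block_len
    u1, u2 = u[:block_len], u[block_len:]
    e1 = encrypt_block(u1)                # E_k(u1) = c1 || c2
    cut = len(u) - block_len              # l_c1 = l_u - l_B
    c1, c2 = e1[:cut], e1[cut:]
    c3 = encrypt_block(c2 + u2)           # |c2| + |u2| = block_len
    return c1 + c3                        # ciphertext length equals len(u)
```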
Details on **Elliptic Curve Cryptography (ECC)** can be found in [18]. For ease of reading, we just define the basics of ECC here.
**Definition:** Let $E(\mathbb{Z}_p)$ be an elliptic curve group, where $p$ is an odd prime. Let $P \in E(\mathbb{Z}_p)$ be a point of prime order $q$, where $q \mid \#E(\mathbb{Z}_p)$. The **Elliptic Curve Discrete Logarithm Problem (ECDLP)** is the following: given a (random) point $Q \in \langle P \rangle$ and $P$, find $k \in \mathbb{Z}_q$ such that $Q = kP$.
By $SM(k, P)$ we henceforth denote the **Scalar Multiplication** $kP$ in $E(\mathbb{Z}_p)$. It is believed that the ECDLP using $l_p \approx l_q \approx 160$ is secure against powerful attacks like Pollard’s rho algorithm [18].
**Point Compression** [19]: A point on an elliptic curve consists of two coordinates and so requires $2l_p$ bits of space. It is clear that for every $x$-value there exist at most two possible $y$-values. Since they only differ in the algebraic sign, it suffices to store only one bit instead of the whole $y$-value. A point $(x, y)$ can hence be stored as $x||b$, where $b = y \mod 2$, and then only requires $l_p + 1$ bits of space.
This has the only drawback that, if we want to include the point in some computation, we first have to compute the two possible $y$-values and then decide by $b$ which of them is correct. In our case, we are only interested in saving space; there is no necessity to compute $y$ here.
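A compressed point is thus just the pair $(x, b)$; the two-line sketch below shows the compression direction (decompression additionally needs a modular square root, omitted here).

```python
def compress_point(x: int, y: int) -> tuple[int, int]:
    """Store x plus the parity bit b = y mod 2 instead of the full y-value."""
    return x, y & 1   # l_p + 1 bits of storage instead of 2 * l_p
```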
B. Collision-free Number Generators
In [12], we proposed so-called collision-free number generators (CFNGs) as a mechanism for generating random but system-wide unique (cryptographic) parameters. Basically, these generators disguise a unique (possibly publicly known) parameter by use of a randomizer. In the scope of e-government identifiers, the information being disguised will be the digital identity of the citizen. The resulting parameter will be a system-wide unique domain-specific digital pseudonym.
The output $o$ of a basic – type 1 – CFNG (denoted as CFNG 1 in the remainder of this article, also see Figure 3) is of the form
$$o = f(u, r) || r = f_r(u) || r = \text{CFNG1}(),$$
with $f$ being an injective mixing transformation for an arbitrary but fixed randomizer $r$, and $u, r$ defined as above. We suggest using either an injective one-way mixing transformation for $f_r$ according to Shannon [20] (e.g., symmetric encryption) or an injective probabilistic one-way function based on an intractable problem (e.g., the discrete logarithm problem [10]).
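As an illustration, here is a minimal type-1 CFNG with $f_r$ instantiated as 3DES under the randomizer (pycryptodome), and hypothetical widths (4-byte $ID$, 4-byte counter) chosen so that $u$ fills exactly one block; a real deployment would fix these parameters as in [12].

```python
import os
from Crypto.Cipher import DES3

class CFNG1:
    """o = f_r(ID || cnt) || r, with f_r = 3DES keyed by the randomizer r."""
    def __init__(self, ID: bytes):
        assert len(ID) == 4          # hypothetical generator-ID width
        self.ID, self.cnt = ID, 0
    def next(self) -> bytes:
        self.cnt += 1                                   # never reuse a counter
        u = self.ID + self.cnt.to_bytes(4, "big")       # 8 bytes = one block
        r = DES3.adjust_key_parity(os.urandom(24))      # fresh random 3DES key
        c = DES3.new(r, DES3.MODE_ECB).encrypt(u)       # injective for fixed r
        return c + r                                    # output o = f_r(u) || r
```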
In this paper, we will just revisit the proofs of uniqueness. For a detailed discussion of randomness, efficiency and privacy protection, we refer the reader to [12].
**Theorem:** Outputs of Type 1 CFNGs are unique during their lifetime.
**Proof:** Consider two outputs of two arbitrary type 1 CFNGs: $o_1 = \text{CFNG1}_1() = f_{r_1}(u_1) || r_1$ and $o_2 = \text{CFNG1}_2() = f_{r_2}(u_2) || r_2$, with $r_1, r_2$ being random, $u_1 = ID_1 || cnt_1$ and $u_2 = ID_2 || cnt_2$. With respect to the randomizers $r_1$ and $r_2$, there are two cases:
1) $r_1 \neq r_2$: This directly means that $o_1 \neq o_2$.
2) $r_1 = r_2 = r$: Now, both calls of the generators employ the same randomizer and $f_r$ becomes injective. Hence $f_r(u_1)$ and $f_r(u_2)$ will be different if and only if $u_1 = ID_1 || cnt_1$ and $u_2 = ID_2 || cnt_2$ differ in at least one bit. This is always true, because
a) different generators use different identifiers ($ID_1 \neq ID_2$), and
b) if we call the same generator twice (i.e., $ID_1 = ID_2$), the values $cnt_1$ and $cnt_2$ will differ, because the counter is incremented at each call of the generator.
Hence the outputs $o_1$ and $o_2$ will be different again. $\square$
When analyzing CFNGs which employ a block cipher $E$ (CBC mode with ciphertext stealing) for $f$ ($o = E_r(ID || cnt) || r = c || r$), it is obvious that the identity of the generator is not protected sufficiently: everybody who gets hold of an output $o$ can retrieve the identifier $ID$ of the corresponding generator by simply decrypting $c$ by use of $r$: $ID || cnt = D_r(c)$.
We will see that this may not be a problem in certain application scenarios; but in order to guarantee the protection of the generator's $ID$, we either have to change our requirements on $f$, or we have to slightly change the design of CFNGs.
- To provide privacy, $f$ has to be a cryptographic one-way function. Candidates include injective probabilistic one-way functions based on an intractable problem like the (ECC) discrete logarithm problem [10].
- In the case that $f$ is a (bijective) symmetric encryption function, we can employ an additional (injective) one-way-function $g$ to the output or to the randomizer of the original CFNG, which results in the variants depicted in Figure 4 (CFNG 2 and CFNG 3):
1) The first variant simply hides the output of a type 1 CFNG by use of function $g$:
$$o = g(\text{CFNG1}()) = \text{CFNG2}(),$$
2) The second variant only hides parameter $r$ (which is needed to invert function $f$) by use of function $g$:
$$o = f(u, r) || g(r) = f_r(u) || g(r) = \text{CFNG3}(),$$
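A sketch of the two variants, reusing the CFNG1 class above. For illustration only, $g$ is stood in by SHA-1, which is one-way but, unlike the $g$ required by the corollary below, not provably injective; a real instantiation would use an injective one-way function such as the ECC mapping of Section IV.

```python
import hashlib

def cfng2(gen: "CFNG1") -> bytes:
    return hashlib.sha1(gen.next()).digest()    # o = g(CFNG1())

def cfng3(gen: "CFNG1") -> bytes:
    o = gen.next()
    c, r = o[:8], o[8:]                         # split f_r(u) || r (8 + 24 bytes)
    return c + hashlib.sha1(r).digest()         # o = f_r(u) || g(r)
```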
**Corollary:** Outputs of type 2 and type 3 CFNGs are unique during their lifetime.
**Proof:** Function $g$, applied to the unique inputs in type 2 and type 3 CFNGs, is an injective one-way function. Hence applying $g$ cannot destroy the uniqueness of the outputs. $\square$
IV. OUR PROPOSAL: UNIQUE DOMAIN-SPECIFIC CITIZEN IDENTIFIERS
In this section we present three methods to generate unique and unlinkable domain-specific citizen identifiers:
- **Method 1** (the basic principle) may be directly used as a replacement for the scheme described in Section II-B as it uses the same inputs (and inputs lengths) and generates outputs of equal length.
- **Method 2** uses slightly different (shorter) inputs, but employs more randomness to disguise the inputs. Nevertheless it may also be used as a replacement for the old citizen identifiers.
- **Method 3** uses the base number (40 bits) as the source of uniqueness instead of the base identifier (128 bits) used by methods 1 and 2. As with method 2, shorter inputs to the encryption function allow more randomness.
A. Basic Principle
Based on a type 2 CFNG employing elliptic curve cryptography (ECC, see [18] for details), elliptic curve scalar multiplication (SM) and point compression (PC), we now present a generator for system-wide unique and inter-domain unlinkable identifiers. As in the original scheme, our replacement (see Figure 5) generates 160-bit identifiers. But in contrast to the original scheme, these outputs are provably system-wide unique, as we employ type 2 CFNGs (see Figure 4, left) parameterized as follows:
1) Inputs: Base identifier \( bID \) (128 bit) and domain identifier \( dID \) (five uppercase letters encoded in 24 bit).
2) Starting from the output length of 160 bits, we have to subtract one bit to encode the $y$-coordinate of the ECC point, 24 bits to encode the domain identifier and 128 bits to store the base identifier. This leaves 7 bits for the randomizer.
3) The unique and inter-domain unlinkable identifiers \( dcID \) is of the following form:
\[
dcID = PC(SM((DES(u, k) || r), P)),
\]
where \( P \) is a so-called generator point of the elliptic curve, \( |r| = 7 \) and \( k = msb_{56}(H(r)) \). In order to reduce redundancy and the bit length of the input of the encryption function, we omit the constant URN-prefix.
Since we employ DES to encrypt the base identifier, we need to expand the randomizer \( r \) (7 bits) to 56 bits. This can easily be achieved by use of a hash-function \( H \) (e.g., RIPEMD-160 [16] or SHA-1 [9]) and a trimming function \( msb \), which extracts the 56 most significant bits: \( k = msb_{56}(H(r)) \). Note that the low entropy of key \( k \) is not a severe problem here, because the only purpose of \( k \) (based on randomizer \( r \)) is to hamper brute-force attacks (by a factor of \( 2^7 = 128 \) in this setting).
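The key expansion is a one-liner; a sketch with SHA-1 as \( H \):

```python
import hashlib

def expand_randomizer(r: bytes) -> bytes:
    """k = msb_56(H(r)): stretch the 7-bit randomizer into a 56-bit DES key."""
    return hashlib.sha1(r).digest()[:7]   # first 7 bytes = 56 most significant bits

# r carries only 7 bits of entropy, i.e., one of 128 values 0x00..0x7f:
# k = expand_randomizer(bytes([0x5a]))
```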
B. Variant 1
Up to now, the Austrian e-government act [21] and the corresponding domain regulation [7] define just 35 different domain identifiers (see Table I).
So spending 24 bits to store the domain identifier \( dID \) is massive overhead. A more practical solution is to halve the bit length of \( dID \) (i.e., to 12 bits) and to use a binary encoding instead of the text encoding. By this, the randomizer \( r \) can be enlarged by 12 bits, which results in \( |r| = 19 \) bits.
C. Variant 2
This variant directly uses the base number \( B \) (5 bytes = 40 bits) instead of the base identifier \( bID \) (128 bits) and hence shortens the input of the encryption function by 88 bits. We use some of the bits to embed additional data \( X \), which might hold a counter in order to provide different identifiers within the same domain. The remaining bits are used to enlarge the randomizer to 80 bits and to replace DES with SKIPJACK [22] (block length 64 bits, key length 80 bits) in CBC mode with ciphertext stealing. This finally results in unique and unlinkable domain-specific identifiers of the form:
\[
dcID = PC(SM((SKIPJACK_r(B || X || dID) || r), P)),
\]
with \( |B| = 40 \) bits, \( |X| = 15 \) bits, \( |dID| = 24 \) bits and \( |r| = 80 \) bits. The 79-bit ciphertext (\( 40 + 15 + 24 \) bits, encrypted with ciphertext stealing as in Figure 2), the 80-bit randomizer and the compressed point's parity bit again add up to a 160-bit identifier.
Note that the direct use of the base number (which can also be the passport or social insurance number) may be prohibited by law. In this case, variant 1 or the basic scheme have to be used.
V. CONCLUSION AND FUTURE WORK
We are aware that the proposed scheme is first of all a replacement for a national standard for generating unlinkable domain-specific identifiers (a standard which does not completely fulfil its own requirements). Nevertheless, provably system-wide unique, unlinkable and domain-specific identifiers based on collision-free number generators (CFNGs), parameterized as defined in Section IV-A, may be employed in other application scenarios as well. These scenarios include identifiers in the context of e-business, the replacement of UUIDs and GUIDs [12], temporary MACs for untraceable network devices [23], and digital pseudonyms [24], [25].
REFERENCES
[1] P. Leach, M. Mealling, and R. Salz, “RFC 4122 – A Universally Unique IDentifier (UUID) URN Namespace,” 2005, (retrieved: 12/2011). [Online]. Available: http://www.ietf.org/rfc/rfc4122.txt
[2] Microsoft Developer Network, “Globally Unique Identifiers (GUIDs),” http://msdn.microsoft.com/en-us/library/cc246025.aspx, 2008, (retrieved: 12/2011).
[3] National Institute of Standards and Technology (NIST), “NIST Special Publication 800-67: Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher,” 2008.
[4] ISO/IEC, “ISO/IEC 10116: Modes of Operation of an n-bit Block Cipher,” ISO/IEC, 1991.
[5] National Institute of Standards and Technology (NIST), “NIST Special Publication 800-38A: Recommendation for Block Cipher Modes of Operation – Methods and Techniques,” 2001.
[6] S. Josefsson, “RFC 4648 – The Base16, Base32, and Base64 Data Encodings,” 2006, (retrieved: 12/2011). [Online]. Available: http://www.ietf.org/rfc/rfc4648.txt
[7] Republik Österreich, “Verordnung des Bundeskanzlers, mit der staatliche Tätigkeitsbereiche für Zwecke der Identifikation in E-Government-Kommunikationen abgegrenzt werden (E-Government-Bereichsabgrenzungsverordnung – E-Gov-BerAbgV) SiF: BGBl. II Nr. 289/2004. (Fassung vom 14.9.2011),” 2004, (retrieved: 12/2011). [Online]. Available: http://www.ris.bka.gv.at
[8] ISO/IEC, “ISO/IEC 8859-1:1998, Information technology – 8-bit single-byte coded graphic character sets – Part 1: Latin alphabet No. 1,” ISO/IEC, 1998.
[9] National Institute of Standards and Technology (NIST), “FIPS Publication 180-2: Secure Hash Standard,” 2002.
[10] A. Menezes, S. Vanstone, and P. V. Oorschot, Handbook of Applied Cryptography. CRC Press, Inc., 1996.
[11] IPCOM, “Method of generating unique quasi-random numbers as a function of time and space. PriorArtDatabase. IPCOM#000007118D,” 2002, http://priorartdatabase.com/IPCOM/000007118 (retrieved: 12/2011).
[12] M. Schaffer, P. Schartner, and S. Rass, “Universally Unique Identifiers: How To Ensure Uniqueness While Protecting The Issuer’s Privacy,” in Security and Management, S. Aissi and H. Arabnia, Eds. CSREA Press, 2007, pp. 198–204.
[13] P. Schartner, “Random but system-wide unique unlinkable parameters,” JIS Journal of Information Security, vol. 3, no. 1, January 2012, ISSN Print: 2153-1234, ISSN Online: 2153-1242, in print. [Online]. Available: http://www.scirp.org/journal/jis
[14] National Institute of Standards and Technology (NIST), “FIPS Publication 46-3: Data Encryption Standard (DES),” 1999.
[15] ——, “FIPS Publication 197 – Advanced Encryption Standard (AES),” 2001.
[16] H. Dobbertin, A. Bosselaers, and B. Preneel, “RIPEMD-160: A strengthened version of RIPEMD,” in Proceedings of Fast Software Encryption (FSE), ser. LNCS, D. Gollmann, Ed., vol. 1039. Springer, 1996, pp. 71–82.
[17] C. Meyer and S. Matyas, Cryptography: A New Dimension in Computer Data Security. John Wiley & Sons Inc, 1982.
[18] D. Hankerson, A. J. Menezes, and S. A. Vanstone, Guide to Elliptic Curve Cryptography. Springer-Verlag, 2004.
[19] IEEE, “Std 1363-2000: IEEE Standard Specifications for Public-Key Cryptography,” 2000.
[20] C. Shannon, “Communication theory of secrecy systems,” Bell System Technical Journal, vol. 28(4), pp. 656–715, 1949.
[21] Republik Österreich, “Bundesgesetz über Regelungen zur Erleichterung des elektronischen Verkehrs mit öffentlichen Stellen (E-Government-Gesetz – E-GovG), BGBl. I 10/2004. (Fassung vom 3.3.2011),” 2010, (retrieved: 12/2011). [Online]. Available: http://www.ris.bka.gv.at
[22] National Institute of Standards and Technology (NIST), “SKIPJACK and KEA Algorithm Specifications, ver. 2.0,” 1998.
[23] M. Schaffer and P. Schartner, “Untraceable Network Devices,” Klagenfurt University (Austria) – System Security Group (syssec), Tech. Rep. TR-syssec-06-04, November 2006.
[24] P. Schartner and M. Schaffer, “Unique User-Generated Digital Pseudonyms,” in MMM-ACNS, ser. Lecture Notes in Computer Science, V. Gorodetsky, I. Kotenko, and V. Skormin, Eds., vol. 3685. Springer, 2005, pp. 194–205.
[25] ——, “Efficient privacy-enhancing techniques for medical databases,” in BIOSTEC (Selected Papers), ser. Communications in Computer and Information Science, A. L. N. Fred, J. Filipe, and H. Gamboa, Eds., vol. 25. Springer, 2008, pp. 467–478.
Sociological Reflections on E-government
Maria João Simões
Department of Sociology
University of Beira Interior, Covilhã
Researcher at Research Centre of Social Sciences (CICS), University of Minho, Braga
Portugal
e-mail: email@example.com
Abstract — The objective of this paper is to present dimensions of sociological analysis that allow a more comprehensive and interpretative analysis of e-government. This effort contributes to a more critical analysis of its implementation, chosen devices and assessment. The analytical dimensions presented are: (i) citizenship models; (ii) metatheoretical frameworks on society and technology; (iii) the concept of e-government and its articulated domains. The paper intends to demonstrate that the choice between the options of each dimension contributes to different kinds of e-government and different results. E-government is not a neutral issue. The citizenship model adopted makes, in a very incisive way, all the difference in the conception, design, working and results of e-government. The theoretical framework underlying e-government likewise shapes its design, working and results. But the devices chosen are per se insufficient to characterize an e-government, as their potentialities can be used in completely different ways by people and rulers. Research and projects on e-government are principally focused on e-administration and underestimate e-democracy and e-society, which have been analysed separately; this makes a more comprehensive and all-encompassing analysis and assessment of e-government difficult.
Keywords – e-government; participation; technology; society.
I. INTRODUCTION
Most research projects have a strong descriptive approach, probably because, on the one hand, they have, to a large extent, a practically oriented approach focusing on development projects, applications or case studies. On the other hand, most research comes from the information systems field, where the major focus is the conception, design and application of devices. So it can be said that e-government is still an under-analysed area from a theoretical and conceptual point of view, as noted by Simões [1], Heeks and Bailur [2], and Lindblad-Gidlund and Axelsson [3].
E-government lacks deeper theoretical and conceptual frameworks from the social sciences, particularly sociology, which can better explain, in a more comprehensive, interpretative and all-encompassing way, what e-government is, why, what for and how it is implemented. Such frameworks would allow a critical analysis of different visions of e-government, the purposes of its creation in each social context, the interests that underlie its creation, and the adopted models of e-government and applications, and would also help to better understand why different social and technological results are achieved.
The above allows us to state that e-government is clearly an interdisciplinary area; more intensive interdisciplinary research is crucial, especially between researchers from information systems and from the social sciences, particularly the sociology of science and technology, political sociology and the sociology of organizations. Surprisingly, although political sociology is a widespread field, sociologists have underestimated research on e-government.
This paper thus takes up the challenge of Lindblad-Gidlund and Axelsson [4], who argue that it is necessary to establish communicating vessels among different scientific areas for rigorous and relevant e-government research.
Accordingly, based on a literature review regarding different theories of the relationship between society and technology, on a critical reading of crucial literature on the subject, namely Oliver and Sanders [5], Mayer-Schönberger and Lazer [6], and Cunningham and Cunningham [7], and on our experience in projects on local e-government [8] and on e-participation [9][10], our purpose in this paper is to present critical dimensions, from a sociological point of view, that can allow a more comprehensive and critical approach to e-government research, implementation and assessment.
The dimensions of the critical analysis, viewed from a sociological perspective, are stated in a triptych. Firstly, two ideal types of citizenship models are introduced. Secondly, different metatheoretical frameworks on society and technology are debated. Thirdly, the concept of e-government and its articulated domains are discussed: e-administration, e-democracy and e-society. As a conclusion, final considerations are presented.
II. E-GOVERNMENT IS NOT A SEPARATE ISSUE OF CITIZENSHIP
First of all, government is one of the most important components of a state: it is from the way it is organized and the way rulers establish their interaction with people that we can tell whether we are dealing, for example, with a dictatorial or a democratic state. In this sense, as government is a polysemic term, e-government is also polysemic.
But the same happens with democratic government, which is not a neutral term either. The history of democracy has been shaped by maximalist and minimalist versions of citizenship. When we talk about e-government, which citizenship version are we talking about? We therefore maintain that in any research project, whether more theoretical or more empirical, and indeed in any implementation project, the conception of citizenship used has to be made explicit.
A more active or passive concept of citizenship will induce significant variations in the type of services and their contents, in the quantity, quality and kind of available information, and in the communication patterns, that is, in the kind of e-government adopted and its working. In that sense it is important to reflect on the different impacts that these different conceptions of citizenship have on e-government and on the chosen applications, as we will discuss further on.
Taking into account the Weberian methodology of the ideal type, two opposite kinds of political participation are presented [11], constructed for clarification purposes, knowing that between these two ideal types there are other models in which different combinations of both can be found.
Passive citizenship is embedded in the liberal perspective that inspires western democracies, where the citizen's role follows an individualist and instrumentalist approach, the citizen being granted full rights. The individual has, as Oldfield [12] sustains, not only epistemological priority, but also ontological and moral priority.
For this author, citizenship is seen as a legal status which must be sought and, once achieved, sustained. The state and other institutions are looked upon in a utilitarian way: it is only expected that they provide the conditions for individuals to maximize their own benefits and reach their goals, without any notion of common welfare present. Citizens are nevertheless required to fulfil a certain set of civic obligations towards the state, namely to vote, to pay taxes and to defend the country in the case of external menace.
To liberals, politics is a realm of the government, considered only as what politicians, specialists, political parties and bureaucrats do [13][14].
Although political communication consists of emission and reception, verbalization and listening, liberal theory values speech and neglects listening. It is easier for those in charge to speak than to listen.
Participation is largely reduced to choosing between several options, thus giving the winner the power to set the direction of the world we live in. The vote is embedded in a negotiation model within which choices are predetermined, limiting not only the opportunities for choice but also the imagination. As Barber [15] says, there are few other possibilities that allow voters to express their opinions, leaving the citizen as a simple spectator.
In an active participation model, the citizen is a member of a political community in which he or she assumes a central position. Citizenship is not just a *status*: participation is an objective in itself. In this political *praxis*, being a non-participant means, in many ways, that one can be an individual but not a citizen [16][17].
For the latter author, for people to be engaged in citizenship practice three conditions are required, all of them necessary but none alone sufficient: resources, participation opportunities and motivation. In the resources domain, beyond the assurance of civil, political and social rights, economic and social resources (a reasonable living income, education, health, among others) as well as competences regarding political activity are also crucial.
On the other hand, participation opportunities have to be assured, which implies the creation and widening of an appropriate institutional setting at several levels (local, national, global, and also at horizontal and specialized levels) that stimulates civic participation in general and, in particular, a rational understanding of and better information about public issues, participation in agenda setting, deliberation and decision making, among other activities.
Individuals also have to be encouraged to participate, to exercise their political rights and duties, that is to say, to be citizens. One cannot expect, as Oldfield [18] writes, citizenship *praxis* and civic conscience to appear spontaneously. As Steinko [19] states, mobilization implies that people feel there is a link between politics and their daily life in all spheres, be they local (namely education, employment and environmental issues), national or global.
Actually, considering the growing distance between rulers and people, by adopting these procedures (e-)government, especially in a local or regional context, can become a setting not only for a closer interaction between both but also for reducing citizens' scepticism concerning politics.
In this model, there is a broader conception of politics, involving all public issues in which the citizens have the right to be involved; «politics describes the realm of *we*» [20].
Access to information is indispensable for the practice of citizenship, but it is only a necessary, not a sufficient, condition. Equally important is the kind of information that is delivered. Information has to focus on the problems faced by citizens; it has to be contextualized and justified, and it should explain the consequences of the political choices that can be made. The removal of information barriers alone is not enough [21][22][23].
Speech is valued equally with listening, in a recurring and permanent interaction, established upward and downward, between rulers and citizens.
In this sense, Hacker’s [24] political interactivity model has heuristic value. While daily interaction can be simplified to a message and its answer, and perhaps a further message from the first user, political interaction requires two additional messages, as seen in Figure 1.
The first message (m1) goes from the citizen to the politician, who, in return, sends his or her feedback to the citizen (m2). The content of this message determines what happens in the established interaction. Depending on whether the requested information or the citizens’ expectations are met, citizens can answer back (m3), and the government can respond through political action (m4) or through an explanation (m5) of why such a course of action cannot be fulfilled.
Figure 1. A basic model of political interactivity [25]
More messages can be exchanged, but this five-step interaction flow is the basic political interactivity model from which more complex models can be built, emphasizing upward and downward communication between rulers and citizens.
Besides vertical communication, horizontal communication is also considered crucial. On the one hand, political choice includes deliberation, because individuals involved in collective participation do not always agree on their civic and political concerns. On the other hand, deliberation helps them to transcend their narrow interests; it is through debate that individuals frequently come together, re-evaluate, and can reformulate the values, beliefs and opinions upon which their political participation rests (Barber [26]; Yankelovich [27]; Oldfield [28]).
According to the citizenship model adopted, the conception, design, implementation, results and even the assessment process of e-government will differ. Consequently, the services, the kind of information delivered, the communication patterns and the applications will be different. So the choice of one of these citizenship models makes all the difference from the analytical and empirical point of view and for the achieved results.
III. THEORETICAL FRAMEWORKS ON TECHNOLOGY AND SOCIETY
Once the citizenship model is chosen, we face different theoretical frameworks whenever we engage in further research or intend to present and implement an e-government project. The metatheoretical framework selected has to be made explicit because it has consequences for the chosen e-government models, for their implementation, and for the achieved results and their assessment.
Many e-government projects and their implementation are based on technological determinism, the underlying idea being that the properties of technology, namely those used by e-governments, are transposed to and absorbed by societies, producing the same effects upon them. This notion forgets that technological devices are not neutral, since the results will depend on: the specific social and organizational context in which e-government is implemented; the values and interests of the promoters, which influence the choice of e-government models; leadership; the degree of commitment of the staff and the strategy followed; resistance to or involvement in its implementation; and the way people and rulers appropriate and shape the devices for their use.
The fact that technological determinism has dominated this discussion is one of several reasons for the deficit in a more comprehensive and interpretative analysis of e-government.
Opting for that metatheoretical framework implies that e-government projects and their implementation, like other projects, focus on infrastructures, hardware and software, and that the assessment indicators are, to a large extent, including in the European Union (EU), predominantly technological [29].
As an alternative to technological determinism, a more sophisticated model of the relation between society and technology can be used, in which technology and society are mutually related, which Simões [30] calls reciprocal conditioning. This metatheoretical framework has more heuristic potential because it takes into account crucial social aspects (such as power, interest groups, conflict and values) that are present in the conception, implementation and execution phases, and therefore in the outputs reached by e-government. It is within this framework that it can be said that “e-government is more about government than about ‘e’” [31]. On the other hand, it does not underestimate the fact that each technological device can condition our action in a specific direction and not in any other.
In this sense, within such a metatheoretical framework, the conception of e-government projects, their implementation and the back-office, process, output and demand indicators embrace both social and technological aspects.
Contrary to the claims of technologically deterministic authors and several designers, the choice of applications is not sufficient to characterize an e-government.
Firstly, designers can conceive or install, for example, applications for horizontal communication (from the more “traditional”, such as fora, to the more recent web 2.0 tools: Facebook, Twitter and so on), but rulers or people, depending on their interests and goals, can make unexpected use of them. As an example, political parties and rulers in several countries use Facebook to communicate with people while blocking the possibility of reply [32]. So applications designed for horizontal communication can be used for vertical, downward communication.
Secondly, when a communication device is available, communication might not be started, whether because rulers consider themselves the legitimate representatives of the citizens, who should confine themselves to the episodic election of their representatives, or because citizens are apathetic or do not believe that it is worthwhile, that is, that anything will come of their participation. During the timeframe of the Digital Cities Program, the Portuguese Operational Program for Information Society (POSI) and the Operational Program for Knowledge Society, programs which ran from 1998 to 2006, cities and administrative regions submitted projects to turn themselves into digital cities and territories, e-government being one of the major focuses. The great concern with technological modernization and the prevailing technologically deterministic perspective led the promoters to focus mainly on technological infrastructures and software. Most projects encompass devices, although different from one another, allowing horizontal and vertical communication.
We found differences neither in the devices chosen by municipalities ruled by leftist or by right-wing parties, nor in the concerns about the actual use of these devices. There was one exception: in a municipality where the mayor invested in more active citizen participation, facing the apathy of the people, the mayor said he would focus mainly on face-to-face participation modalities and only later take information and communication technologies into consideration [33]. Nowadays, in some Portuguese cities, new e-government experiences based on higher citizen participation remain to be researched.
Thirdly, there can be some stimulus for citizens to participate even when the government has no concern for its citizens' worries and anxieties. This is just an illusion of participation, which can be amplified by an automatic e-mail reply thanking the citizen for participating, even when there is no real intention of actually answering and only a vague promise of taking the citizen's contribution into account.
Some researchers have also «observed that the same information system in different organizational contexts leads to different results. Indeed, the same system might produce beneficial effects in one setting and negative effects in a different setting» [34]. Once again, we can point out that technologically deterministic authors underestimate the several social factors that make the difference in the results of e-government.
IV. E-GOVERNMENT: CONCEPT AND ITS DOMAINS
A normative and evaluative component underlies the concept of e-government. From a sociological point of view, it is important to understand whether a given claim is a political or normative position or rather a scientific one. For example, it is often said that “e-government is better government” [35]. From a scientific point of view, only through empirical evidence can we verify whether e-government can foster greater engagement with citizens and enable better quality services and policy results.
Several authors, such as St-Amant [36], identify three interrelated domains of e-government: e-administration, e-democracy and e-society. The first concerns administrative modernization, the efficiency and efficacy of services, and whether electronic services do or do not improve services to citizens, the citizens being seen principally as customers.
In the domain of e-democracy, the debate concerns the extent to which ICT can enhance citizen participation and the relationship between rulers and ruled. The debate on e-democracy is wide but has, to a large extent, been carried out disconnected from e-government. We can find more pessimistic points of view, such as Sunstein's [37], very optimistic ones, such as Rheingold's [38], and more realistic perspectives, stated namely by Simões [39], that identify new opportunities but also new constraints on e-democracy. Either way, this discussion is not the focus of this paper. Regardless of these perspectives having different empirical implications and results, we have different models of political participation in real or virtual contexts. The model of participation chosen within e-government implies different devices, different uses, different ways of implementation and different achieved results. This is one of the central issues of this paper.
The e-society domain attempts to verify whether ICT contributes to strengthening the relations between government and civil society organizations, namely NGOs, trade unions, universities, R&D institutions, cultural associations, sports clubs and also corporations.
These domains have frequently been studied separately, as if they were completely different issues. We argue that although a research effort or a project can focus more on one of the e-government domains, it has to take all of them into account, because they are all closely interconnected, as we have emphasized throughout the paper.
V. FINAL REFLECTIONS
The objective of this paper was to present sociological dimensions of analysis that allow a more comprehensive and interpretative analysis of e-government: its implementation, its chosen applications and its assessment.
The dimensions analysed allow a more critical and deeper debate about the interconnection between social and technological factors concerning e-government. Thus, we call for more intensive interdisciplinarity among different scientific areas in order to achieve relevant and rigorous e-government research.
E-government is not a neutral issue. A more active or passive conception of citizenship has significant implications for e-government conception, design, implementation and results.
Depending on the chosen participation model, we will find differences in the kind of information and services delivered, the patterns of communication, and the intensity and frequency of the interaction between rulers and people.
The adoption of a technological determinist perspective, or of one of reciprocal conditioning between technology and society, also has different implications for e-government, leading to different kinds of e-government and necessarily different applications. Since users can shape applications according to their interests and needs, the chosen applications per se are insufficient to characterise the kind of e-government; that is only possible with an ongoing assessment and with indicators embracing technological and social aspects.
E-government research is, to a large extent, centred on e-administration and underestimates e-democracy and e-society, domains largely analysed apart. Although the efficiency and efficacy of services are crucial for e-government to work, e-government is not a corporation: it is more concerned with the government of people, with e-democracy and e-society. So, even if a research effort or a project focuses more on a single e-government domain, it has to take all of them into account, as they are all closely interconnected. If we do not head down this path, we drift away from the essence of the e-government concept.
Further research could deepen this theoretical reflection on e-government, connecting it to more extensive empirical research and identifying e-government assessment indicators that encompass both social and technological aspects.
REFERENCES
[1] M. J. Simões, Política e Tecnologia – Tecnologias da Informação e da Comunicação e Participação Política em Portugal, Oeiras: Celta, 2005.
[2] R. Heeks and S. B. Bailur, “Analyzing e-government research: Perspectives, philosophies, theories, methods, and practice”, Government Information Quarterly, vol. 24, 2007, pp. 243-265.
[3] K. Lindblad-Gidlund and K. Axelsson, “Communicating Vessels for Relevant and Rigorous eGovernment Research”, P. Cunningham and M. Cunningham (Eds.), Collaboration and the Knowledge Economy – Issues, Applications, Case Studies, vol.5, Part 1, 2008, pp. 255-261.
[4] K. Lindblad-Gidlund and K. Axelsson, “Communicating Vessels for Relevant and Rigorous eGovernment Research”, P. Cunningham and M. Cunningham (Eds.), Collaboration and the Knowledge Economy – Issues, Applications, Case Studies, vol.5, Part 1, 2008, pp. 255-261.
[5] L. Oliver and L. Sanders, (Eds.), E-Government Reconsidered: Renewal of Governance for the Knowledge Age, Regina: University of Regina, 2004.
[6] V. Mayer-Schönberger and D. Lazer, (Eds.), From Electronic Government to Information Government, Cambridge: The MIT Press, 2007.
[7] P. Cunningham, and M. Cunningham, (Eds.), Collaboration and the Knowledge Economy – Issues, Applications, Case Studies, Amsterdam: IOS Press, vol. 5, part 1, 2008.
[8] M. J. Simões (coord.), Dos Projectos às Regiões Digitais - Que Desafios?, Lisbon: Celta, 2008.
[9] M. J. Simões, Política e Tecnologia – Tecnologias da Informação e da Comunicação e Participação Política em Portugal, Oeiras: Celta, 2005.
[10] M. J. Simões, A. Barriga and N. A. Jerónimo, Brave New World? Political participation and new media, SOTICS 2011: The First International Conference on Social Eco-Informatics, pp. 55-60, Copyright (c) IARIA, ISBN: 978-1-61208-163-2, available at http://www.thinkmind.org/index.php?view=article&articleid=sotics_2_011_3_10_30040.
[11] Barber (1984), Oldfield (1998) and Held (1996) are crucial authors for our elaboration of citizenship typology; this typology is further discussed in Simões (2005) and in M. J. Simões and E. Araújo (2009).
[12] A. Oldfield, Citizenship and Community: Civil Republicanism and the Modern World, London: Routledge, 1998, (2ª Ed.).
[13] B. Barber, Strong Democracy: Participatory Politics for a New Age, Berkeley, University of California Press, 1984.
[14] D. Held, Models of Democracy, Cambridge: Polity Press, 1996, (2ª Ed.).
[15] B. Barber, Strong Democracy: Participatory Politics for a New Age, Berkeley, University of California Press, 1984.
[16] B. Barber, Strong Democracy: Participatory Politics for a New Age, Berkeley, University of California Press, 1984.
[17] A. Oldfield, Citizenship and Community: Civil Republicanism and the Modern World, London: Routledge, 1998, (2ª Ed.).
[18] A. Oldfield, Citizenship and Community: Civil Republicanism and the Modern World, London: Routledge, 1998, (2ª Ed.).
[19] A. Steinko, “Herramientas para un chequeo de la dinámica democrática”, REIS, 1, 1994, pp. 9-35.
[20] B. Barber, Strong Democracy: Participatory Politics for a New Age, Berkeley, University of California Press, 1984.
[21] B. Barber, Strong Democracy: Participatory Politics for a New Age, Berkeley, University of California Press, 1984.
[22] D. Yankelovich, Coming to Public Judgement – Making Democracy Work in a Complex World, Syracuse: Syracuse University Press, 1991.
[23] M. Hale, J. Musso and C. Weare, “Developing digital democracy: evidence from Californian municipal web pages” in Barry Hague and Brian Loader (Eds.), Digital Democracy: Discourse and Decision Making in the Information Age, London, Routledge, 1999, pp. 96-115.
[24] K. Hacker, “The White House Computer-mediated Communication (CMC) System and Political Interactivity” in K. Hacker and J. Dijk (Eds), Digital Democracy, London : Sage, 2000, pp. 105-129.
[25] K. Hacker, “The White House Computer-mediated Communication (CMC) System and Political Interactivity” in K. Hacker and J. Dijk (Eds), Digital Democracy, London : Sage, 2000, pp. 105-129, p. 123.
[26] B. Barber, Strong Democracy: Participatory Politics for a New Age, Berkeley, University of California Press, 1984, p.123.
[27] D. Yankelovich, Coming to Public Judgement – Making Democracy Work in a Complex World, Syracuse: Syracuse University Press, 1991.
[28] A. Oldfield, Citizenship and Community: Civil Republicanism and the Modern World, London: Routledge, 1998, (2ª Ed.).
[29] M. J. Simões and E. Araújo, “A sociological look at e-Democracy”, in Patrizia Bitonti and Vanessa Carrieri (Eds.) e-Gov 2.0: pave the way for e-participation, Roma: EuroSpace S.r.l., 2009, pp. 155-161.
[30] M. J. Simões, Política e Tecnologia – Tecnologias da Informação e da Comunicação e Participação Política em Portugal, Oeiras: Celta, 2005.
[31] OECD, The e-government imperative: main findings, www.oecd.org/publications/POL_brief, 07-09-2007, 2003, pp.1-7, p.1.
[32] M. J. Simões, A. Barriga, N. A. Jerónimo, Brave New World? Political participation and new media, SOTICS 2011: The First International Conference on Social Eco-Informatics, pp. 55-60, Copyright (c) IARIA, ISBN: 978-1-61208-163-2, available at http://www.thinkmind.org/index.php?view=article&articleid=sotics_2_011_3_10_30040.
[33] M. J. Simões (coord.), Dos Projectos às Regiões Digitais - Que Desafios?, Lisbon: Celta, 2008.
[34] J. Fountain, “Central Issues in the Political Development of the Virtual State”, The Network Society and the Knowledge Economy: Portugal in the Global Context Conference, March 4-5, http://www.umass.edu/digitalcenter/research/pdfs/jf_portugal2005_centralissues.pdf, 26-09-2007, 2005, pp.1-29, pp. 4-5.
[35] OECD, The e-government imperative: main findings, www.oecd.org/publications/POL_brief, 07-09-2007, 2003, pp. 1-7, p.1.
[36] G. St-Amant, “E-gouvernement: cadre d’évolution de l’administration électronique”, Revue Management et Système d’Information, vol. 10, n°1, ABI/INFORM Global, 2005, pp. 15-39.
[37] C. Sunstein, Republic.com, Princeton: Princeton University Press, 2001.
[38] H. Rheingold, Electronic Democracy Toolkit, available at www.well.com/user/hlr/electrondemoc.html, 06-09-2000, 1996.
[39] M. J. Simões, Política e Tecnologia – Tecnologias da Informação e da Comunicação e Participação Política em Portugal, Oeiras: Celta, 2005.
SSEDIC: Building a Thematic Network for European eID
Victoriano Giralt
Central ICT Services
University of Málaga
Málaga, Spain
e-mail: firstname.lastname@example.org
Hugo Kerschot
IS Practice
Brussels, Belgium
e-mail: email@example.com
Jon Shamah
EJ Consultants
Harrow, United Kingdom
e-mail: firstname.lastname@example.org
Abstract—Digital Identity is a critical element for a digital society as proposed by the Digital Agenda for Europe. The width and breadth of the subject makes it mean different things in different sectors, and even to different projects funded by the European Commission. Thus, having a network that provides a platform for all the stakeholders of electronic identity to work together and collaborate to prepare the agenda for a proposed Single European Digital Identity Community is of prime importance to the achievement of these goals. The network, SSEDIC, is working on identifying the actions and the timetable for the Digital Agenda and the successful launch of the European Large Scale Action and European Innovation Partnerships, as well as on providing a multi-stakeholder planning resource to assist their implementation. A first batch of deliverables will be presented to the European Commission at the end of February 2012 and then made available to the public. This paper presents the SSEDIC expert network as it is now, what has been done to build the network, and the accomplishments of its first year. But the most important aim of this paper is to increase awareness of SSEDIC and to reach out to valuable contributors that have not yet been identified, in order to get them involved in the network.
Keywords—Electronic Identity; Single European Digital Community; Digital Agenda for Europe.
I. INTRODUCTION
“Every European digital” [1] by 2020: this is the ambitious goal set by Commissioner Neelie Kroes for the Digital Agenda for Europe (DAE) [2]. In order to achieve this goal, a single European digital community is needed, and the sixteen DAE key actions [3] show how it could be achieved. The DAE also calls for stakeholder involvement to reach the goals.
Key action 16 of the Digital Agenda for Europe [3] proposes a Single Digital Identity Community, and scoping it is the purpose of SSEDIC. The network has built a platform where stakeholders can identify the actions and the timetable for their resolution, so as to result in the successful launch of the European Large Scale Action and European Innovation Partnerships (ELSA/EIP). This cannot happen out of thin air; thus, SSEDIC [8] is building upon the ELSA/EIP thematic consultations carried out by the ELSA/EIP eIDM (Electronic IDentity Management) Expert Working Group [4] and on today's Large Scale Pilots (LSPs) such as, but not limited to, STORK [5], PEPPOL [6] and SPOCS [7].
The SSEDIC thematic network [8], during its first year of existence, has established a series of stakeholder groups in sectors outlined in the ELSA/EIP report [4]. Each of the groups will consider, through further consultations, the political, economic, social, technical, legal and environmental aspects of a single European digital community.
This network is built by gathering experts from 35 partners and an initial group of associated partners. The former provide 67 experts in electronic identity (eID), picked for their knowledge of the eID or stakeholder domain rather than just as representatives of organisations. This has been a fundamental criterion for partners, to ensure that the views and consultations are of the highest value and relevance to this highly important thematic. The latter group can grow as much as wanted, and one of the main aims of the present paper is to increase the visibility of SSEDIC in as many pertinent communities as possible, because the ambition of this network is to build a community of high-level European and international experts up to 2013 and, if possible, beyond.
A stakeholder is defined as any group or individual that can affect or is affected by the achievement of SSEDIC [8] objectives. They often have differing interests and may put conflicting pressures on the project. The consortium needs to attend to a rich variety of claims and interests of stakeholder groups in the industry, yet at all times needs to profile a coherent identity of the project to each and every one of these groups. A wide range of persons and groups exist with differing legitimate interests in SSEDIC. Recognising and addressing their needs and interests will in turn enhance the performance of the project, ensure that it is aligned with market realities, and secure its continued acceptance.
Additionally an overarching and integrated view of the accumulated results and inputs from the various stakeholder sectors will be taken in order to build an overview and impact assessment of a single European digital community on the overall European Community and also on individual EU Member States.
The high level outcome of the SSEDIC thematic network [8] is to provide a wide ranging and valuable consultation-based resource and consensus which will enable the European Commission to understand the roadmap that must be addressed within the ELSA/EIP programme to progress Europe’s single
digital community vision as outlined by the DAE [2] across each and every sector of the European Community. And this output is intended to be a thought-through and widely agreed blueprint for step-by-step actions which can feed directly into the future ELSA/EIP-programme and drive that programme towards a successful conclusion delivering a European digital identity community.
In order to have real impact on society, the SSEDIC thematic network [8] is not intended to be an academic exercise; it is an action plan with a roadmap for the DAE for the coming decade. As such, SSEDIC results need to reflect a transformational shift in the way everyone in the European community will think, behave, transact and indeed live in the coming years. The vision to be established by SSEDIC can be a beacon to the rest of the world, demonstrating how the efficiency of a digital community can be translated into cultural and economic leadership. Although the vision will need to be technology led, it cannot be technology targeted; rather, the vision should integrate goals derived from stakeholder sector needs and benefits within a holistic framework of actions.
The present paper describes the SSEDIC thematic network's background and work methodology, which has led to a first batch of deliverables that will seed the final results expected by the end of 2013.
II. BACKGROUND
In 2009, the European Large Scale Actions consultations commenced the description of a Single European Digital Community. A number of SSEDIC partners made contributions via the ELSA/EIP eIDM Expert Working Group, which resulted in the *ELSA/EIP report* [4]. This consultation described an interoperable network of independent but regulated Identity Service Providers, many possibly Public-Private partnerships, which would make an eID (not *National* eID) available to each citizen within each Member State, while retaining full freedom of choice for the individual.
SSEDIC partners include representatives of member states with a *National eID* infrastructure as well as countries with alternative eID models.
SSEDIC partners include experts and organisations that have participated in the eID initiatives of Norway, Denmark, Austria, Italy, Belgium and Germany. Also, SSEDIC keeps contact with the eGov subgroup via the Commission, in order to assess eGovernment policy and in particular eID policies of the Member States.
The Large Scale Pilots such as STORK [5], SPOCS [7], epSOS [10], PEPPOL [6] and other projects are critical to the success of the Single European Digital Community. The technology and standards being evolved will form the cornerstones for interoperability, not only for cross-border use cases, but also to establish and cement trustworthy relations between Identity Service Providers within the same countries. Many of these projects could not join SSEDIC as full partners themselves, but are contributing as observers with strong inputs to the consultations, thus providing an opportunity to ensure the continuity and sustainability of their key outputs. On the other hand, many of the contributors to these projects are SSEDIC partners, so there is strong involvement of the coordinators of the LSPs in the SSEDIC network and overlap of many members. This will ensure that full mutual benefit is realised and that the standards, components and demonstrator experience can be incorporated into the consultative outcomes. As SSEDIC progresses, contacts with other EU projects in all sectors will be fostered.
The Higher Education sector is a special case, as it already has electronic identity infrastructures in production across Europe and beyond that connect research and educational institutions both inside Member States and inside and outside the Union. Three partners of the SEMIRAMIS project [11], which deals with eID supporting the movement of students inside the European Higher Education Area [12], are SSEDIC partners. Other partners connect SSEDIC to expert networks in Europe dealing with eID federations in research and education.
STORK [5] is essentially a proof of concept of technical interoperability, but the project has established the basic building blocks of the infrastructure that will ensure eID interoperability at European level, including common code for an architecture and interoperability platform which will be released under the EUPL. These building blocks address other dimensions beyond technical interoperability, such as multilateral trust mechanisms, a framework for security assessment of national infrastructural components, harmonisation of Quality Authentication Assurance mechanisms, etc. Additionally, the pilots that the project has set up, and which make use of the above-mentioned infrastructure, have a strong potential beyond the project time frame. SSEDIC will use the studies, technology overviews, and prototypes on new and upcoming technologies produced in STORK as the basis for the consultations with the expert groups. SSEDIC has a strong link to STORK through six common partners. These strong relationships will result in additional benefits and allow for suggestions as to how to exploit the achievements of STORK and its pilots into the future.
III. WORK PLAN AND METHODS
There is a **communication plan** supported by an **action plan**, covering the 36-month duration of the project, to use and disseminate knowledge at different levels. The main areas addressed by the plan are:
- promoting the SSEDIC thematic network identity and results within the network and beyond;
- sharing general knowledge, specific information and documents through open source collaborative tools;
- organising meetings, seminars and workshops in different formats for the use of the entire thematic network team, using traditional as well as innovative techniques;
- continuously integrating the SSEDIC thematic network knowledge in partners' dissemination channels.
A. **PESTLE**
The aggregation and organisation of the consultation data will be based on PESTLE, which stands for:
• Political
• Economic
• Sociological
• Technological
• Legal
• Environmental
PESTLE analysis is an audit of an organisation’s environmental influences with the purpose of using this information to guide strategic decision-making. A PESTLE analysis is a useful tool for understanding the big picture of the environment in which any organisation is operating [13].
All consultation work will take into account these six aspects, where relevant.
The PESTLE analysis will allow for the production of timelines with specific actions to be carried out in each of the six aspects.
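As an illustration only (the data structure and the example actions below are invented, not SSEDIC deliverables), the roll-up from PESTLE-tagged consultation findings to per-aspect action timelines could be sketched as follows:

```python
# Minimal sketch: tag consultation findings with one of the six PESTLE
# aspects and roll them up into per-aspect timelines. Rows are invented.
from collections import defaultdict

PESTLE = ("Political", "Economic", "Sociological",
          "Technological", "Legal", "Environmental")

findings = [
    ("Harmonise authentication assurance levels", "Technological", 2012),
    ("Assess eID business and revenue models", "Economic", 2013),
    ("Agree cross-border liability rules", "Legal", 2013),
]

timelines = defaultdict(list)
for action, aspect, year in findings:
    assert aspect in PESTLE              # every finding maps to one aspect
    timelines[aspect].append((year, action))

for aspect in PESTLE:                    # one chronological timeline per aspect
    for year, action in sorted(timelines[aspect]):
        print(f"{aspect} {year}: {action}")
```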
B. Activities
SSEDIC has established a series of stakeholder groups in sectors contributing to the EIP. Each of the groups considers, through further consultations, the political, economic, social, technical, legal and environmental aspects of a single European digital community. This group is formed of the experts from the 35 partners plus experts from associated partners, whose number should grow by accretion over the 36-month lifespan of the project. Each stakeholder work programme consists of brainstorming workshops, strategy papers and joint meetings with more general sector organisations to gain a fuller understanding of the requirements and prerequisite actions for delivery of the vision in that sector. Hard data will be built up to consider the impact and opportunities of the single European digital community in the short, medium and long term.
C. Work Packages
SSEDIC has organised the consultation at 3 levels and each of these levels has a dedicated work package:
1) Stakeholders Sector Consultation
2) Technology and Infrastructure Consultation
3) Business Model and Regulations Consultation
plus three global work packages dedicated to coordination, dissemination and outcome management.
The materials produced by the three consultation work packages will be merged by the outcomes management one and used for dissemination.
The stakeholders sector consultation, due to the large number of stakeholders sectors, has required these to be formed into 6 groupings described in Figure 1.
The consultation on technology and infrastructure is split into natural areas of interest:
• Security
• Privacy and Ethics
• Enrolment
• Identity Models
– Nonrepudiation
• Interoperability
• Identity Service Provision
• Authentication
• High Level Architecture
– Standards
– Integration
– Resilience
– EU projects
• Accessibility
– Credentials
– Accessibility
• Operations
– Regulations
– Monitoring
– Quality of Service
The Business Model and Regulatory consultation is considering business models, revenue models, and regulatory regimes needed to establish a successful vision. It is looking in depth at the ELSA/EIP Thematic consultation [4] and further expanding these business aspects. It is also looking at Member State issues, interoperability and also cross stakeholder benefits. This work package overlaps many stakeholder and technology issues such as privacy, ethics and standards.
D. Tools
The ambition of the SSEDIC thematic network is to build a community of high-level European and international experts. This community is being built via virtual tools, such as a dedicated online workspace and online conferences, as well as via real live events integrated in the EEMA (European association for eIdentity and security) [9] conference programme and other major European events.
Our strength is the quality of our network individuals, not just their affiliate organisations. Each of the experts is picked for their knowledge of the eID or stakeholder domain rather than just as representatives of organisations. This has been a fundamental criterion for partners to ensure that the views and consultations are of the highest value and relevance to this highly important thematic project.
The network is composed of four groups of partners who check and balance each other in their different sectors:
• Industry: who have strong contacts in the private sector
• Public sector: with access to government and local agencies and stakeholders
• Academic partners: with critical reflections on industry services and on public sector requirements and interests, such as those of Erasmus students.
• Small and medium sized consultancies: who have strong influence across all domains.
However much dedication and knowledge individuals can devote to the SSEDIC network, there is a clear need for technological tools to support them. Thus, SSEDIC has established a main dissemination web site, http://www.eid-ssedic.eu/, with both public and private areas for document and information sharing and download. Final results will be published on this web site when available.
A second technological tool is an online social networking and community-building platform provided by one of the SSEDIC partners, located at http://ssedic.syncsphere.com/. This tool is used for discussions on the consultation process and allows the network experts to share knowledge and opinions.
The third major technological tool is an online surveying platform, which has allowed the network to carry out a first survey on eID, gathering input from 211 experts on the matter.
Of course, other tools such as email and teleconferencing are also being used to coordinate the expert network.
Non-technological tools, such as publications and presentations at relevant events, are also being used for the dissemination and outreach of the SSEDIC thematic network.
E. Barriers
The SSEDIC thematic network identified four barriers that could hinder its efforts and prepared mitigation actions:
1) Lack of response from stakeholders
A large, and increasing, number of experts minimises this risk
2) Non-representative opinions
Minimised by carefully monitoring each sector
3) Lack of funds for in-depth citizen consultation
Mitigated by the use of online surveying tools
4) Contrary interests of stakeholders
Mitigated by already existing consensus for the need of a common vision of eID and the wide involvement of stakeholders in SSEDIC
IV. PROJECT FIRST YEAR
The project officially started on December 15th, 2010 with the SSEDIC kick-off meeting. The main aims of this first year of the project have been to:
- Introduce the project to all stakeholders;
- Make stakeholders aware of basic information regarding the SSEDIC project
- Inform stakeholders of the portal as an information resource
- Help promote the project in conferences and other events
- Initiate interaction with stakeholders and receive feedback and reactions about the project that will be used in media relations and in designing the dissemination plan
- Promote participation of institutions and organisations through the Project Forums and surveys in the portal
- Focus from the start on establishing a favourable reputation for the project and consortium
- Profile a coherent identity of the project to each and every one of the stakeholder groups
V. RESULTS
It must be emphasised that SSEDIC is a thematic network for consultation and is not mandated to make decisions on technologies. However, it is envisaged that technical recommendations and statements of Best Practice will be agreed and presented among the final outcomes.
A. Expected Final Outcomes
SSEDIC [8] final outcomes should enable the European Commission to instigate measures that allow Member States to fulfil the vision.
1. SSEDIC will create for each Stakeholder Sector an electronically retrievable resource, containing the main consultations, consensus and impacts. This should enable the entire European Community to conduct actions that will ultimately contribute to or benefit from the Single European Digital Identity Community.
2. SSEDIC will create, at the technical level, an electronically retrievable road map of critical actions, milestones and time lines. This road map will outline how to achieve the vision of the Single European Digital Identity Community.
3. SSEDIC will create a combined topological mapping of all the Stakeholder and Technical Sectors, which will be integrated into this single high level road map. This mapping will ensure that role/responsibility divisions and expectations are clear to all stakeholders.
4. SSEDIC will create a combined impact assessment summary, across all the Stakeholder and Technical Sectors integrating all business and regulatory issues.
B. Achieved Results
Over its first year of existence, the SSEDIC thematic network has already been able to produce some good-quality deliverables with interesting and relevant information. These materials will be available on the main dissemination web site, http://www.eid-ssedic.eu/, once they are presented to the
European Commission. We are not authorised to reveal the contents until this presentation has occurred. The results will be available under a Creative Commons license in order to achieve as high an impact as possible.
Many of the partners have given presentations at sector events as well as at institutional meetings, such as at the European Parliament or the International Telecommunication Union, and at events organised by network partner EEMA [9].
Each of the relevant work packages has produced a report describing the activities and consultations carried out in its area, interim reports and conclusions, calls for action, and future activities.
211 eID experts from all over Europe were surveyed on items covering:
- The impact (or not) of eID in a professional environment
- Their policy views on different aspects of eID
- The adoption of the eID technology in a business environment
- Possible future business and governance models on eID infrastructure
- Security and privacy aspects of eID
The results of this survey have already been processed and transformed into a report that will also be presented to the European Commission at the end of February 2012 and then made publicly available on the SSEDIC main dissemination web site, http://www.eid-ssedic.eu/.
VI. Conclusions and Future Work
The network's start has not been an easy one, because it is difficult to coordinate such a big and heterogeneous group of people as busy as field experts. But the willingness of this same group of people to collaborate towards a common vision of eID in Europe, and beyond, has been key to a successful start.
Work is already progressing as predicted and the first results have seen the light, though, as of the writing of the present paper, they are not yet generally available.
Further work during the two remaining years of the project, into 2013, is clearly defined, along with ways to improve what has already been done, such as:
- More sector reports
- Surveys focused on different sectors
- An increased level of discussion
- A big eID event involving all interested parties, stakeholders and LSPs
ACKNOWLEDGEMENTS
The present paper is not in any way personal work; it is a recount of the work done by the SSEDIC thematic network consortium partners to win the bid and set up the project, and of the work of all partners, consortium and associate, and other contributors to the consultations, who have given excellent input to the results achieved so far. Of course, the authors would like to thank all of these people, and also the LSPs and CIPs that have paved the way and made our network necessary.
REFERENCES
[1] N. Kroes, Every European Digital, Neelie Kroes Blog, URL: http://blogs.ec.europa.eu/neelie-kroes/every-european-digital/ retrieved: November 21st, 2011.
[2] European Commission, Digital Agenda for Europe, URL: http://ec.europa.eu/information_society/digital-agenda/index_en.htm retrieved: November 21st, 2011.
[3] European Commission, Digital Agenda for Europe: key initiatives, URL: http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/10/200 &format=HTML&aged=0&language=EN&guiLanguage=en retrieved: November 21st, 2011.
[4] ELSA Thematic Working Group on Electronic Identity Infrastructure, European Large Scale bridging Action (ELSA/EIP): Electronic Identity Management Infrastructure for trust worthy services, European Commission, Directorate-General for Information Society and Media, URL: http://ec.europa.eu/information_society/activities/e-government/docs/studies/elsa_eid_thematic_report_final.pdf retrieved: November 21st, 2011.
[5] STORK project, Secure Identity Across Borders Linked, URL: http://www.eid-stork.eu/ retrieved: November 21st, 2011.
[6] PEPPOL project, Pan-European Public Procurement OnLine, URL: http://www.peppol.eu/ retrieved: November 21st, 2011.
[7] SPOCS project, Simple Procedures Online for Cross-Border Services, URL: http://www.eu-spocs.eu/ retrieved: November 21st, 2011.
[8] SSEDIC project, SSEDIC: Building a Thematic Network for European eID, URL: http://www.eid-ssedic.eu/ retrieved: November 21st, 2011.
[9] EEMA, European Association for eIdentity and Security, URL: http://www.eema.org/ retrieved: November 21st, 2011.
[10] epSOS project, Smart Open Services for European Patients, URL: http://www.epsos.eu/ retrieved: November 21st, 2011.
[11] SEMIRAMIS project, Secure Management of Information across multiple Stakeholders, URL: http://www.semiramis-cip.eu/ retrieved: November 21st, 2011.
[12] EHEA, European Higher Education Area, URL: http://www.ehea.info/ retrieved: November 21st, 2011.
[13] F.J. Aguilar, Scanning the business environment. Macmillan, New York, 1967.
Designing National Identity:
An Organisational Perspective on Requirements for National Identity Management Systems
Adrian Rahaman, Martina Angela Sasse
Computer Science Department
University College London
London, United Kingdom
email@example.com, firstname.lastname@example.org
Abstract - Many National Identity Management Systems today are designed and implemented with little debate of the technologies and information required to fulfil their goals. This paper presents a theoretical framework detailing the organisational requirements that governments should consider to implement effective identity systems. Analysis is based on publicly available documentation on the implementation of National Identity Systems in the countries of Brunei, India, and the United Kingdom. The findings and the framework highlight the importance of clearly defining the purpose of the system, which has implications on the authenticity, uniqueness, and uses of identity; failure to consider these components is likely to lead to ineffective identity systems and policies.
Keywords – identity; identity management system; organisation; government; policy
1. INTRODUCTION
Identity is a valuable resource that shapes and defines social interactions [1] by reducing uncertainty and building trust between parties. Governments have traditionally provided identities for their citizens, and used them to manage the provision of services. In an age of growing travel and migration, and facing threats such as illegal immigration, crime and terrorism, “many governments today are now trying to reassess their identity policies in light of technological changes” [2].
Governments have tended to view National Identity Management Systems (N-IDMSs) technology as a silver bullet – or at least cornerstone – for tackling these problems, but fail to consider the complexity of delivering such systems [3]. In the UK, attempts to short-cut debate and deliver a system quickly led to adoption of a system that has now been scrapped [4]. Without proper consideration of the purpose and operational requirements of an N-IDMS, it is unlikely to deliver its stated goals.
Convinced that requiring strong proof of identity means an increase in security, governments have not examined the use of identity beyond it. But personalised and customer/citizen-centric services mean that identity is no longer just a mechanism for individuals to access resources – it has itself become a valuable resource, accessed by organisations to inform their decisions [5-7].
Still, most research on this topic focuses on identity as a security mechanism. For example, a very comprehensive model for governments’ transition to digital N-IDMSs [8] mainly describes its use for online authentication purposes; [9] developed an IDMS architecture that places identity as a layer below information resources.
The research presented in this paper moves beyond the security perspective, viewing identity as a strategic resource. The aim of the study was to uncover organisational identity requirements, and their effects on the design and implementation of IDMSs (The term organisation as used within this document refers to the organisation that is implementing the IDMS).
Section II below explains the methodology followed in this study. In Section III, we present our findings, and explore the processes of identity construction and identity use. Section IV highlights the importance of purpose, which then ties all the findings into a single framework. In Section V, we discuss the implications for future N-IDMSs: to meet their defined purpose, the key factors of authenticity, uniqueness, and the objectives of the relying parties have to be clearly defined.
II. METHODOLOGY
Our research used a case study approach – a systematic analysis of the identity phenomenon in 3 different cases [10] of N-IDMS implementations in Brunei, India, and the United Kingdom.
The data analysed on the UK and India N-IDMSs was publicly available system documentation published by the respective lead agencies (IPS and UIDAI respectively); for the Brunei case study, interview sessions with key government officials were recorded and transcribed; interviews were conducted with:
• 3 employees from the lead agency (BruNIR) that deal with strategy and implementation of the system.
• 2 employees from a security organisation that works with the lead agency on the N-IDMS.
• 3 employees from a Relying Party that makes use of the N-IDMS as an authenticator
• 1 employee from a Relying Party that was seen as a prime candidate during the initial phases but is now considering launching its own IDMS system.
The data was analysed using Grounded Theory, a method to develop theory that is grounded in data [11]; i.e., it does not start with a preconceived theory, but seeks to generate new theory through a systematic collection and analysis of data [12].
III. RESULTS
Our analysis revealed that organisational identity requirements, and their eventual impacts on the final design of the system, can be divided into two main areas: **identity creation** and **identity application**.
A. Identity creation
When an identity system is first implemented, a new and unique context is created, within which identities need to be instantiated. It is within this newly created context that an organisation needs to ensure the **correctness** of identities being enrolled. This process is important because it affects the integrity of the identity, and has an impact on the type and amount of personal information being collected and stored.
The challenge of the enrolment process is that it involves the verification of unknown individuals. Organisations typically fall back on two main criteria when enrolling new identities: **authenticity** and **uniqueness**.
1) Authenticity
Authenticity describes the truthfulness of an identity created within the IDMS. It seeks to answer the question, *is the individual who he says he is?* Organisations typically ensure authenticity of an individual’s identity by verifying his/her biographical information against different sources. Organisations can vary the source of information by choosing between two different schemes:
- **Introducer-based schemes** build on the concept of personal referrals - having an already enrolled individual vouch for the authenticity of the individual who is attempting to enrol in the system.
- **Document-based schemes** are designed around the use of available identification documents provided by other organisations (bank statements, utility bills, etc). Such schemes rely on third-party organisations confirming the authenticity of enrolling individuals.
While an organisation can choose between the two sources of information, it is limited by the context of its implementation; the main contextual factors that influence the applicability of these schemes are **universality** and **intimacy**.
a) Universality
This concept captures how many members of the target population already possess widely accepted forms of identity documents. These are identities that individuals have established with third-party organisations with whom they have a trust relationship; examples of such organisations include banks, utilities, and municipalities that an individual has interacted with for a period of time.
The degree of universality in the target population will affect an organisation’s ability to rely on a document-based scheme for authenticity. Specifically, having little to no universality would remove this option, because many individuals would not be able to provide the required documents.
The case study of the Indian N-IDMS provides an example of the problem arising from low universality. A large section of the population has been locked out of both public and private services; the weak identity infrastructure has resulted in a fragmented approach to enrolment in current systems, placing large burdens on most of the poor population to prove their identity, and leaving many denied access to basic services as a result.
"...every time an individual tries to access a benefit or service, they must undergo a full cycle of identity verification. Different service providers also often have different requirements in the documents they demand, the forms that require filling out, and the information they collect on the individual. Such duplication of effort and identity silos increases overall costs of identification, and cause extreme inconvenience to the individual. This approach is especially unfair to India's poor and underprivileged residents, who usually lack identity documentation, and find it difficult to meet the costs of multiple verification processes." [13]
Given the aim is to provide access to its poorer citizens, India cannot create an N-IDMS that relies on a document-based scheme. Therefore, the UIDAI has chosen an introducer-based scheme, "where introducers authorized by the Registrar authenticate the identity and address of the resident" [14].
In contrast, the abandoned UK N-IDMS was strongly motivated by prevention of criminal activities and illegal immigration. While the system documentation does state that the UK N-IDMS will make it easier to prove identity [15], UK citizens were not being denied services because of a lack of identity - most of the population had recognised forms of identity provided by third-party organisations.

Figure 1. Organisation's Identity Creation process - Authenticity
The UK system took a document-based approach, requiring individuals who enrol for an identity to provide several different documents as proof for the authenticity of the claimed identity [16]. The government required that individuals provide documents that have some form of unique identifier such as passport numbers, driving license numbers, national insurance numbers and "any number of any designated document, which is held by him" [17]. This
creates an *information net* around the claimed identity, which the government can then use to ensure authenticity by verifying the individual’s personal information with the relevant third party organisations.
b) *Intimacy*
The concept of intimacy captures how much of the targeted population is already known to the organisation. High levels of intimacy imply that the organisation can have more confidence in an introducer-based scheme, because it can support a transitive trust arrangement that extends from known individuals to unknown ones.
The effects of intimacy can be seen in the Bruneian context and its combined approach to ensuring authenticity, incorporating elements of both a document-based and an introducer-based scheme. Having run an identity system since 1949 [18], the government has been enrolling the identities of all individuals born and staying within the country, and has thus established a great deal of intimacy with its population. While individuals are required to provide their birth certificates as proof during enrolment, the government also records the identity numbers of the individual's parents. This in effect creates a hybrid document-introducer scheme, where the authenticity of the individual is proven with a minimal amount of documentary evidence, further supported by linkages to introducers that are already enrolled within the system.
While India has an introducer-based scheme, the government’s choice in the matter is forced by unsatisfactory levels of universality. However, India now faces the problem that there is not enough intimacy to support introducers, as used in the Brunei case. Having never registered identities of past populations, the UIDAI in India cannot currently rely on parents as introducers to the system. Therefore, the government has devised a scheme to artificially boost intimacy through a set of defined trusted recognised introducers.
Introducer and document-based schemes are not orthogonal. Both make use of transitive trust to ensure the authenticity of the claimed identity. The document-based scheme is basically an institutionalised version of the introducer-based scheme. At the centre of the document-based scheme is the reliance on identity documents that have been produced by third-party institutions, which fulfil the role of introducer. In the end, the authenticity of the claimed identity is verified by a trusted third party.
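A minimal sketch of this shared transitive-trust structure follows; the function, data shapes and the two-document threshold are invented for illustration, not drawn from any of the three systems studied:

```python
# Minimal sketch: both schemes delegate trust to a third party, either an
# already-enrolled introducer or the institution that issued a document.
def authenticity_check(applicant: dict, enrolled_ids: set,
                       trusted_issuers: set) -> bool:
    # Introducer-based: an already-enrolled individual vouches for the
    # applicant (e.g., a parent's identity number in the Brunei case).
    if any(i in enrolled_ids for i in applicant.get("introducers", [])):
        return True
    # Document-based: institutionalised introducers, i.e., trusted
    # third parties that issued the presented documents.
    docs = [d for d in applicant.get("documents", [])
            if d["issuer"] in trusted_issuers]
    return len(docs) >= 2   # the threshold is an invented policy parameter
```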
2) *Uniqueness*
Apart from authenticity, organisations also need to consider uniqueness – that is, to ensure that an identity cannot be enrolled more than once into the identity database. Organisations' desire for uniqueness is driven by concerns about identity fraud, where individuals might attempt to enrol multiple times, potentially using multiple personas, to gain extra benefits. Organisations typically attempt to tackle this issue of *de-duplication* through the use of biometric data [19].
Today, organisations can choose between various biometric solutions, facial, fingerprint, and iris recognition being the current solutions of choice. An organisation's choice of biometric is influenced by 3 main criteria: *obligations, performance, and population*.
a) *Obligations*
The first hurdle an organisation faces when choosing a biometric technology is the set of obligations that it must conform to, such as *international standards and current practices*.
International standards influence the choice of biometrics, especially if individuals' identity is meant to be portable across different countries, organisations, or contexts. If the organisation aims to achieve interoperability, this determines not only the type of biometric used, but also the format in which the data is stored. For example, the UK government defended its choice of fingerprints with the need to comply with standards published by the International Civil Aviation Organisation (ICAO) [20]; however, the ICAO standards only prescribe *how* fingerprints should be implemented *if* they are used on such documents – they do not mandate the use of fingerprints itself [21].
Similarly, although the UIDAI did not focus on ensuring compatibility with other countries, adhering to an accepted standard remained an issue, to help create a consistent and portable identity within India’s large borders. The report from the Indian Biometric Committee recommended the implementation of biometrics based on international standards (ISO 19794), stating that the “*standards are widely accepted, and best embody previous experiences of the US and Europe with biometrics*” [19].

Organisations also face obligations around current practices that they, or other organisations that they might work with, have already implemented. The existence of current practices around the use of certain biometrics implies the availability of experience, expertise, and infrastructure around that particular biometric. Such familiarity with a particular biometric can help to ease the
implementation of a new identity system that makes use of the same biometric.
In the UK context, this can be seen in the relationship between the Identity and Passport Service (IPS) and the Immigration and Nationality Directorate (IND) [20]. Prior to the plans for an N-IDMS, the IND had already been processing, recording, and storing facial and fingerprint biometrics of foreigners for the purpose of UK visa applications. Thus, when the IPS finalised its plans for the N-IDMS, it chose to build on the IND's systems, directly storing fingerprints and facial biometrics on IND databases. In the Bruneian context, the biometrics deployed in the previous N-IDMS were carried forward into the new one, making use of the fingerprints and facial photographs that they were already familiar with.
b) Performance
Aside from their obligations, organisations are also influenced by the performance of the various biometrics; this can be expressed in terms of **accuracy** and **human interpretation**.
**Accuracy** captures the ability of the biometric technology to correctly match biometrics presented for verification against biometric templates that have been previously recorded. During enrolment, organisations typically want to prevent individuals from enrolling more than once. This is achieved by choosing biometrics that provide the required levels of accuracy. Failure to match comes in the form of False Acceptance - an impostor being wrongly accepted against an enrolled identity - and False Rejection - an enrolled individual being rejected by the system [22] (further discussion of these measures is outside the scope of this work). Organisations should also consider the ease with which the biometric can be circumvented. For example, facial biometrics is “considered a poor biometric for use in de-duplication”, as an individual can easily avoid identification through “the use of a disguise, which will cause False Negatives in a screening” [19].
While the use of biometrics to ensure uniqueness is typically an automated process, a manual form of identity checking is required when a false rejection is encountered. Since the system is unable to accurately distinguish between two or more biometrics, some form of backup authentication is required to confirm or deny the false rejection. Therefore, having a biometric that enables quick manual checking becomes a necessity. Most biometrics do not lend themselves easily to manual inspection. As a result, facial biometrics becomes attractive to organisations simply because it provides a backup option through **human interpretation** [19].
“We use AFIS, Automated Fingerprint Identification System. All the fingerprints captured will be processed with the fingerprint matching, and this is very useful when the citizen does registration of the card. This is to ensure that one citizen holds one card and number only. Those who register will go through the AFIS matching, and if it is OK, then we will do the registration. Otherwise there will be human intervention; a matching process, the system will list the possible candidates that match, but normally we go for a 100% match. There is a possibility of 70, 80, 90 and 100% match by fingerprints. The system also makes use of facial image, from the entries identified by AFIS. So it’s easy for us to do the matching, we can even assign the matching tasks to the clerk, by looking at the facial image and the percentage. It is very straightforward and user friendly.” [23]
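The flow this quote describes can be sketched as follows; this is a hedged illustration assuming a matcher that returns percentage similarity scores, with names and thresholds mirroring the quote rather than BruNIR's actual system:

```python
# Minimal sketch of AFIS-style de-duplication with a human fallback.
# Thresholds (100% auto-decision, 70% review floor) follow the quote above.
from dataclasses import dataclass

@dataclass
class Candidate:
    record_id: str
    fp_score: float      # fingerprint similarity from the matcher, 0-100 (%)
    facial_image: bytes  # shown to the clerk during manual review

def dedup_decision(candidates: list[Candidate]):
    exact = [c for c in candidates if c.fp_score >= 100.0]
    if exact:
        return ("duplicate", exact)     # one citizen holds one card only
    near = sorted((c for c in candidates if c.fp_score >= 70.0),
                  key=lambda c: c.fp_score, reverse=True)
    if near:
        return ("manual-review", near)  # clerk compares the facial images
    return ("register", [])             # no plausible match: enrol
```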
c) Population
An organisation’s performance considerations are in turn mediated by the population characteristics in 3 ways: **size**, **compatibility**, and **geographic diversity**.
First of all, organisations need to consider the **size** of the target population. Large population sizes can negatively affect the accuracy of the biometric. The choice of the 10-finger biometrics proposed in the UK and Indian scheme was made on those grounds. The Indian Biometric committee [19] established that “False Acceptance Rate is linearly proportional to gallery size”; using a 2 fingerprint scheme with a population size of 1.2 billion, the FAR was estimated to be 14%, which is well above the 1% mark that they required. Therefore, it was recommended to proceed with a 10-fingerprint scheme, which was estimated to provide a 0% FAR, maintaining the uniqueness of individuals in the database.
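To make the committee's scaling argument concrete: if each de-duplication search compares a new enrolee against all N records in the gallery, the chance of at least one false match is 1 - (1 - p)^N, which is approximately N times p for small per-comparison rates p, hence the cited linear dependence on gallery size. The sketch below is a back-of-the-envelope check; the per-comparison rate is back-computed from the cited 14% figure, not a measured value.

```python
# Minimal sketch: false-accept rate of a de-duplication search against a
# gallery of N records, given a per-comparison false match rate p.
import math

def dedup_far(p_single: float, gallery_size: int) -> float:
    """1 - (1 - p)^N, computed stably for very small p."""
    return -math.expm1(gallery_size * math.log1p(-p_single))

# Illustrative p back-computed from the committee's 14% figure at N = 1.2e9.
p = 0.14 / 1.2e9
print(dedup_far(p, 1_200_000_000))  # ~0.13, i.e., on the order of 14%
print(1.2e9 * p)                    # linear approximation N*p = 0.14
```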
The second population characteristic is **compatibility**, which captures the suitability of the biometric for use on the targeted population. Compatibility is commonly captured by tests demonstrating that accuracy is not affected by characteristics of the target population (e.g., skin tone); the lack of such studies was highlighted by the Indian Biometric Committee [19].
However, organisations must also consider other real-world cultural compatibility factors that are not captured by these tests. For example, the Indian Biometric Committee highlighted the use of Lawsonia inermis (henna) by women, stating that it can prevent the accurate collection of fingerprints as “sensors may not properly capture fingerprint features.” Another example is the large percentage of the population in India who are “employed in manual labour”, and thus provide “poor biometric samples”, as their fingerprints have been worn away by the nature of their work. Because of these issues, iris recognition is now seen to provide a better match for this population [19]. In Brunei, the BruNIR has encountered problems with the compatibility of fingerprints:
“…only one, the taking of the fingerprint. Because they can get worn out, and those are very difficult to capture. We identified that since the beginning of the project, and we came up with a solution to make use of moisturizer. It helps, but that is the major problem” [23].
**Geographic diversity** deals with the dispersion of the targeted population across the nation. This can affect the accuracy of the biometric because of varying conditions under which the biometric data is collected and used, and because procedures may be used differently. When a population is spread across a large geographic area, the organisation is unlikely to be able to collect all the information on its own; it will likely adopt an accredited enrolment strategy, where authorised third parties collect
information on their behalf. UK and India are prime examples of third-party enrolment, using private organisations to enrol and capture individual biometrics. This can result in "several non-technical factors that can impact accuracy more significantly than technical accuracy improvement efforts such as the lack of adherence to operational quality" [19].
B. Identity Application
In addition to identity construction, the organisation is also concerned about the mechanism with which enrolled identities are accessed and used. There are four main dependent constructs that affect organisations identity access policies: **relying parties**, **objectives**, **conditions**, and **accessibility**.
a) Relying parties
At the most basic level, the organisation needs to specify the relying parties that require access to individuals' identity. There are two main types of relying parties: **organisational** and **individual**.
First of all, there are **intra- or inter-organisational** entities that require access to the identity. Intra-organisational access is typically a requirement since the organisation needs to create and manage identities in the first place. But access to identities within the organisation can support other functions that the organisation needs to carry out. For example, the BruNIR is not only responsible for the distribution of the identity cards in the country, but also for the monitoring of identities across borders. Recent developments have meant that the Brunei identity card can now be used as a passport at land borders with Malaysia. Therefore, the BruNIR requires other forms of internal access to support these activities.
This is not so in the Indian context, where the UIDAI was set up solely to handle the registration of identities. The main focus here lies on inter-organisational access to identity. In its plans to introduce the identity system, the UIDAI clearly established and discussed plans with several different third-party organisations, including the PDS, the NREGS, and the general education and health provision systems.
In the UK, the IPS has defined both intra-organisational use of its systems (identity cards as passports) and its inter-organisational aims, identifying various agencies such as law enforcement agencies and the Department of Work and Pensions. The Bruneian context, on the other hand, has comparatively ill-defined inter-organisational obligations, only stating an intent to create a multi-purpose smart card that can be used by any third-party organisation.

Figure 3. Organisations' Identity Application requirements
In addition to organisational reliance on identity, the organisation also needs to recognise the **individual** as a relying party who may be able to access his or her own identity and personal information. This is especially the case in the UK scenario, where the IPS has specified that individuals should be able to access all their information on the system, which is envisioned to eventually be an online service [15], [20], [24]; India and Brunei have not specified any mechanisms by which individuals can directly access or view their identity records.
b) Objectives
Each relying party that the organisation identifies will have its own separate set of objectives. These can be expressed in terms of **enablement** and **proof**.
The use of identity to mediate the provision of services will always create a division between those who have access and those who do not; identity systems are primarily used either to enable or to deny access. Whether the intention is to use identity to enable or disable individuals is captured by the **enablement** construct. In India, the main intention of the relying parties is the enablement of poor citizens to access services that they have a right to but currently do not find accessible. Additionally, Indian banks are focused on introducing new forms of mobile banking, thus enabling individuals to access new services that are to be developed.
The primary objectives in the UK context are the prevention of undesirable activities (benefit fraud, crime, illegal immigration, and terrorism). The Bruneian context has described a largely enabling use of identity, with its intention to support the introduction of new on-line services introduced by third parties.
**Proof** describes the objective of the relying party in using individuals’ identity either as a single proof of identity, or as a key that enables the tracking of an individual across multiple interactions or contexts. The Indian case provides an illustration of a high-linkage scenario, where all relying parties are advised to use individuals’ identity numbers as a foreign key in their own systems. It even suggests that relying parties make use of individuals’ identity internally, so as to keep track of employees. The Bruneian case makes no such recommendations, nor does it enforce any rules governing such linkages, resulting in a mixed approach between parties, where certain relying parties use the identifier as an index to their records, while others merely use the identity as proof.
c) Conditions
The organisation will need to identify the conditions under which access to the identity will take place, as these may affect access requirements; they can be expressed as **risk level** and **timeliness**.
**Risk level** is a measure of the security-sensitive nature of the information access. Information access under high-risk conditions, such as those involving terrorism, would carry greater access privileges than a low-risk situation that has little implication for the country, organisation, or relying party.
The importance of risk level in the development of the identity system and the information access policies is most evident in the UK scenario. While most access by third parties would be recorded, any access for the purpose of counter-terrorism would take place without consent, and would not be recorded [15], [20], [24].
The BruNIR has created an official channel through which law enforcement can send a written request, with supporting reasons, to obtain information. The UIDAI has not specified any direct access to the information by third parties, but its N-IDMS plans state that one unique identifier per individual would be useful for third parties to keep track of employees who might pose a risk of corruption; for example, to track inspection officials who come into contact with food that is given out to the poor [25], or to confirm the presence of doctors and teachers, ensuring they are where they need to be [14].
In addition to risk level, the **timeliness** of information access is another factor to consider. Since one of the many cited benefits of an identity system is improving efficiency, it is not surprising that the need to access information quickly is an important factor.
An example is the planned use of the UK N-IDMS for Criminal Records Bureau (CRB) checks, which are required for persons applying for employment in a range of sectors. Existing CRB procedures take a long time to verify individuals’ identity and check their CRB status, leading to a backlog of applications. It would be beneficial if the agency carrying out these background checks could verify applicants’ identity more quickly, and thus it was seen as a prime candidate for gaining some form of access to the identity system; “the time for issuing Criminal Records Bureau disclosures could be reduced from 4 weeks to 3 days” (ID Card Benefits Overview).
The UIDAI has also highlighted the time-sensitive nature of third parties’ needs, stressing the importance of addressing the “prolonged delays in processing the application” for current ration cards, and the advantages of using the unique ID number in the distribution of rice grain [25]. The Brunei N-IDMS gives no specific examples regarding the timeliness of information, but improved efficiency was a main factor in the introduction of the smart card system, as it would allow the transfer of information in digital format, reducing the overhead of filling in forms [23].
d) Accessibility
Once the organisation has identified the relying parties, their respective objectives, and the conditions under which they are operating, it can then go on to define the accessibility of the system to these parties. Access to the system can be described in terms of **information set**, **localised**, and **direction**.
**Information set** describes the type and amount of identity information that the relying party will have access to. For the UK system, with its emphasis on national security, certain authorities would be able to gain access to all the personal information without individuals’ consent. The scenario in India is such that no relying party will have access to the personal information; the system will only confirm or deny the accuracy of personal information submitted to it. The BruNIR has stated that third-party organisations will not have any access to the database, and can only access the information that is visible on the card and stored on the smart chip.
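To make the Indian confirm-or-deny model concrete, the following is a minimal Python sketch of such a verification service. The record layout, field names, and ID number are invented for illustration; neither this paper nor the cited UIDAI documents publish an actual API.

```python
# Minimal sketch of a confirm-or-deny verification service: the relying
# party submits an ID number plus the attributes it wants confirmed, and
# receives only a boolean. The stored record itself is never returned.
# All names and values below are illustrative.

RECORDS = {
    "1234-5678-9012": {"name": "A. Resident", "date_of_birth": "1980-01-01"},
}

def verify(id_number: str, claimed: dict) -> bool:
    """Return True only if every claimed attribute matches the stored record."""
    record = RECORDS.get(id_number)
    if record is None:
        return False  # Unknown ID numbers are simply denied.
    return all(record.get(field) == value for field, value in claimed.items())

print(verify("1234-5678-9012", {"name": "A. Resident"}))  # True
print(verify("1234-5678-9012", {"name": "B. Resident"}))  # False
```

The design choice mirrors the access policy described above: the relying party learns whether its claim is accurate, but gains no new personal information from the exchange.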
**Localised** refers to the spatial mode of access to the identity system: at one end of the spectrum, a check of identity can be limited to a local point, at which an individual physically presents the identity; at the other end is remote access to identity through a networked database by any number of parties. The Bruneian N-IDMS does not provide third parties with any remote access to its database; all the information and authentication functions that the relying party can access are stored on the card itself. The raison d’être of the Indian N-IDMS is remote authentication, so third parties have access across a network. The UK N-IDMS specified a range of access options, including local options (such as visual authentication and local chip authentication), but also fingerprint authentication across a network.
Meanwhile, the **direction** of information access describes the push or pull nature of identity access; this in turn defines the read (including authentication) and write capabilities of the relying party. The Indian N-IDMS does not provide relying parties with any ability to write information to the database. The transactions are a pull of information, where the third party requests confirmation of identity. The UK N-IDMS is able to record information about third-party access when performing authentication procedures. A new entry is created on the database recording the time and location of the authentication; this represents a combined push-pull operation, where information is both read from and written to the identity database.
The Bruneian N-IDMS does not provide any remote access, but law enforcement can send in a written request, which is a remote pull of information. However, third parties can store information onto the chip when required. This represents a local push of information onto the card, and therefore affects the overall information access policies that need to be set in place.
IV. FIT FOR PURPOSE
The previous sections have catalogued organisations’ options in the construction and use of identity; the choice of options has to match the purposes for which the system is deployed. *Who are the relying parties that require access, and what identity information does the system need to hold?* To ensure that the system being implemented will be fit for purpose, an organisation needs to tailor the identity construction to support the requirements of those purposes.
The Indian system, with its stated purpose of enabling access to services for the poor, was quick to identify welfare agencies as relying parties, and to ensure that individuals are able to enrol (by devising the appropriate authenticity requirements for a target population that suffers from both low universality and intimacy).
In the UK, with the main purpose being the reduction of crime and terrorism, the organisation identified law enforcement agencies as a core relying party, as well as defining strict authenticity and uniqueness requirements that would support its security goals.
In Brunei, the main aims of the system were firstly to modernise the current identity infrastructure, and secondly to create a multi-function digital identity infrastructure that could be used by various third parties (especially in the provision of e-government services). As of now, there has been relatively low uptake of the system by third parties. This investigation reveals that this is due to the failure to specify relevant third parties and thus to cater for their needs and requirements. However, recent efforts to engage with a relevant stakeholder in the neighbouring country of Malaysia have resulted in the use of the identity cards as digital passports [26].
V. CONCLUSION AND FUTURE WORK
Using a case study approach, three different implementations of N-IDMS were examined and compared, uncovering a set of choices that organisations can make over **identity construction** and **identity use** processes. These choices must be made in line with the **purpose** of the IDMS.
The organisation’s requirements for identity construction will determine the amount and type of information that is collected and stored. The choice of biographical information is influenced by the organisation’s **authenticity** requirements, which are further mediated by the **universality** of current identity documents and the level of **intimacy** between the organisation and the target population. The organisation’s **uniqueness** requirements influence the choice of biometric information; they are affected by the **obligations** to which the organisation must adhere, as well as the **performance** of the biometric, which must be considered within real-world **population** parameters.
The requirements for the use of identity will affect the identity access policies implemented. Beginning with the **relying parties** that need to access the system, the organisation must consider the various **objectives** of each party, as well as the **conditions** in which they operate. Only then will the organisation be able to specify the **accessibility** of the system, and hence the identity access policies.
It should also be noted that the purpose also has an influence over the authenticity/uniqueness requirements, and vice versa. Certain purposes might require different sets of information, and the type of information within the system will place limitations on the purpose of the system; for example, a system that provides proof of age only needs to collect individuals’ date of birth, whereas one designed to counter terrorism may require address information, and possibly audit trails of use.
The findings of this study further the current understanding of factors that should be considered in the design and operation of NIDMS; the codification of the identity requirements into a framework can be used to aid discussions and critiques of IDMSs. For example, [27] state that attention should be paid to issues of purpose, population scope, data scope, and users of the data. In our framework, those elements are refined into more detailed concepts and the relationships between the various concepts are elaborated. Similarly, [3] describes a short-circuiting of identity debates through the use of international obligations, language ambiguity, technological focus, and expertise. Our framework addresses these concerns by explicitly listing the considerations, thus reducing ambiguity and short-circuiting, while also introducing non-technological decisions such as relying parties, and their unique purposes.
The organisation’s uniqueness consideration provides another point of comparison with current work in the field. Drawing from the recommendations of [28] for implementing biometric systems, organisations should not only pay attention to the False Acceptance and False Rejection rates of biometric technology, but also consider how well biometrics match population characteristics and how easy they will be to present; these are all present in the framework as sub-dimensions of the performance and environment constructs.
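As a brief illustration of these two error rates, the Python sketch below computes the False Acceptance Rate (FAR) and False Rejection Rate (FRR) of a hypothetical matcher at a fixed decision threshold; the scores are invented for the example.

```python
# FAR: fraction of impostor (different-person) comparisons wrongly accepted.
# FRR: fraction of genuine (same-person) comparisons wrongly rejected.
genuine_scores = [0.91, 0.85, 0.78, 0.45, 0.95]   # same-person comparisons
impostor_scores = [0.12, 0.33, 0.58, 0.07, 0.41]  # different-person comparisons
threshold = 0.5  # a comparison is accepted when its score reaches the threshold

far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
print(f"FAR = {far:.0%}, FRR = {frr:.0%}")  # FAR = 20%, FRR = 20%
```

Raising the threshold lowers the FAR but raises the FRR, which is why matcher performance must be weighed against the characteristics of the enrolled population.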
Therefore, the framework here serves as a guide for organisations and system designers to build effective N-IDMSs. It encourages focused debate, consideration, and definition of various critical components, ensuring that the identity information collected and the technology chosen are both fit for purpose, thus assisting in the implementation of successful identity systems.
A limitation of our current research is its emphasis on biometric identifiers; the systems in all three case studies depend on them. Not all IDMSs use biometric systems, which limits the generalisability of the uniqueness framework. Future research will need to address these concerns and further develop the framework to be applicable to non-biometric implementations.
Work will also need to be done to develop guidelines to effectively express requirements for uniqueness, authenticity and purpose; doing so will further help to increase communication in the field and encourage proper debate, while keeping the scope of the system concise and to the point.
Figure 4. Complete framework displaying organisational identity requirements
REFERENCES
[1] A. Rahaman and M. A. Sasse, “A framework for the lived experience of identity,” Identity in the Information Society, vol. 3, no. 3, pp. 605-638, 2011.
[2] E. A. Whitley and G. Hosein, “Global identity policies and technology: do we understand the question?,” Global Policy, vol. 1, no. 2, pp. 209-215, May. 2010.
[3] E. A. Whitley and G. Hosein, Global challenges for identity policies. New York, USA: Palgrave Macmillan, 2010.
[4] BBC, “Identity cards scheme will be axed ‘within 100 days’,” BBC News, 2010.
[5] M. Meints, and H. Zwingelberg, "Identity Management Systems – recent developments," Deliverable 7.2, Future of Identity in the Digital Society, 2009.
[6] J. Taylor, M. Lips, and J. Organ, “Information-intensive government and the layering and sorting of citizenship,” Public Money and Management, vol. 27, no. 2, pp. 161-164, Apr. 2007.
[7] J. Taylor, M. Lips, and J. Organ, "Citizen identification, surveillance and the quest for public service improvement: themes and issues," paper to the European Consortium of Political Research Privacy and Information: Modes of Regulation Joint Session Helsinki 7-12 May 2007.
[8] H. Kubicek, “Introduction: conceptual framework and research design for a comparative analysis of national eID Management Systems in selected European countries,” Identity in the Information Society, vol. 3, no. 1, pp. 5-26, Apr. 2010.
[9] P. White, “Identity Management Architecture: a new direction,” in 8th IEEE International Conference on Computer and Information Technology, 2008, pp. 408-413.
[10] B. L. Berg, "Qualitative research methods for the social sciences," Boston: Allyn and Bacon, 2001.
[11] K. Punch, Introduction to social research: quantitative and qualitative approaches. Thousand Oaks, California, Sage Publications, 1998.
[12] J. M. Corbin and A. Strauss, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks, California, Sage Publications, 1990.
[13] Unique Identification Authority of India, UIDAI strategy overview: creating a unique identity number for every resident in India. India: UIDAI, 2010.
[14] Unique Identification Authority of India, Aadhaar handbook for registrars. India: UIDAI, 2010.
[15] Identity and Passport Service, National Identity Service: delivery update 2009. London, England: Home Office, 2009.
[16] D. Blunkett, Identity Cards: the next steps. London, England: Home Office, 2003.
[17] Identity Cards Act London, England: House of Lords, 2006.
[18] R. Yunos, “Immigration services through the ages,” Brunei Times, 01-Feb-2009.
[19] Unique Identification Authority of India, Biometric design standards for UID applications. India: UIDAI, 2009.
[20] Identity and Passport Service, Strategic action plan for the National Identity Scheme. London, England: Home Office, 2006.
[21] London School of Economics, The Identity Project: an assessment of the UK Identity Cards Bill and its implications. London, England: LSE, 2005.
[22] J. Ashbourn, Practical biometrics: from aspiration to implementation. London, England: Springer, 2004.
[23] A. Rahaman and B.I.N.R. Department, “Interview - BruNIR.” 2010.
[24] Identity and Passport Service, National Identity Scheme Delivery Plan 2008: a response to consultation. London, England: Home Office, 2008.
[25] Unique Identification Authority of India, Envisioning a role for Aadhaar in the Public Distribution System. India: UIDAI, 2010.
[26] A. Razak, “Brunei, M’sia first in SEA to use IC as passport,” Brunei Times, 2007.
[27] S. Kent and L. Millett, IDs? Not that easy: questions about nationwide Identity Systems. Washington, United States: National Academies Press, 2002.
| | Brunei | India | UK |
|------------------------|-----------------|-----------------|-----------------------------|
| **Population Size** | 407,000 | 1,170,938,000 | 62,218,761 |
| **Date Implemented** | 2000 – today | 2010 – today | 2008 – 2010 (abolished) |
| **Purpose** | Multi-function smart card | Support poor in accessing services | Prevent terrorism, crime, benefit fraud, travel card |
| **Mandatory** | 18 and above | All citizens | Voluntary (mandatory for high risk personnel; airport staff, etc.) |
| **Unique ID Number** | Yes | Yes | Yes |
| **Identity Card** | Yes | No | Yes |
| **Smart Chip** | Yes | No | Yes |
| **Centralised Database** | Yes | Yes | Yes |
| **Authentication (Against Card)** | Yes | No | Yes |
| **Authentication (Against Database)** | No | Yes | Yes |
| **Record Authentications** | No | No | Yes (stored on Database) |
| **Information Read** | Third Parties can access biographical information on card and chip. | Third parties can confirm information accuracy (yes/no response). | Third parties can access biographical information on card and chip. Information can be pushed from the database to third parties. Security organisations can get access to all information on the database (through information commissioner). |
| **Information Write** | Third parties can write to the smart card | None | Information can be pushed from third parties to the database. |
Towards the Automatic Management of Vaccination Process in Jordan
Edward Jaser
Princess Sumaya University for Technology
King Hussein School for Information Technology
Amman, Jordan
email@example.com
Islam Ahmad
Royal Scientific Society
Information and Communication Technology
Amman, Jordan
firstname.lastname@example.org
Abstract—Rural communities in developing countries face many challenges due to their geographical and demographic conditions. This has been evident in many studies and surveys. Health issues are among the top-priority challenges on governments’ agendas. One important example is the vaccination of newborn babies and young children. Vaccination is generally considered the most effective method for preventing infectious diseases. The rate of non-vaccination is much higher among communities in rural and remote regions. Information and Communications Technology can play an important role in assisting the government to manage the process and help reduce the rate of non-vaccination. In this paper, we describe a mobile system developed to electronically manage the vaccination process. Early evaluation demonstrates the benefits of such a system in supporting government activities.
Keywords- information and communications technology for development; health systems; mobile application; vaccination; rural areas.
I. INTRODUCTION
Governments in developing countries, and even in developed ones, face challenges in relation to the services provided to communities living in rural and remote areas. Among these, health services occupy a high priority in government planning and funding. Quality health services are offered in the capital and big cities, mainly because those cities offer medical staff more opportunities to advance their careers, in addition to ease of life and many other advantages. This leaves rural areas and remote communities deprived of specialized and experienced medical staff. It is not difficult to imagine that many patients have to travel to the capital or other big cities to obtain needed treatment, or wait for the next medical day in their region (when a consortium of medical doctors visits rural areas). This has been a challenge even in advanced societies. A good example is [1], a study by Lenthall et al. describing the challenges facing rural Australia as a result of decreasing numbers of nurses and midwives.
Another characteristic challenge in remote areas, due to geography and dynamic demography, is coverage. Governments face a daunting task in reaching those communities with awareness information, health warnings, medical specialists’ visits and other events. In the case of Jordan, the government and related NGOs spend considerable budget to produce and print leaflets and to produce TV and radio content. However, the question remains about the efficiency of coverage among the intended population.
Information and Communication Technologies (ICTs) are now widely considered by developing countries as the motor of growth, the driver of efficiency and effectiveness, and the tool to enhance human development. With the advancement of ICTs and the Internet, communication and web-based technologies can be exploited to address many challenges related to improving coverage and obtaining much-needed accurate statistics and information.
In recent years there has been concrete evidence of the impact of social networking websites on many aspects of life. An obvious and recent example is the current events in the Middle East and North Africa. Many claim major roles for social networking tools such as Facebook and Twitter in the dynamics of these events [2]. These tools are changing the way people communicate, receive and exchange information. Such tools easily attract users as they are discreet, connect large numbers of individuals and eliminate middlemen. While the most popular networking websites are social in nature, professional networking websites can also be used as an efficient and cost-effective tool to tackle issues and problems in society.
Many ICT interventions have been introduced to address social challenges, including those of rural communities. In [3], the authors addressed the role of mobile technology and its viability in enhancing productivity, fighting poverty, and improving social conditions in general. Jun [4] provided several examples from China of the social impact of mobile applications, such as in addressing employment.
One very important priority sector is health. As mentioned earlier, quality health services are concentrated in large communities, and adequate services or support are not available in rural areas. Health is an obvious sector that can benefit from the opportunities that technology offers, as shown in many studies. In a comprehensive study [5] carried out to assess the application of ICT in the health sector in terms of accessing information and disseminating awareness content in Uganda, Omona and Ikoja-Odongo concluded that there is a need to support and promote ICT as the most effective tool for health information access and dissemination. The opportunities and benefits of mobile and wireless technologies for healthcare service delivery, improving patient safety and reducing cost were also the subject of research by Ping Yu et al. [6]. The study
researched m-health solutions and the challenges of developing and deploying m-health applications. Maeda et al. [7] proposed a framework for mobile applications for health education and awareness.
Recognizing the important role ICT can play in improving the outreach and feedback of health services, we started a pilot project concerned with enhancing health services for women and children in remote and rural communities in Jordan. The aim of the project is to evaluate the impact of ICT on improving such services and compensating for the lack of experts and medical staff. The project contains a tool for medical practitioners to interact with the public regardless of geographical proximity. The system allows contributions from medical doctors, medical students, nurses, pharmacists and other medical personnel in Jordan to assist stakeholders (whether doctors or patients) with questions related to health issues. It also allows interaction between users (patients) themselves to form common-interest support groups. The system’s information channels, such as mobile phones, allow such groups access to health information in a cost-effective manner.
In this paper, we report on one module of the project that recorded encouraging results, qualifying it to be adopted nationally. The module concerns the management of the vaccination process for newborn babies and children. The importance of this module comes from its impact on health: most children who are not appropriately immunized are at risk of serious conditions. The automatic management of vaccination is therefore a necessary application.
II. THE CHALLENGE OF VACCINATION
Ever-changing vaccination schedules can be confusing for providers (clinics, doctors, hospitals) and parents. The Ministry of Health maintains a vaccination program that is updated and checked regularly. As soon as a new baby is born, parents are given a card with the vaccination schedule. It is then the responsibility of the parents to follow the dates of each vaccine. This may not be an issue in urban communities, with all the existing electronic gadgets to remind people. Compounding the problem, however, is the fact that vaccination records are often scattered. In rural areas in particular, the process is manual and records are kept on cards given to parents and on papers kept at local clinics. When records are scattered, it is difficult to assess whether a patient is up to date. This makes it harder for parents in rural and remote communities to maintain the process, especially as it stretches over a long period of time.
On the other hand, it is also the responsibility of the clinic or the doctor to make sure that enough vaccines are stored in the local clinic to cover the needs of the area they serve. Clinics usually have no statistical information on the volume of vaccines needed daily or weekly. This can also lead to other challenges, such as storage: clinics in rural communities are occasionally not equipped to store vaccines for long periods of time.
Recent research has demonstrated specific, practical procedures medical staff can adopt to improve effectiveness in immunizing children, including: 1) sending parents reminders for the next vaccination; 2) using printed material during visits to local clinics to remind parents and staff of the importance of vaccination and of the vaccination table; and 3) keeping statistical records on immunization rates to support improvement efforts.
Vaccination coverage in Jordan is relatively high nationwide [8]. Statistics show that the rate is higher in urban areas than in rural ones.

*Figure 1: Vaccination coverage in Jordan (source: Jordan 2007 Population and Family Health Survey)*
Figure 1 shows that vaccination rates among communities in rural areas (such as the governorates of Ma’an, Tafila and Aqaba in the south of Jordan) are lower than in other governorates. This can be attributed to a lack of effective awareness, illiteracy, a lack of reminders and a lack of medical personnel. Any intervention should address these issues.
III. SYSTEM REQUIREMENTS
Given the challenges mentioned in the previous section, an ICT intervention to manage the vaccination process would contribute to improving the vaccination rate, especially among residents of rural and underprivileged areas. The high-level requirements of the system are to:
1) Register newborn babies with the system and calculate their vaccine dates based on the vaccination schedule maintained by the Ministry of Health.
2) Issue reminders to parents of the date and type of each due vaccine.
3) Issue reminders and volume information to clinics on the number and types of vaccinations they will be expected to perform on a specific day, so that they can secure the needed quantities.
4) Provide awareness information to parents to help them understand the importance of the vaccines for their children.
To identify the interaction requirements of the system, we need to understand the main stakeholders and how they will use the system. There are two main stakeholders, parents and clinic staff, described below:
**Parents:** Parents in rural communities are usually not exposed to technology such as the internet and the tools that come with it. In a survey conducted prior to the design of the ICT intervention, to investigate the best way for the system to interact with these users, it was noticed that more than 90% of the surveyed users own at least one mobile phone. This is quite a significant penetration rate. Most of these phones are basic ones, used for making and receiving calls and for communicating through text messages. Our conclusion was that any project should have a mobile component to communicate information to the users.
**Clinics:** When we mention clinics serving rural and remote communities, we are assuming basic infrastructure: no internet or computers. Some of the clinics are mobile clinics providing services to the moving population (Bedouins), and they tend to have minimal equipment. Any solution to be adopted nationwide should take the cost factor into consideration, i.e., a minimum is to be spent on infrastructure. Clinic staff should have the option of interacting with the system using the internet if possible, or using smart mobile devices, which are cheap to acquire and install.
IV. MOBILE APPLICATION
Given the identified requirements, we designed the vaccination management system around a clearly identified scenario and a simple workflow.
**Workflow:** Clinic staff register newborn babies with the system. This can be done either over the internet (website) or, if the infrastructure is not there, using a smart phone over 3G networks. The application can be downloaded and used on any Java-enabled phone. The clinic personnel capture basic information about the child (name, date of birth, weight, height and contact details of the parents) and send the information to the system as an SMS message. Once the information for a new child has been received and stored, the system uses the vaccination schedule issued by the Ministry of Health to calculate the dates of the vaccines for the registered child and stores them in the database. The system continuously checks the database to produce a report of which children are due to be vaccinated on a given day. The system automatically sends the parents a reminder of the next vaccine for their child and of the clinic at which it will be given. The system also sends statistical and volume information to clinics about the children and vaccines expected on a certain date. When the vaccination of a child takes place, the clinic personnel send this information to the system to keep the child’s record up to date. Figure 2 summarizes the workflow of the system; a sketch of the registration and due-date calculation appears below.
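As an illustration of this workflow, here is a minimal Python sketch of the registration step and the due-date calculation. The SMS format and the schedule entries are assumptions made for the example: the paper does not publish the message format, and the real schedule is maintained and updated by the Ministry of Health.

```python
from datetime import date, timedelta

# Hypothetical extract of a vaccination schedule: vaccine name mapped to
# the child's age, in weeks, at which the dose is due (illustrative values).
VACCINATION_SCHEDULE = {
    "BCG": 0,
    "Hepatitis B (1st dose)": 0,
    "Pentavalent (1st dose)": 6,
    "Pentavalent (2nd dose)": 10,
    "Pentavalent (3rd dose)": 14,
    "Measles": 39,
}

def parse_registration_sms(sms: str) -> dict:
    """Parse a registration SMS of the assumed form
    'REG;name;YYYY-MM-DD;weight_kg;height_cm;parent_phone'."""
    _, name, dob, weight, height, phone = sms.strip().split(";")
    return {
        "name": name,
        "date_of_birth": date.fromisoformat(dob),
        "weight_kg": float(weight),
        "height_cm": float(height),
        "parent_phone": phone,
    }

def due_dates(date_of_birth: date) -> list:
    """Compute every (vaccine, due date) pair for a child from the schedule."""
    return [
        (vaccine, date_of_birth + timedelta(weeks=age_weeks))
        for vaccine, age_weeks in VACCINATION_SCHEDULE.items()
    ]

child = parse_registration_sms("REG;Lina;2012-03-01;3.2;50;+962790000000")
for vaccine, due in due_dates(child["date_of_birth"]):
    print(f"{child['name']}: {vaccine} due on {due.isoformat()}")
```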
The database maintains an up-to-date record of vaccinations, to be used by decision makers at the ministry for reporting and planning; a sketch of the daily check that drives the reminders follows.
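The daily batch job can likewise be sketched. The single-table schema, the SMS gateway and the clinic notification channel below are hypothetical stand-ins, since the actual database design and messaging infrastructure are not published in the paper.

```python
import sqlite3
from datetime import date

def send_sms(phone: str, text: str) -> None:
    """Stand-in for the SMS gateway used in deployment (hypothetical)."""
    print(f"SMS to {phone}: {text}")

def notify_clinic(clinic_id: int, text: str) -> None:
    """Stand-in for the clinic notification channel (hypothetical)."""
    print(f"Clinic {clinic_id}: {text}")

# Invented single-table schema for illustration; the real design is richer.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE vaccination (
    child_name TEXT, parent_phone TEXT, clinic_id INTEGER,
    vaccine TEXT, due_date TEXT, done INTEGER DEFAULT 0)""")
conn.execute(
    "INSERT INTO vaccination VALUES ('Lina', '+962790000000', 7, "
    "'Pentavalent (1st dose)', ?, 0)", (date.today().isoformat(),))

today = date.today().isoformat()

# 1) Parent reminders: one message per child due today and not yet vaccinated.
for name, phone, vaccine in conn.execute(
        "SELECT child_name, parent_phone, vaccine FROM vaccination "
        "WHERE due_date = ? AND done = 0", (today,)):
    send_sms(phone, f"Reminder: {name} is due for {vaccine} today.")

# 2) Clinic volume report: expected doses per vaccine at each clinic.
for clinic_id, vaccine, count in conn.execute(
        "SELECT clinic_id, vaccine, COUNT(*) FROM vaccination "
        "WHERE due_date = ? AND done = 0 GROUP BY clinic_id, vaccine", (today,)):
    notify_clinic(clinic_id, f"Expect {count} dose(s) of {vaccine} today.")
```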

Figure 3 depicts the architectural design of the system, the input/output channels and support information. Figure 4 provides more details about the process and the modules in the system.
**Objectives:** The main aim behind the design of the pilot system has been to measure the advantages and impact of ICT interventions in enhancing the vaccination process and supporting health clinics and hospitals in rural and remote communities. During the life of the project we attempted to answer the following key research questions: (1) Could ICTs contribute to the enhancement of the general health of rural and remote communities? and (2) What is the minimum infrastructure needed for the deployment of the automatic vaccination management system?
Overall, and in spite of the advances in mobile application development and innovation, certain challenges remain: (i) concrete evidence: robust analysis and evaluation tools and standards for mobile health interventions are needed to help design better and more effective services; (ii) legislation: clear policies and laws are needed to govern ICT interventions and their deployment; (iii) sustainability: sustainability is an important issue for ICT4D projects, and there should be a clear understanding of how to fund these projects from both the public and private sectors and how to continue providing resources; and (iv) capacity building: there should be a focus on building the competency of various stakeholders in ICT usage, and on mobilizing resources to bridge the technological gap facing rural communities.
VI. CONCLUSION AND FUTURE WORK
Deploying the pilot vaccination management system in selected clinics serving remote communities demonstrated the impact ICT interventions can have on enhancing services and their outreach. More research and investigation is needed to achieve smooth and effective online communication between patients and health workers. Future work will focus on widening the evaluation to include more clinics and regions in rural Jordan, to reach a working system that can be adopted nationally.
REFERENCES
[1] Sue Lenthall, John Wakerman, Tess Opie, Sandra Dunn, Martha MacLeod, Maureen Dollard, Greg Rickard, and Sabina Knight; “Nursing workforce in very remote Australia, characteristics and key issues”. Australian Journal of Rural Health, 19; pp. 32–37. 2011.
[2] Marko Papic and Sean Noonan; “Social Media as a Tool for Protest” web article http://www.cfr.org/democracy-and-human-rights/stratfor-social-media-tool-protest/p23994, visited on the 20th January 2012.
[3] Lisa Cespedes and Franz Martin; “Mobile Telephony in Rural Areas: The Latin American perspective”; The i4d print magazine; Vol. VII No. 9 pp. 10-11, January-March 2011.
[4] Liu Jun; “Mobile Social Network in a Cultural Context”. In E. Canessa and M. Zennaro; m-Science: Sensing, Computing and Dissemination. ISBN:92-95003-43-8. Pp. 211-240. 2010.
[5] Walter Omona and Robert Ikoja-Odongo; “Application of information and communication technology (ICT) in health” Journal of Librarianship and Information Science; 38:pp. 45-55. 2006.
[6] Ping Yu, Mingxuan Wu, Hui Yu, and GC Xiao; “The Challenges for the Adoption of M-Health” IEEE International Conference on Service Operations and Logistics, and Informatics, 2006. pp. 181-186, 21-23 June 2006.
[7] Toshiyuki Maeda, Tadayuki Okamoto, Yae Fukushige, and Takayuki Asada; “Mobile Application Framework for Health Care Education” 7th IEEE Consumer Communications and Networking Conference, pp. 1-2. January 2010.
[8] Department of Statistics; “Jordan 2007 Population and Family Health Survey: Key Findings”. Amman, Jordan; 2007.
Three Dimensional Printing:
An introduction for information professionals
Julie Marcoux
Bibliothèque Champlain
Université de Moncton
Moncton, Canada
email@example.com
Kenneth-Roy Bonin
School of Information Studies
University of Ottawa
Ottawa, Canada
firstname.lastname@example.org
Abstract - Advanced by some as the next great emerging technology to enjoy overwhelming market penetration, three dimensional (3D) printing could have significant information implications, notwithstanding limited coverage in the information science literature. This review of complementary material from other sources provides the introductory definitions, technical descriptions and indications of future developments relevant to information professionals.
Keywords - 3D printing; three dimensional printing; additive fabrication; digital fabrication; rapid prototyping.
I. INTRODUCTION
Three dimensional (3D) printing has the potential to impact the transmission of information in ways similar to the influence of such earlier technologies as photocopying and telefacsimile. This review identifies sources of information on 3D printing, its technology, required software and applications. Although the subject initially may seem to be of particular interest to engineers, efforts have been made to identify resources relevant to exploring the implications of 3D printing technologies for those working in the information sciences: librarians, archivists, museum collection specialists, and managers of documentation centers and information services in the public and private sectors. Accordingly, the following presentation provides definitions, reports the results of a literature review, explains the technology, and outlines directions for future work.
II. 3D PRINTING DEFINED
All sources identified through a literature search on the subject of 3D printing shared the common characteristic of providing a definition. The fundamental idea varies little from one source to another. Most agree that 3D printing consists of downloading a blueprint or a special computer file to a printer capable of ‘printing’ sophisticated three-dimensional objects through an additive process that ‘prints’ layers of material [1]-[10].
Bradshaw et al. [3] distinguish three fundamental methods for fabricating objects: 1. cutting the object out of a block of material; 2. creating a mold and then filling it to create the object; or 3. adding shapes together to make an object. The technology of 3D printing falls into the last category, in which objects with moving parts can be created, something impossible using the other two methods alone [3] [9] [11]. There is, however, considerable diversity in actual 3D printing production.
The different techniques have led to certain semantic disputes. Wiegler [10] quotes a few researchers who feel that the term should be reserved for the particular technique created by Zcorp, the company credited with a cheaper 3D printing process in which a nozzle similar to a ‘glue gun’ is used to print out objects. Others believe that the term ‘3D printing’ should be used generically, to include all types of additive manufacturing, because it is easily understandable by the general public [10]. It is this latter connotation which will be adopted in this review, employing the term ‘3D printing’ to encompass all techniques that lead to a three dimensional object being printed, including such variations as ‘selective laser sintering’ and ‘fused deposition modeling’, which will be explained later.
III. LITERATURE REVIEW
As summarized by Weinberg, “the line between a physical object and a digital description of a physical object may (...) begin to blur. With a 3D printer, having the bits is almost as good as having the atoms” [9]. Librarians and information professionals, well aware of the important contributions of other electronic technologies to the disintermediation of information, should be predisposed to understand the implications of 3D printing. To assess the prevalence of articles on 3D printing in the information science literature, a three-phase search for relevant articles was conducted. The first phase consulted three information science databases: Library and Information Science Abstracts; Library, Information Science and Technology Abstracts; and Library Literature & Information Science Full Text. The search terms employed were ‘3D print*’, ‘three-dimens* print*’, ‘three dimens* print*’ and ‘tridimens*print*’. When search results proved disappointing, a more general literature search was conducted in the second phase of the literature review. This involved a greater variety of sources, including reports and conference proceedings, newspapers, industry publications,
electronic media and online information, and articles in engineering databases. When identified, relevant references were subsequently accessed to obtain a more comprehensive picture of the subject. Supplementary synonyms for the term ‘3D printing’ found in this broader literature were carefully noted so that the initial three information science databases could be interrogated again in a third phase of the literature review.
Results of the three literature searches reveal that most of the relevant material on 3D printing has been published within the last six years, including many sources less than two years old. As a technology, however, 3D printing has been around for some time, and commercial printers “have existed for years” [10] [12] [13]. Bradshaw et al. [3] confirm that the first patent was filed in 1977. One reason for the recent nature of most of the literature is that prices for 3D printers have dropped sufficiently that individuals can now afford to purchase their own equipment [1] [4] [6] [13]. This has encouraged greater interest in the possibilities of the technology.
The various sources of information frequently approach 3D printing as a subject quite differently. Newspaper articles provide general overviews of the subject, the ‘meta subjects’ and related topics. While they describe printing techniques, they rarely employ scientific terms such as ‘fused deposition modeling’, for example. This can lead to confusion when the full range of 3D printing possibilities is not described. Reports provide more in-depth and accurate descriptions. Blogs generally post the most recent developments, but commercial blogs unsurprisingly tend to concentrate on what specific 3D printing companies have to offer. Manufacturing company websites tend to provide a considerable amount of information about 3D printing as a process, including detailed technical information. Those that print from files created by customers even provide links to web pages for non-commercial software. Unfortunately, meta subjects and related topics are rarely covered. Engineering articles range from a tight focus on elements of 3D printing as a process, such as improving the viscosity of material to be printed, to more general considerations of meta subjects. The *Rapid Prototyping Journal* is a particularly valuable source of engineering articles in this regard.
Given the information implications of the resources identified through the more general literature search, the initial search of the information science databases was repeated to take into account the newly identified terms. This search yielded only four additional results. Two of these articles [14] [15] were by the same author, giving brief assessments of one commercial 3D printer and of a particular piece of modeling software. Another [16] discussed combining two databases to obtain a 3D printable file of the outlines of buildings in Norway. These three articles were narrowly focused on their specific topics, and none discussed the information implications of 3D printing.
Although also narrowly focused, the fourth article [17] does describe an application of 3D printing with information implications. It discusses the use of 3D scanners and a 3D printer to create replicas of wooden stamps. The article concludes by explaining that the stamps are now easier to share with other libraries and museums, an illustration of a potential contribution 3D printing might make to extending access while preserving original archival and museum materials.
Articles relevant to the information implications of 3D printing were also discovered in the more general search conducted in the second phase of the literature review. A kinematic library was identified which has made 3D printable files of kinetic models available online [18]. The metadata scheme developed for this library might have served as a good starting point for reflecting on the classification and cataloguing of 3D printable files, but unfortunately, it does not appear to be systematically maintained, and many of the supplied links are broken [19].
Two engineering articles also found in the same general literature search outlined attempts to create classification schemes for 3D printing. Ingole et al. [20] make relevant observations about the need for more formalized standards for 3D printing and discuss some of the difficulties associated with the proprietary standards of commercial machines. Certain components of the proposed classification scheme, such as the use of single digits to code subjects, would have benefited from input by information professionals. The second engineering article, by Mortara et al. [21], demonstrates an awareness of such important classification concepts as faceted classification, but the proposed classification scheme is clearly aimed at engineers and would not be easy to use for publicly accessible repositories.
IV. TECHNICAL OVERVIEW
This section explains technical aspects of 3D printing found in the reviewed sources. The utility of 3D printing rests in its capacity to cheaply print complex objects, such as already-linked chain mail, using a variety of materials [2]. Certain accounts concentrate exclusively on a single 3D printing application; Seulin’s article [17] on wooden stamps is an example. While these resources explain the process involved in creating a 3D object, they often focus on a specific printing technique and on a specific 3D printer.
Fields which have used 3D printing to create objects include aeronautics, architecture, the automotive industry, art, dentistry, fashion, food, jewelry, medicine, pharmaceuticals, robotics and toys [2] [12] [22]. 3D printable files of physical models of educational concepts would also interest academic and school libraries. Knapp et al. [23] explain that commercial models “are expensive”, and give the example of “an anatomical model of the heart” which can “cost up to $600.” A 3D printer can be purchased for under $1,000 and materials for “even the largest model (...) would likely cost under $50” [23]. As noted previously,
the preservation of artifacts is another potential use of 3D printing that would be of interest to information professionals [7] [23].
Both open-source and proprietary software may be used to acquire digital files of 3D objects during the data acquisition phase of production. Lipson and Kurman [6] credit “the emergence of cheaper, and increasingly accessible computer aided design software (CAD)” for the increasing interest in 3D printing. A number of authors mention the use of 3D scanning, which uses basic cameras and freely provided software, rather than commercial systems, to obtain digital data of existing objects [2] [9] [17] [22].
Authors also confirm that there is no agreement on file formats for 3D printing [6] [20]. The types of files that currently exist include PLY, ObjDF, RP, STL, VRML and ZPR files [4] [14] [23]. Inacu et al. [4] believe that STL “is, and for the foreseeable future, will be the standard mode of data exchange in the Rapid Prototyping industry.” Software used to create printable 3D files includes modeling software, file converters, model ‘repair’ software to clean up files, and path-generating software. A short illustration of the ASCII STL structure is given below.
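To give a sense of the STL format discussed above, the Python sketch below writes a one-facet ASCII STL file. The geometry is a placeholder; real meshes simply repeat the facet block once per triangle.

```python
# Write a minimal ASCII STL file: a mesh is a list of triangular facets,
# each described by a surface normal and three vertices.
triangle = {
    "normal": (0.0, 0.0, 1.0),
    "vertices": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
}

with open("triangle.stl", "w") as f:
    f.write("solid example\n")
    nx, ny, nz = triangle["normal"]
    f.write(f"  facet normal {nx} {ny} {nz}\n")
    f.write("    outer loop\n")
    for x, y, z in triangle["vertices"]:
        f.write(f"      vertex {x} {y} {z}\n")
    f.write("    endloop\n")
    f.write("  endfacet\n")
    f.write("endsolid example\n")
```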
Help is also available online for anyone interested in creating printable 3D files. Google Sketchup [24], for example, does not require a user fee for a basic level of service. Turning a model into a printable 3D file can involve following instructions from a free tutorial provided by a for-profit company called Shapeways [25].
Both Cornell University and the University of Bath have designed open-source 3D printers which are widely recognized for making all 3D printers more affordable: Fab@home and RepRap [7] [13]. To acquire one of these open-source 3D printers, interested parties obtain the basic building materials, follow construction instructions shared on Wikis, and then purchase printing supplies [7]. Bath even allows the commercial resale of its printers. Lipson and Kurman [6] note that Creative Commons initiatives have been inspired to work on open-source hardware licenses. Fab@home’s ultimate objective is to build a machine capable of producing “complete, integrated, functional electromechanical devices” [7]. The goal for RepRap is to enable it to replicate itself by printing out all parts for a new RepRap [6].
The cost of a RepRap printer in 2010 was about $525, and it could replicate 50% of itself [13]. The operating cost of 3D printing materials can be less than $1 per cubic inch [23]. Caution must be exercised when quoting published costing information, however, since prices have been dropping so fast that they are quickly outdated. Wiegler [10] cites the example of a professor who bought a commercial 3D printer in 2005 for $31,000, noticed that the price had dropped to $19,000 in 2008, and speculated that it would drop to $10,000 by 2013.
Commercial printers that use more advanced techniques to print objects are usually equipped with proprietary software [14]. Companies that sell 3D printers include 3D Systems, Objet Geometries, Solido LTD, Stratasys and Z Corp [2] [18]. Lipson and Kurman report that both Hewlett Packard and Xerox “are investing in 3D printing research and technology development” [6].
Several types of material can be used to print objects. Various printers handle a variety of materials, and some can even produce objects using more than one type of material. While the most commonly used materials are plastic, metals and ceramics, more exotic materials, such as chocolate, may also be used [6] [7].
There are two major variations in printing techniques. In one instance, material is deposited on a surface, and the depositing implement of the printer pulls up after a layer has been deposited in order to deposit the next layer [2]. Support material might be put in place to protect the structural integrity of the object, but must be removed later [15]. In another instance, a layer of powder or liquid is present on the printing surface. A binding technique is used to change parts of the powder or liquid in the first layer of the object. The printing surface is lowered, more powder or liquid is added, and the process is repeated to form the next layer [26]. The remaining material supports the weight of the object as it is built. Binding techniques include adding glue to material, adding an ‘ink’ that solidifies when exposed to ultra-violet light, or using a laser to bind material [2] [6].
Although different techniques have specific names, a semantic shift in the terms used to describe generic 3D printing has resulted in a number of variations. Selective laser sintering, fused deposition modeling and stereolithography are among the most often mentioned techniques [4] [20] [26] [27]. Stereolithography uses a liquid polymer bonded by a laser; laser sintering uses a powder which is also bound by a laser; while fused deposition modeling simply deposits material on a printing surface [28]-[30]. Certain techniques also require post-processing of the printed objects in order to solidify them or to improve their appearance [26] [31]. Post-processing steps can include ‘bed manipulation’, which entails forcing a change to all the material that will only have an effect on bonded parts, removing powder or support materials, heating the object, or dipping it into something else (infiltration) [26].
V. CONCLUSION AND FUTURE WORK
Not all technical information about 3D printing could be shared in this introduction to the subject. Any documentation of this technology, very much a work in progress, must also recognize that not all authors agree on the likelihood of 3D printing gaining wider dissemination into the homes of individuals [10]. Also, as a still-emerging technology, 3D printing is not without its problems, such as slow printing speeds [6]. Nevertheless, as prices decrease, the number of 3D printers sold worldwide has been growing steadily. And as market penetration increases, the information implications of 3D printing technologies will
expand as well. These include legal considerations and parallels associated with the spread of desktop computers [6] [7] [9]. Published works related to the information economy [32] [33], the democratization of manufacturing [34], and on the concept of the 'long tail' [2] will also assume greater significance [6].
The lesson learned from this initial effort to introduce 3D printing to information professionals is that explanations of the technology will not, as yet, be found in their professional literature. Hopefully, however, as they begin to appreciate the potential of desktop 3D printing technology, information professionals will have more to contribute to a greater understanding of its implications.
REFERENCES
[1] American Public Media. “Brave new world of 3D printing.” [Podcast], Marketplace Tech Report, November 29, 2010. Retrieved from http://marketplace.publicradio.org/display/web/2010/11/24/tech-report-the-brave-new-world-of-3d-printing/ 25.11.2011
[2] A. Anderson, “A Whole New Dimension: Rich Homes Can Afford 3D Printers,” The Economist, November 15, 2007. Retrieved from http://www.economist.com/node/10105016?story_id=10105016 25.11.2011
[3] S. Bradshaw, A. Bower, and P. Haufe, “The Intellectual Property Implications of Low-cost 3D printing,” SCRIPTeD, vol. 7, (1), 2010, pp. 5-31, doi: 10.2966/script.070110.5.
[4] C. Inacu, D. Inancu, and A. Staniciu, “From CAD Model to 3D Print Via ‘STL’ File Format,” Fiabilitate si Durabilitate = Fiabilité & Durabilité, vol. I, 2010, pp. 73-80.
[5] G. Lacey, “3d Printing Brings Designs to Life,” Technology Education, 2010, pp. 17-19.
[6] H. Lipson and M. Kurman, Factory at Home: The Emerging Economy of Personal Manufacturing. Washington: U.S. Office of Science and Technology, 2010, n.p.
[7] E. Malone and H. Lipson, “The Factory in Your Kitchen,” 2007 World Conference on Mass Customization & Personalization (MCPC), Cambridge, MA: MCPC 2007. Retrieved from http://ccsl.mae.cornell.edu/papers/MCPC07_Malone.pdf 24.11.2011
[8] D. Smock, “Lower Prices Drive 3-D Printer Market”, Design News, May, 2010, n.p.
[9] M. Weinberg, “It Will Be Awesome if They Don’t Screw It Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology,” Public Knowledge, November, 2010, pp. 1-15. Retrieved from http://www.publicknowledge.org/files/docs/3DPrintingPaperPublicKnowledge.pdf 24.11.2011
[10] L. Wiegler, “Jumping Off the Page,” Engineering & Technology, vol. 3, no. 1, 2008, pp. 24-26.
[11] C. Major and A. Vance, Desktop manufacturing [Video file]. The New York Times, 2010. Retrieved from http://video.nytimes.com/video/2010/09/13/technology/1248068999175/desktopmanufacturing.html 24.11.2011
[12] D. L. Bourell, M. C. Leu, and D. W. Rosen, Roadmap for Additive Manufacturing: Identifying the Future of Freeform Processing. Austin, TX: University of Texas, Laboratory for Freeform Fabrication, 2010.
[13] G. Stemp-Morlock, “Personal Fabrication: Open Source 3D Printers Could Herald the Start of a New Industrial Revolution,” Communications of the ACM, vol. 53, 2010, no. 10, pp. 14–15, doi:10.1145/1831407.1831414.
[14] S. Ellerin, “The Art and Science of 3D Printing,” Emedia, vol. 17, 2004, no. (5), pp. 14–15.
[15] S. Ellerin, “Strata 3D Pro,” EMedia, vol. 17, 2004, no. 6, pp. 28–29.
[16] J. O. Nygaard, “Semiautomatic Reconstruction of 3D Buildings from Map Data and Aerial Laser Scans,” Journal of Digital Information Management, vol. 2, 2004, no.4, pp. 164–170.
[17] R. Seulin, C. Stolz, D. Fofi, G. Million, and F. Nicolier, “Three Dimensional Tools for Analysis and Conservation of Ancient Wooden Stamps,” Imaging Science Journal, vol. 54, 2006, pp. 111-121, doi: 10.1179/1743313106X98755.I.
[18] K. Walker and J. M. Saylor, “Kinematic Models for Design Digital Library,” D-Lib Magazine, vol. 11, 2005, no. 7. Retrieved from http://www.dlib.org/dlib/july05/07featuredcollection.html 24.11.2011
[19] CTS Metadata Services, KMODDL application profile, 2004. Retrieved from http://kmoddl.library.cornell.edu/aboutmeta2.php 24.11.2011
[20] D. Ingole, A. Kuthe, T. Deshmukh, and S. Bansod, “Coding System for Rapid Prototyping Industry,” Rapid Prototyping Journal, vol. 14, 2008, no. (4), pp. 221-233.
[21] L. Mortara, J. Hughes, P. S. Ramsundar, F. Livesey, and D. R. Probert, “Proposed Classification Scheme for Direct Writing Technologies,” Rapid Prototyping Journal, vol. 15, 2009, no. 4, doi:10.1108/13552540910979811.
[22] S. Summit, Ok, so you can create anything. Now what? [Video file] Singularity University: Preparing Humanity for Accelerating Technological Change, 2010. Retrieved from http://www.youtube.com/watch?v=6lJ8v1Id4HF8 24.11.2011
[23] M. E. Knapp, R. Wolff, and H. Lipson, Developing printable content: A repository for printable teaching models, n.d.. Retrieved from http://www.3dprintables.org/printables/images/d/d7/3Dprintables_paper_final.pdf 24.11.2011
[24] Google Sketchup, 3D modelling for everyone, n.d. Retrieved from http://sketchup.google.com/ 24.11.2011
[25] Jed, SketchUp STL export tutorial, In Shapeways, n.d. Retrieved from http://www.shapeways.com/tutorials/sketchup_3d_printing_export_to_stl_tutorial 24.11.2011
[26] B. R. Utela, D. Sorti, R. L. Anderson, and M. Ganter, “Development Process for Custom Three Dimensional Printing (3DP) Material Systems,” Journal of Manufacturing Science and Engineering, vol. 132, pp. 1-9, doi: 10.1115/1.4000713.
[27] I. Serban, I. Rosca, and C. Druga, “A Method for Manufacturing Skeleton Models Using 3D Scanning Combined with 3D Printing.” In Annals of DAAAM for 2009 and Proceedings of the 20th International DAAAM Symposium, vol. 20, no. 1, Vienna, Austria: DAAAM International, 2009, pp. 1319-1320.
[28] Materialise, About fused deposition modeling, n.d. Retrieved from http://www.materialise.com/fused-deposition-modelling 24.11.2011
[29] Materialise, About our laser sintering prototyping service. Retrieved from http://www.materialise.com/laser-sintering-prototyping 24.11.2011
[30] Materialise, About our stereolithography prototyping service. Retrieved from http://www.materialise.com/Stereolithography 24.11.2011
[31] T. Ringdahl, “3d Printer Lets Designers Run with Shoe Design,” Machine Design.com, March 19, 2009, pp. 58-59.
[32] A. Fenner, “Placing Value on Information,” Library Philosophy and Practice, vol. 4, 2002, no. 2, pp. 1-6.
[33] C. M. Gayton, “Legal Issues for the Knowledge Economy in the Twenty-First Century,” Vine, vol. 36, 2006, no. 1, pp. 17-26.
[34] E. von Hippel, Democratizing Innovation, Cambridge, MA: MIT Press, 2005.
Unsupervised Personality Recognition for Social Network Sites
Fabio Celli
CLIC-CIMeC
University of Trento
Italy
email@example.com
Abstract—In this paper, we present a system for personality recognition that exploits linguistic cues and does not require supervision for evaluation. We run the system on a dataset sampled from a popular Social Network: FriendFeed. We adopted the five classes from the standard model known in psychology as the “Big Five”: extraversion, emotional stability, agreeableness, conscientiousness and openness to experience. Making use of the linguistic features associated with those classes, the system generates one personality model for each user. The system then evaluates the models by comparing all the posts of one single user (users that have only one post are discarded). As evaluation measures, the system provides accuracy (a measure of the reliability of the personality model) and validity (a measure of the variability of a user’s writing style). The analysis of a sample of 748 Italian users of FriendFeed showed that the most frequent personality type is represented by the model of an extravert, insecure, agreeable, organized and unimaginative person.
Keywords—Social Network Sites; Personality Recognition; Information Extraction; Natural Language Processing.
I. INTRODUCTION AND RELATED WORK
Personality is a crucial aspect of social interaction. From a computational perspective, it can be very useful for marketing and for interesting tasks such as stylometry and sentiment analysis. Recent studies showed that there is a connection between the personality of individual users and their behavior online (see Amichai-Hamburger and Vinitzky [1]). Social Network Sites (SNSs henceforth; see Boyd and Ellison [2] for definitions and history) are huge, virtually infinite, corpora where authors (users) and sentences (posts) are found together. Many scholars have used data from social networks for personality classification. In 2006, a pioneering work by Oberlander and Nowson [9] classified four traits of blog authors’ personality using n-grams as features. Some very recent works, such as Quercia et al. [10] and Golbeck et al. [4], predicted the personality of users from social network data. In particular, Golbeck et al. predicted personality from users’ profiles on Facebook using machine learning techniques. Golbeck’s work is supervised because it required that subjects complete a personality test for evaluation. Here we introduce a novel technique for personality recognition that does not require subjects.
In the following section, we will present a system that builds on the fly one personality model for each user in a corpus in an unsupervised way and performs automatic evaluation of the models comparing all of his/her posts. Then, in Section 3, we will present the results of the analysis of personality on FriendFeed. In Section 4, we will conclude introducing possible directions for future works.
II. UNSUPERVISED PERSONALITY RECOGNITION
The large amount of data available from Social Network Sites allows us to predict users’ personality from text in a computational way, but there are at least four nontrivial problems:
1) The definition of personality, which is a very fuzzy and subjective notion;
2) The annotation of personality in the data from SNSs, which would require personality judgements by the authors themselves or by other native speakers;
3) The construction of one model for each user in the dataset;
4) The evaluation of personality models.
In the next paragraphs, we discuss the solutions we adopted to those problems in the unsupervised personality recognition system.
A. Definition of Personality
Psychologists describe personality along five dimensions known as the “Big Five” (see Goldberg [5]), a model introduced by Norman in 1963 [8], obtained from factor analysis of personality description questionnaires that has become a standard over the years. The five dimensions are the following:
- Extraversion (E) (sociable vs shy)
- Emotional stability (S) (calm vs insecure)
- Agreeableness (A) (friendly vs uncooperative)
- Conscientiousness (C) (organized vs careless)
- Openness (O) (insightful vs unimaginative)
Those dimensions can be represented computationally as continuous numerical variables with 2 poles: one positive
(1) and one negative (0). Once we have the numerical values for each attribute (one attribute is one dimension in the “Big Five”), we can easily determine whether a user has a personality trait (y), lacks it (n), or whether we have no information about that trait (o). From this representation, we can formalize a personality model for each user simply by taking the majority class for each attribute/dimension over all the posts the user made. In the end, personality models are formalized as strings of five characters, one per attribute, each of which can take three possible values: positive (y), negative (n) or balanced (o). For example, the string ynooy is the model of an extravert, nervous and open-minded user.
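For illustration, the majority-class computation can be sketched in a few lines (here in Matlab; the per-post model strings are hypothetical). Note that in this sketch a tie between classes resolves to the character with the smallest code, a simplification of the scheme described above:

```matlab
% Hypothetical per-post personality strings for one user, one per row
models = ['ynooy'
          'ynoon'
          'ynyoy'];
% Majority class per attribute = column-wise mode of the character codes
user_model = char(mode(double(models), 1))   % yields 'ynooy'
```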
B. Dataset
The dataset is a sample of 748 Italian FriendFeed users (1065 posts). It is a subset of the dataset sampled by Celli et al. [3]. It has been collected from FriendFeed’s public URL, where new posts are publicly available, and was already processed with a language identifier whose accuracy is 88%. This made the extraction of the Italian subset easier.
Our unsupervised system does not require direct annotation of the dataset, but just a set of correlations between linguistic factors and personality traits to build models. Mairesse et al. [7], Golbeck et al. [4] and Quercia et al. [10] all report sets of correlations between some cues and the dimensions of personality in the “Big Five”. In our system we used the set taken from Mairesse et al. because it is the largest one and the most focused on linguistic features.
C. Building the Personality Model
Mairesse et al. provide a long list of correlation coefficients between linguistic factors and personality traits. These coefficients were obtained from an essay corpus for which authors and external observers provided personality judgements following the “Big Five” model. In order to develop an unsupervised personality recognition system, we need to turn those coefficients into features that can be automatically extracted from text. Among the linguistic factors that correlate with certain aspects of personality there are some regarding topic (for example whether a person writes about job, leisure, music, other people), some regarding word usage (for example the frequency of the words used, the use of negative particles, first person pronouns, fillers, swear words) and some regarding psychological aspects (for example the age of acquisition of the words used, the length of the words used, the expression of positive and negative feelings). The factors are assumed to be valid for Western culture. We selected and adapted 22 features from Mairesse et al. They are:
1) **all punctuation** (ap): the count of . , ; : in the post,
2) **commas** (cm): the count of , in the post,
3) **reference to other users** (du): the count of the pattern @ in the post,
4) **exclamation marks** (em): the count of ! in the post,
5) **external links** (el): the count of external links in the post,
6) **first person singular pronouns** (im): the number of first person singular pronouns in the post,
7) **negative particles** (np): the count of negative particles in the post,
8) **negative emotions** (ne): the count of emoticons expressing negative feelings in the post,
9) **numbers** (nb): the count of numbers in the post,
10) **parenthesis** (pa): the count of parenthetical phrases in the post,
11) **positive emotions** (pe): the count of emoticons expressing positive feelings in the post,
12) **prepositions** (pp): the count of prepositions in the post,
13) **pronouns** (pr): the count of pronouns in the post,
14) **question marks** (qm): the count of ? in the post,
15) **long words** (sl): the count of words longer than 6 letters in the post,
16) **self reference** (sr): the count of first person (singular and plural) pronouns in the post,
17) **swears** (sw): total count of vulgar expressions in the post,
18) **type/token ratio** (tt): defined in the formula below,
19) **word count** (wc): words in the post,
20) **first person plural pronouns** (we): count of first person plural pronouns in the post,
21) **second person singular pronouns** (yu): count of second person singular pronouns in the post,
22) **mean word frequency** (mf): simple mean of the frequency of words in the post, defined in the formula below.
\[
tt = \frac{T - w}{T} \quad mf = \frac{\sum wf}{T}
\]
where \( w \) is the count of repeated words (words already used) in the sentence, \( T \) is the total word count in the sentence and \( wf \) is the frequency count of the word in the dataset. Table I (from Mairesse et al.) shows how the linguistic features used correlate with personality traits. First, the system extracts a random sample of the dataset for statistical purposes. The size of the sample can be decided a priori; in this case we sampled 500 posts. From this sample the system extracts the mean and standard deviation of each feature. The mean word frequency (feature mf) is in this case calculated using an external corpus of Italian (CORISsmall, see [11]), but in principle it can also be calculated from the dataset itself as relative frequency. Results are summarized in Table II. In the second step the system processes the entire dataset, building a personality model for each post by applying the following rules: if a sentence shows a feature correlating positively with one personality trait and the frequency of that feature is higher than the mean plus standard deviation for that feature,
| F. | E | S | A | C | O |
|----|----|----|----|----|----|
| ap | -.08** | -.04 | -.01 | -.04 | -.10** |
| cm | -.02 | .01 | -.02 | -.01 | .10** |
| du | -.07** | .02 | .01 | .01 | .06** |
| el | -.05* | -.02 | -.01 | -.03 | .09** |
| em | -.00 | -.05* | .06** | .00 | -.03 |
| in | -.04* | .01 | -.01 | -.03 | -.01 |
| im | .05* | -.15** | .05* | .04 | -.14** |
| np | -.08** | .12** | .11** | -.07** | .01 |
| ne | -.03 | -.18** | -.11** | -.11** | .04 |
| nb | -.03 | .05* | -.03 | -.02 | -.06** |
| pa | -.06** | .03 | -.04* | -.01 | -.10** |
| pe | .07** | .07** | .05* | .02 | .02 |
| pp | .00 | .06** | .04 | .08** | -.04 |
| pr | .07** | .12** | .04* | .02 | -.06** |
| qm | .06** | -.05* | -.04 | -.06** | .08** |
| sr | .07** | -.14** | -.06** | -.04 | -.14** |
| sl | -.06** | .06** | -.05* | .02 | -.10** |
| sw | -.01 | .00 | -.14** | -.11** | .08** |
| tt | -.05** | .10** | -.04* | -.05* | .09** |
| wc | -.01 | .02 | .02 | -.02 | .06** |
| we | .06** | .07** | .04* | .01 | .04 |
| yu | -.01 | .03 | -.06** | -.04* | -.11* |
| mf | .05* | -.06** | .05 | .06** | -.07** |
Table I
Features used in the system and their Pearson’s correlation coefficients with personality traits as reported in Mairesse et al. 2007. * = p < .05 (weak correlation), ** = p < .01 (strong correlation)
| feature | mean | sd | min | max |
|---------|------|----|-----|-----|
| ap | 1 | 2 | 0 | 28 |
| cm | 0 | 1 | 0 | 19 |
| du | 0 | 0 | 0 | 3 |
| el | 0 | 0 | 0 | 3 |
| em | 0 | 0 | 0 | 7 |
| im | 0 | 0 | 0 | 3 |
| np | 0 | 0 | 0 | 4 |
| ne | 0 | 0 | 0 | 1 |
| nb | 1 | 4 | 0 | 64 |
| pa | 0 | 0 | 0 | 3 |
| pe | 0 | 0 | 0 | 2 |
| pp | 1 | 2 | 0 | 32 |
| pr | 0 | 0 | 0 | 8 |
| qm | 0 | 0 | 0 | 3 |
| sr | 0 | 0 | 0 | 4 |
| sl | 6 | 6 | 0 | 71 |
| sw | 0 | 0 | 0 | 1 |
| tt | 0.971 | 0.048 | 0.706 | 1 |
| wc | 7 | 7 | 1 | 79 |
| we | 0 | 0 | 0 | 2 |
| yu | 0 | 0 | 0 | 2 |
| mf | 101264 | 87192 | 68 | 567704 |
Table II
Summary of the behavior of the features associated with personality traits in the dataset.
then the system increases the score of that personality trait. If a sentence shows a feature whose frequency is higher than the mean plus standard deviation and that correlates negatively with one personality trait, the system decreases the score of that personality trait. Then numerical values are turned into nominal ones (“y”, “n” and “o”) by simply checking whether a value is positive, negative or zero. In the end, the majority class of each personality trait is calculated for each user and the resulting string is taken as the user’s personality model.
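As an illustration of this scoring procedure, the following sketch (hypothetical toy values, three features instead of the full 22, written in Matlab for brevity) builds the nominal model for a single post; C holds only the signs of the correlations of Table I:

```matlab
m = [1 0 6];   s = [2 1 6];     % per-feature mean and sd (cf. Table II)
C = [-1  0  0  0 -1;            % sign of each feature's correlation with
      0  0  0  0  1;            % E, S, A, C, O (hypothetical values)
     -1  1 -1  0 -1];
f = [4 0 15];                   % feature counts extracted from one post
active = f > (m + s);           % features exceeding mean + sd in this post
scores = double(active) * C;    % raise/lower each trait score by the sign
model = repmat('o', 1, 5);      % balanced by default
model(scores > 0) = 'y';
model(scores < 0) = 'n';
disp(model)                     % prints 'nynon' for this toy post
```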
D. Evaluation of Personality Models
The evaluation method is based on the assumption that one user has one and only one personality and that this personality emerges at different degrees from user’s posts. Hence the system evaluates the personality model comparing many posts of the same user. The drawback of this method is that we can only evaluate models for users that have more than one post in the dataset, and we have to discard all the other users.
The unsupervised system takes all the models built from the posts of a user and compares each value of the string. This evaluation method provides two measures, accuracy ($a$) and validity ($v$), defined in the formulas below:
$$a = \frac{tp + tn}{tp + tn + fp + fn} \quad v = 1 - \frac{a}{P}$$
where $P$ is the number of posts of one user; $tp$ is the sum of each personality attribute matching within the same user (for example “y” and “y”, “n” and “n”, “o” and “o”); $tn$ is the sum of opposite attributes within the same user (“y” and “n”, “n” and “y”); $fp$ is the sum of positive or negative attributes turned to the balanced value within the same user (“y” to “o” and “n” to “o”); and $fn$ is the sum of balanced attributes turned to positive or negative (“o” to “y” and “o” to “n”). Accuracy gives a measure of the reliability of the personality model, and validity gives information about how valid the model is for all the user’s posts, in other words how consistently the user writes expressing the same personality traits. A low validity score means that the user shows variability in his/her writing style.
III. Analysis and Discussion
We filtered out group posts (because many users with different personalities can post in a group) and kept only single users from the dataset. Most users (592) have just one post, and the models obtained from those users were not considered reliable (accuracy is set to 0). Excluding the users with only one post, the average accuracy is 0.631 and the average validity is 0.729. Accuracy is in line with the classification accuracies reported by Mairesse et al. 2007 for observer ratings evaluation. This is very encouraging because it indicates that we developed a system that implements Mairesse’s model completely automatically. The frequencies of personality models in the sample are reported in Table III. Below rank 7, models become more and more sparse, with a long tail of models appearing only once; they do not appear in Table III.
The most frequent personality type in the Italian subset of FriendFeed is represented by the model of an extravert, insecure, agreeable, organized and unimaginative person. It is interesting to note that the features “insecure” and
“unimaginative” are present in the first four positions of the ranking and that no shy type is found in the first six positions. Pearson’s correlation test reveals that there is a strong (+0.79) and highly significant correlation ($p$-value = .0003) between accuracy and personality model types, meaning that certain personality types express their personality strongly and reliably in written language, while others do not. Although there is no correlation ($p$-value = .413) between personality and posting activity, once the long tail of users with sparse personality models is filtered out, it emerges that one personality type produces more posts than the others: the extravert, insecure, friendly, not particularly precise and unimaginative person (ynyon).
A manual inspection of the data reveals that there are some users (the ones with higher validity) that are focused on a topic, and sometimes this topic is clear from their username: for example “styleandthecity”, or users such as “ultimora” or “cronaca24”, which appear to be journalists and have a very recognizable and normalized style, but not the same personality model.
IV. Conclusions and Future Work
In this work, we described and developed an unsupervised system for personality recognition that does not require subjects for evaluation. It exploits existing correlations between language cues and personality traits, providing accuracy and validity as evaluation measures. We showed that it is possible to extract personality information from SNSs in an unsupervised way, with acceptable accuracy, using a completely automatic process. The results reported here show that the distribution of personality models in SNSs has a high peak of people sharing the same personality traits and a long tail of people with a unique personality model. Results also show that validity is a good measure of the recognizability of the style of a user.
In the future, we would like to improve the system exploiting different correlation sets. We would also like to sample and automatically annotate large corpora of Social Network data in order to facilitate the research in this field.
ACKNOWLEDGEMENT
This work has been realized also thanks to the support from the Provincia autonoma di Trento and the Fondazione Cassa di Risparmio di Trento e Rovereto.
REFERENCES
[1] Amichai-Hamburger, Y. and Vinitzky, G. Social network use and personality. In *Computers in Human Behavior*. 26(6). pp. 1289–1295. 2010.
[2] Boyd, D. and Ellison, N. Social Network Sites: Definition, history, and scholarship. In *Journal of Computer-Mediated Communication* 13(1). pp. 210–230. 2007.
[3] Celli, F., Di Lascio, F.M.L., Magnani, M., Pacelli, B., and Rossi, L. *Social Network Data and Practices: the case of Friendfeed*. Advances in Social Computing. pp. 346–353. Series: Lecture Notes in Computer Science, Springer, Berlin. 2010.
[4] Golbeck, J. and Robles, C., and Turner, K. Predicting Personality with Social Media. In *Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems*. pp. 253–262. 2011.
[5] Goldberg, L. R. The Development of Markers for the Big Five Factor Structure. In *Psychological Assessment*, 4(1). pp. 26–42. 1992.
[6] Mairesse, F., and Walker, M. Words mark the nerds: computational models of personality recognition through language. In: *Proceedings of the 28th Annual Conference of the Cognitive Science Society*. pp. 543-548. 2006.
[7] Mairesse, F. and Walker, M. A. and Mehl, M. R., and Moore, R. K. Using Linguistic Cues for the Automatic Recognition of Personality in Conversation and Text. In *Journal of Artificial intelligence Research*, 30. pp. 457–500. 2007.
[8] Norman, W. T. Toward an adequate taxonomy of personality attributes: Replicated factor structure in peer nomination personality ratings. In *Journal of Abnormal and Social Psychology*, 66. pp. 574–583. 1963.
[9] Oberlander, J., and Nowson, S. Whose thumb is it anyway? classifying author personality from weblog text. In *Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics ACL*. pp. 627–634. 2006.
[10] Quercia, D. and Kosinski, M. and Stillwell, D., and Crowcroft, J. Our Twitter Profiles, Our Selves: Predicting Personality with Twitter. In *Proceedings of SocialCom2011*. pp. 180–185. 2011.
[11] Rossini Favretti R. and Tamburini F., and De Santis C. CORIS/CODIS: A corpus of written Italian based on a defined and a dynamic model. In *A Rainbow of Corpora: Corpus Linguistics and the Languages of the World*. Wilson, A., Rayson, P. and McEnery, T. (eds.), Lincom-Europa, Munich. pp: 27–38. 2002.
Cellular Automata: Simulations Using Matlab
Stavros Athanassopoulos\textsuperscript{1,2}, Christos Kaklamanis\textsuperscript{1,2}, Gerasimos Kalfoutzos\textsuperscript{1}, Evi Papaioannou\textsuperscript{1,2}
\textsuperscript{1}Dept. of Computer Engineering and Informatics, University of Patras
\textsuperscript{2}Computer Technology Institute and Press “Diophantus”
Patras University Campus, Building B, GR26504, Rion, Greece
e-mail: \{athanaso, kakl, kalfount, papaioan\}@ceid.upatras.gr
Abstract—This paper presents a series of implementations of cellular automata rules using the Matlab programming environment. A cellular automaton is a decentralized computing model providing an excellent platform for performing complex computations with the help of only local information. Matlab is an interactive numerical computing environment and a high-level language, with users coming from various backgrounds in engineering, science, and economics, that enables performing computationally intensive tasks faster than with traditional programming languages (such as C, C++, and Fortran). Our objective has been to investigate and exploit the potential of Matlab, which is a simple mathematical programming environment that does not require specific programming skills, regarding the understanding and the efficient simulation of complex patterns, arising in nature and across several scientific fields, captured by simple cellular automata structures. We have implemented several cellular automata rules from the recent literature; herein we present indicative cases of practical interest: the forest fire probabilistic rule, the sand pile rule, the ant rule, the traffic jam rule as well as the well-known “Game of Life”. Our work indicates that Matlab is indeed an appropriate environment for developing simulations for cellular automata models.
Keywords—cellular automata; simulation; Matlab.
I. CELLULAR AUTOMATA
A cellular automaton (CA) is an idealization of a physical system in which space and time are discrete and the physical quantities take only a finite set of values. Informally, a cellular automaton is a lattice of cells, each of which may be in one of a predetermined number of discrete states (a formal definition can be found in [7]). A neighborhood relation is defined over this lattice, indicating for each cell which cells are considered to be its neighbors during state updates. Time is also discrete; in each time step, every cell updates its state using a transition rule that takes as input the states of all cells in its neighborhood (which usually includes the cell itself). All cells in the cellular automaton are synchronously updated. At time $t = 0$ the initial state of the cellular automaton must be defined; then repeated synchronous application of the transition function to all cells in the lattice will lead to the deterministic evolution of the cellular automaton over time. Many variations of this basic model exist: CA can be of arbitrary dimension, although one-dimensional and two-dimensional CA have received special attention in the literature. CA can be infinite or finite. Finite CA can have periodic boundaries (e.g., the opposite ends of a one-dimensional finite CA are joined together so the whole forms a ring). Updates can be synchronous or asynchronous. Transition rules can be deterministic or stochastic. Many other variations exist; those mentioned above are some of the most typical ones.
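As a concrete illustration of these definitions, the following minimal Matlab sketch (our own example, not one of the implementations presented later) evolves a finite one-dimensional CA with periodic boundaries under Wolfram's rule 30; every cell is updated synchronously from the states of its left neighbor, itself, and its right neighbor:

```matlab
rule = 30;  n = 100;  T = 60;
tbl = bitget(rule, 1:8);              % next state for neighbourhood codes 0..7
s = zeros(1, n);  s(round(n/2)) = 1;  % initial state: a single live cell
H = zeros(T, n);                      % space-time history, one row per step
for t = 1:T
    H(t, :) = s;
    left  = circshift(s, [0  1]);     % periodic boundary: s(0) = s(n)
    right = circshift(s, [0 -1]);
    code  = 4*left + 2*s + right;     % 3-bit neighbourhood code, 0..7
    s = tbl(code + 1);                % synchronous update of all cells
end
surf(H); view(2);                     % rows of the plot are time steps
```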
The concept of cellular automata was initiated in the early 1950’s by John Von Neumann and Stan Ulam [18]. Von Neumann was interested in their use for modelling self-reproduction and showed that a CA can be universal. He devised a CA, each cell of which has a state space of 29 states, and showed that it can execute any computable operation. However, Von Neumann rules, due to their complexity, were never implemented on a computer. Von Neumann’s research raised a dichotomy in CA research. On one hand, it was proven that a decentralized machine can be designed to simulate any arbitrary function. On the other hand, this machine (CA) can become as complex as the function it is intended to simulate.
Cellular automata have received extensive academic study into their fundamental characteristics and capabilities and have been applied successfully to the modelling of natural phenomena and complex systems [1], [3], [4], [13], [17], [24], [23]. Based on the theoretical concept of universality, researchers have tried to develop simpler and more practical architectures of CA that can be used to model widely divergent application areas. In 1970, the mathematician John Conway proposed the (now famous) Game of Life [10], which received widespread interest among researchers. Since the beginning of the 80’s, Stephen Wolfram has studied in much detail a family of simple one-dimensional cellular automata rules (known as Wolfram rules [24]) and has shown that even these simplest rules are capable of emulating complex behavior. Other applications include, but are not limited to, theoretical biology [2], game theory [19], and non-equilibrium thermodynamics [15].
The rest of the paper is structured as follows: Section II includes a brief description of Matlab as well as main reasons that motivated us for using it in our simulations. Simulations are presented in Section III. Section IV includes conclusion and plans for future work on cellular automata simulations using Matlab.
II. MATLAB
MATLAB is a numerical computing environment and fourth-generation programming language which allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran. Although it was intended primarily for numerical computing, it also allows symbolic computing, graphical multi-domain simulation and model-based design for dynamic and embedded systems. It has been widely used in academia and industry by users coming from various backgrounds of engineering, science and economics. MATLAB was first adopted by researchers and practitioners in control engineering, and quickly spread to many other domains. It is now also used in education, in particular the teaching of linear algebra and numerical analysis, and is very popular amongst scientists involved in image processing [16].
Why did we use Matlab for our simulations? Existing implementations of cellular automata have been developed using Java and C/C++. This selection has been supported by the graphical interface these programming languages offer as well as by their strict object-oriented programming nature. In this way, implementation of cellular automata can be a very efficient and effective development task. For our study, Matlab offers simplicity coupled with power; this mainly motivated us to use it for the implementation/simulation of cellular automata, i.e., of simple structures that can, however, model complex behavior and real-world patterns. Matlab neither requires nor focuses on particular programming skills; on the contrary, it provides an efficient tool for the researcher/user to simulate simple models without focusing on programming and to easily conceive such complex patterns in practice – not only through some mathematically defined function (however, using appropriate toolboxes, Matlab code can be converted – if needed – to C/C++ code).
More specifically, cellular automata can be implemented using matrices of one or several dimensions. Matlab is thus quite an appropriate environment, since it offers a wide range of operations and functions particularly suited to matrices. Moreover, the states of the lattice cells can be easily visualized using the function surf(), while the necessary diagrams and graphical representations can be produced - almost directly - using the function plot(). With Matlab, only a single file per cellular automaton (i.e., per algorithm) is needed; this provides high flexibility in experimentation and simplicity in the code execution process. Furthermore, the syntax is simpler (than in involved programming languages), thus directly reflecting the simplicity of the rules according to which the automaton cell states are altered. Such technicalities can be of high importance for communities of researchers not familiar with programming languages: they can easily deploy their model and see its behavior without having to spend extra resources on becoming programming experts. Of course, Matlab is a rather slow environment, and Matlab programs require more computational power than Java or C++ programs; this could be a drawback if our algorithms were to be used as parts of intensive, resource-demanding applications.
III. Our Simulations
As already stated, the question that motivated our work is the following: Matlab is a “simple” programming environment that does not require a researcher/student to be a programming-expert to use it. Cellular automata can capture, via a small set of simple rules, very complex phenomena from the real world. Is Matlab efficient for simulations involving cellular automata?
We have implemented several CA rules from the recent literature: Wolfram’s rule 184, rules for probabilistic cellular automata, the Q2R rule, the annealing rule, the HPP rule, the sand pile rule, the ant rule, the traffic jam rule, the solid body motion rule, and the “Game of Life”. Detailed descriptions of these rules can be found in [7].
Herein, we present in detail five indicative cases of practical interest we simulated (and used for teaching purposes in the Theory of Computation lab of our department): Probabilistic Cellular Automata rules for forest fire models, the Sand Pile rule, the Ant rule, the Traffic Jam rule and the John Conway’s Game of Life.
Matlab (R14) has been used for implementation. Simulations have been executed on a system using an Intel Core i3 530 processor (2.93GHz, 6144MB DDR3 RAM), running the Windows 7 Premium 32-bit operating system. For the graphical representation of the behavior of simulated models, the function surf() has been used (full size figures can be found at [25]).
A. Implementation of a probabilistic rule for Burning Forest
Probabilistic Cellular Automata (PCA) are ordinary cellular automata where different rules can be applied at each cell according to some probability [24]. An interesting and simple example of a PCA model is a probabilistic rule for a burning forest. The cellular automaton used for the simulation uses an (nxn) grid, representing the forest, and a Moore neighborhood. Cells correspond to forest sites and can be in one of the following three states: green tree (1), empty site (2), burning tree (3). Initially, all cells are in state (1) (i.e., contain a green tree). Cell states are updated according to the following rules, presented in detail in [5], [8]:
- A burning tree becomes an empty site.
- A green tree becomes a burning tree if at least one of its nearest neighbors is burning.
- At an empty site, a tree grows with probability p.
- A tree without a burning nearest neighbor becomes a burning tree in one time step with probability f.
At each time step, every cell is assigned a new random value (in [0,1]) for fire (f) and birth (p) probability. A green tree becomes a burning tree when f is greater than a threshold value set to 0.001. A new tree grows in an empty site when p is greater than a threshold value set to 0.1. These threshold values for f and p, once set, remain the same throughout a single execution. The threshold value for f has been chosen to be sufficiently small so that in a large grid only a few fires can start. The threshold value for p has been chosen to be greater than that for f so that new trees can grow and the simulation can continue.
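A minimal Matlab sketch of this update could look as follows; it is our illustration, treating the thresholds as per-cell probabilities of ignition and growth (so that only a few fires start, in line with the remark above) and detecting burning Moore neighbours with conv2():

```matlab
n = 200;  F = ones(n);            % states: 1 = green tree, 2 = empty, 3 = burning
fth = 0.001;  pth = 0.1;          % thresholds for fire (f) and birth (p)
for t = 1:500
    burningNb = conv2(double(F == 3), ones(3), 'same') > 0;  % Moore neighbours
    Fn = F;
    Fn(F == 3) = 2;                               % a burning tree becomes empty
    Fn(F == 1 & burningNb) = 3;                   % green tree with burning neighbour
    Fn(F == 1 & ~burningNb & rand(n) < fth) = 3;  % spontaneous ignition
    Fn(F == 2 & rand(n) < pth) = 1;               % a tree grows at an empty site
    F = Fn;
    surf(F); view(2); drawnow;
end
```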
The following figures show instances of the simulation using a grid of size 200x200. In the beginning (Fig. 1a) two fires (white areas) have started in the forest (black area). Fire starts spreading among green trees, leaving empty sites behind (grey areas) (Fig. 1b). The fire spreading pattern looks like growing circular discs with a white outline (burning sites) and grey inside area (destroyed sites).
**B. Implementation of the Sand Pile rule**
The physics of granular materials has recently attracted CA-related research interest. It is possible to devise a simple cellular automaton rule to model basic piling and toppling of particles like sand grains [7]. The idea is that grains can stack on top of each other if this arrangement is stable. Of course, real grains do not stand on a regular lattice and the stability criteria are expected to depend on the shape of each grain. Despite the microscopic complexity, the result is that sand piles form and topple once they grow too high.
Toppling mechanisms can be captured by the following cellular automaton rule: a grain is stable if there is a grain underneath and two other grains preventing it from falling to the left or right (Fig. 2). Assuming a Moore neighborhood, the rule implies that a central grain will be at rest if the south-west, south and south-east neighbors are occupied. Otherwise, the grain topples downwards to the nearest empty cell.
The cellular automaton used for simulation uses an (nxn) grid and a Margolus neighborhood, which gives a simple way to deal with the synchronous motion of all particles [20]. Informally, when the Margolus neighborhood is used, the lattice is divided into disjoint blocks of size 2x2; the partition is shifted down and to the right at the next generation, and then shifted back [21]. Cells can be in one of the following two states: grain of sand (1), empty cell (0). Initially, sand grains are placed randomly on the grid (no additional grains appear during the evolution of the cellular automaton). Cell states are updated according to the following rule [7], which is also presented graphically in Fig. 3:
| Current state | 1000 | 0100 | 1010 | 1001 | 0110 | 0101 | 1110 | 1101 | 1100 (p) | 1100 (1-p) |
|---------------|------|------|------|------|------|------|------|------|-----------|------------|
| Next state | 0010 | 0001 | 0011 | 0011 | 0011 | 0011 | 1011 | 0111 | 0011 | 1100 |
The configuration in which the upper part of a block is occupied by two particles while the lower part is empty is not shown in Fig. 3, although it certainly yields some toppling. When this configuration occurs, we adopted the probabilistic evolution rule shown in Fig. 4 in order to produce a more realistic behavior: some friction may be present between grains, and arches may appear that delay collapse. Of course, the toppling of other configurations could also be controlled by a random choice.
Figure 4: Probabilistic behavior of the sand pile rule [7]
In this simulation, p has been set to 0.5, i.e., two neighboring grains can equiprobably either fall (filling the cells below them) or remain at rest.
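The toppling table and the probabilistic 1100 case can be sketched in Matlab as follows (our simplified illustration: row i+1 lies below row i, the two alternating Margolus partitions are implemented as offset 2x2 loops, and grains fall straight down when possible or topple diagonally otherwise):

```matlab
n = 100;  S = double(rand(n) < 0.3);   % 1 = grain of sand, 0 = empty cell
p = 0.5;                               % toppling probability for the 1100 case
for t = 1:1000
    off = mod(t, 2);                   % alternate the two Margolus partitions
    for i = 1+off : 2 : n-1
        for j = 1+off : 2 : n-1
            a = S(i,j); b = S(i,j+1); c = S(i+1,j); d = S(i+1,j+1);
            if a && b && ~c && ~d                    % 1100: upper row full
                if rand < p, a=0; b=0; c=1; d=1; end % topple with probability p
            else
                if a && ~c, a=0; c=1;                % fall straight down
                elseif a && c && ~d && ~b, a=0; d=1; % topple diagonally
                end
                if b && ~d, b=0; d=1;
                elseif b && d && ~c && ~a, b=0; c=1;
                end
            end
            S(i,j) = a; S(i,j+1) = b; S(i+1,j) = c; S(i+1,j+1) = d;
        end
    end
end
```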
Fig. 5a, 5b and 5c show simulation instances. Initially, all grains are falling, except those at the bottom which remain at rest. The Margolus neighborhood does not affect grains at the grid boundaries, so they also remain at rest. The sand pile is growing and the number of falling grains decreases (Fig. 5b). The simulation terminates when there are no more grains to fall (Fig. 5c).
**C. Implementation of the Ant rule**
Langton's Ant [13, 14] follows extremely simple rules and initially appears to behave chaotically; however, after a certain number of steps a recurring pattern emerges. Langton's Ant mimics a real behavior of ants in nature: a moving ant tends to leave pheromones behind it, and all other ants moving in the same area can sense that substance and follow the motion of the first ant.
The rule simulates the following idea: an ant sits on a cell of a grid where all other cells are initially empty. It moves into a neighboring cell and does one of two things, based on the color of the cell:
- If the square is white, it turns 90 degrees to the left and colors the square grey
- If the square is grey, it turns 90 degrees to the right and colors the square white
The movement continues in the same fashion, ad infinitum. The interesting thing is that after a fixed number of steps the ant builds a highway and heads off towards infinity. The motion of the ant on this highway is not linear; rather, it resembles the operating pattern of a sewing machine. Although the ant rule seems very simple, it drives the ant into a chaotic state. This feature also shows the power of modeling systems with cellular automata: even though the rules are very simple, they can implement very complex behaviors.
The cellular automaton used for simulation uses a (nxn) grid and a Von Neumann neighborhood; a von Neumann neighborhood is composed of the four cells orthogonally surrounding a central cell on a two-dimensional square lattice [12]. A cyclic neighbourhood has been used for cells at the lattice boundaries: when an ant reaches the lattice boundaries, it returns to the lattice simulating the existence of a second ant. Cells can be in one of two states: ant (1), empty cell (0). Initially, all cells are empty (state 0) apart from one cell (state 1) which contains the ant. Cell states are updated according to the following rules:
- \( n_i(r + c_i, t + 1) = \mu n_{i-1}(r, t) + (1 - \mu) n_{i+1}(r, t) \)
- \( \mu(r, t + 1) = \mu(r, t) \oplus n_1(r, t) \oplus n_2(r, t) \oplus n_3(r, t) \oplus n_4(r, t) \)
where \( n_i \): new state, \( r \): current cell, \( c_i \): current direction, \( t \): current time step, \( \mu \): cell color (1=white, 0=black). Initially, \( c_0=4 \), \( r \)=central cell of the grid.
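The same turn-and-flip behavior described above can be sketched directly in Matlab (our illustration, with cyclic boundaries; the initial heading is chosen arbitrarily):

```matlab
n = 200;  grid = zeros(n);            % 0 = white cell, 1 = grey cell
pos = [round(n/2), round(n/2)];       % the ant starts at the centre
dirs = [-1 0; 0 1; 1 0; 0 -1];        % N, E, S, W as (row, col) steps
d = 1;                                % initial heading
for t = 1:11000
    if grid(pos(1), pos(2)) == 0      % white: turn left, colour the cell grey
        d = mod(d - 2, 4) + 1;
        grid(pos(1), pos(2)) = 1;
    else                              % grey: turn right, colour the cell white
        d = mod(d, 4) + 1;
        grid(pos(1), pos(2)) = 0;
    end
    pos = mod(pos + dirs(d, :) - 1, n) + 1;   % cyclic boundary
end
surf(grid); view(2);
```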
As soon as the ant reaches the lattice boundary it returns to the lattice from a different position as a “second” ant, which has just entered the area. This second ant continues moving on the highway, just like the first one (Fig. 7a), moves towards the chaotic area (created by the first ant) (Fig. 7b) and starts moving irregularly (Fig. 7c). The “second” ant “senses the pheromones” of the first ant and escapes the chaotic situation faster than the previous ant. Ants can either create their own highways (Fig. 7d) or follow existing ones depending on the position of the chaotic area they enter.
**D. Implementation of the Traffic Jam rule**
Cellular automata models for road traffic have received a great deal of interest. One-dimensional models for single-lane car motions are quite simple and elegant [6]. The road is represented as a line of cells: each cell is either occupied by a vehicle or not. All cars travel in the same direction. Their positions are updated synchronously, in successive iterations (discrete time steps). During the motion, each car can be at rest or jump to the nearest-neighbor site, along the direction of motion. The rule is that a car moves only if its destination cell is empty. This means that the drivers are short-sighted and do not know whether the car in front will move or whether it is also blocked by another car. Therefore the state of each cell \( s_i \) is entirely determined by the occupancy of the cell itself and its two nearest neighbors \( s_{i-1} \) and \( s_{i+1} \). The motion rule can be summarized in the following table, where all eight possible configurations \((s_{i-1}, s_i, s_{i+1}) \rightarrow (s_i)_{t+1}\) are given [6]:
| 111 | 110 | 101 | 100 | 011 | 010 | 001 | 000 |
|-----|-----|-----|-----|-----|-----|-----|-----|
| 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 |
This simple dynamics captures an interesting feature of real car motion: traffic congestion. This cellular automaton rule turns out to be Wolfram’s rule 184 [6].
The cellular automaton used for simulation uses a line and a one-dimensional neighborhood. Cells can be in one of three states: empty cell (0), stopped car (1), moving car (2). Initially, cars are placed randomly in line cells. Cell states are updated according to the following rule:
- \( n_i(t+1) = n_i^{in}(t)(1-n_i(t)) + n_i(t) n_i^{out}(t) \),
where \( n_i(t) \) denotes the car occupation number (\( n_i=0 \): free site, \( n_i=1 \): a car is present at site i), \( n_i^{in}(t) \) denotes the state of the source cell, i.e., that from which a car may move to cell i. Similarly, \( n_i^{out}(t) \) indicates the state of the destination cell, i.e., that the car at site i would like to move to. The rule implies that the next state of cell i is 1 if a car is currently present and the next cell is occupied, or if no car is currently present and a car is arriving.
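A minimal Matlab sketch of this update (our illustration: cars travel leftward with periodic wrap-around, and the random “speed” stopping described in the next paragraph is omitted) is:

```matlab
n = 100;
road = rand(1, n) < 0.4;              % 1 = car present, 0 = empty site
for t = 1:200
    src = circshift(road, [0 -1]);    % n_in:  the cell to the right of i
    dst = circshift(road, [0  1]);    % n_out: the cell to the left of i
    % n_i(t+1) = n_in (1 - n_i) + n_i n_out
    road = (~road & src) | (road & dst);
end
```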
A car is moving or not according to its “speed”, a variable taking random values in [0,1] that changes in each time step. If a car has a “speed” lower than a threshold value set to 0.05, it stops for one time step. When a car reaches the leftmost cell of its row, it is injected into the rightmost cell of the lattice in the same row and keeps moving in loops. Fig. 8a shows a normal traffic instance where all the cars are moving from right to left by one cell per step. White cars are moving; grey cars have stopped. In Fig. 8b, cars 1 and 2 stop. When car 1 stops, all following cars also stop (since there are no empty cells between them) and turn grey. Cars in front of car 1 keep moving left because no preceding car has stopped. When car 2 stops, there is an empty cell behind it. This is why the following cars remain white and keep moving left, covering every empty cell.
In Fig. 9a, another instance of normal traffic is shown. The car pointed by the arrow stops and becomes grey (Fig. 9b). There is an empty cell behind it, so all cars that follow keep moving left and remain white.
Finally, the last car (Fig. 10a) stops (and becomes grey in Fig. 10b). All other cars keep moving left leaving an empty cell in front of the last car.
**E. John Conway’s Game of Life**
The Game of Life is a cellular automaton devised by the British mathematician John Horton Conway in 1970 [10]. The game is a zero-player game, meaning that its evolution is determined by its initial state (called pattern), requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive (white-1) or dead (black-0). Every cell interacts with its eight neighbours (Moore neighbourhood), which are the cells that are horizontally, vertically, or diagonally adjacent. Cell states are updated according to the following rule:
- Any live cell with fewer than two live neighbours dies, as if caused by underpopulation.
- Any live cell with two or three live neighbours lives on to the next generation.
- Any live cell with more than three live neighbours dies, as if by overcrowding.
- Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
The initial pattern placed in the middle cells of the grid constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed; births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick. In other words, each generation is a pure function of the preceding one. The rules continue to be applied repeatedly to create further generations.
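A compact Matlab sketch of these rules (our illustration: conv2() counts the live Moore neighbours, and cells beyond the finite grid are treated as dead, approximating the infinite universe):

```matlab
n = 100;
G = rand(n) > 0.7;                    % a random initial pattern (seed)
K = [1 1 1; 1 0 1; 1 1 1];            % Moore neighbourhood kernel
for t = 1:200
    nb = conv2(double(G), K, 'same'); % live-neighbour count for every cell
    G = (G & (nb == 2 | nb == 3)) | (~G & nb == 3);   % Conway's rules
    surf(double(G)); view(2); drawnow;
end
```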
Fig. 11 shows simulation snapshots for 6 different initial patterns: cell row (Fig. 11a), glider (Fig. 11b), small explorer (Fig. 11c), explorer (Fig. 11d), lightweight spaceship (Fig. 11e), tumbler (Fig. 11f).
IV. CONCLUSION AND FUTURE WORK
We have simulated several popular cellular automata rules of practical interest using Matlab. Our simulations yield evolution patterns in accordance with those expected from corresponding rules and similar to those obtained so far using Java or C/C++.
Our work indicates that Matlab is indeed an appropriate environment for developing compact code for simulations involving cellular automata, even though it does not always guarantee high simulation speeds. It does not require specific programming skills and therefore it offers the flexibility to non-programming-expert researchers and/or students to experiment and understand in practice complex patterns captured by simple cellular automata structures.
Our current ongoing work investigates the potential of Matlab for simulations involving cellular automata for problems related to energy-efficient communication in Wireless Sensor Networks.
ACKNOWLEDGMENT
This work has been partially supported by EU under the ICT-2010-258307 project EULER and by EU and the Greek Ministry of Education, Lifelong Learning and Religious Affairs under the project “Digital School” (296441).
REFERENCES
[1] S. Bandini. Guest Editorial - Cellular Automata. Future Generation Computer Systems, 18:v-vi, August 2002.
[2] M. Boerlijst and P. Hogeweg. Spiral wave structure in prebiotic evolution: hypercycles stable against parasites. Physica D, 48(1):17–28, 1991.
[3] A. W. Burks. Essays on Cellular Automata. Technical Report, Univ. of Illinois, Urbana, 1970.
[4] P. Pal Chaudhuri, D. R. Chowdhury, S. Nandi, and S. Chatterjee. Additive Cellular Automata - Theory and Applications, volume 1. IEEE Computer Society Press, CA, USA, ISBN 0-8186-7717-1, 1997.
[5] P. Bak, K. Chen, C. Tang. A forest-fire model and some thoughts on turbulence. Physics Letters A, Vol. 147, Issues 5-6, pp. 297-300, 1990.
[6] B. Chopard, P. O. Luthi, and P-A. Queloz. Cellular Automata Model of Car Traffic in a Two-Dimensional Street Network. Journal of Physics A: Mathematical and General, 29, pp. 2325–2336, 1996.
[7] B. Chopard and M. Droz. Cellular Automata Modeling of Physical Systems, Cambridge University Press, 1998. ISBN 0-521-46168-5.
[8] B. Drossel, F. Schwabl. Self-organized critical forest-fire model. Physical Review Letters, Vol. 69, Issue 11, pp. 1629–1632. 1992.
[9] N. Ganguly, B. K. Sikdar, A. Deutsch, G. Canright, and P. Pal Chaudhuri. A survey on cellular automata. Technical report, Centre for High Performance Computing, Dresden University of Technology, December 2003.
[10] M. Gardner. Mathematical Games - The fantastic combinations of John Conway's new solitaire game "life". Scientific American, 223, pp. 120-123, 1970. ISBN 0894540017.
[11] R. Goering. Matlab edges closer to electronic design automation world. EE Times, 10/04/2004 (http://www.eetimes.com/electronics-news/4050334/Matlab-edges-closer-to-electronic-design-automation-world).
[12] L. Gray. A Mathematician Looks at Wolfram's New Kind of Science. Not. Amer. Math. Soc. 50, 200-211, 2003.
[13] C. G. Langton. Self-reproduction in cellular automata. Physica D: Nonlinear Phenomena, Volume 10, Issues 1-2, pp. 135-144, 1984. ISSN 0167-2789, 10.1016/0167-2789(84)90256-2.
[14] C. G. Langton. Studying artificial life with cellular automata. Physica D: Nonlinear Phenomena 22 (1-3): 120–149, 1986.
[15] M. Markus and B. Hess. Isotropic cellular automaton for modelling excitable media. Nature, 347(6288):56–58, 1990.
[16] C. Moler. The Origins of MATLAB. December 2004. Retrieved April 15, 2007.
[17] M. Mitchell, P. T. Hraber, and J. P. Crutchfield. Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform Computations. Complex Systems, 7, pp. 89-130, 1993.
[18] J. von Neumann. The Theory of Self-Reproducing Automata. A. W. Burks (ed), Univ. of Illinois Press, Urbana and London, 1966.
[19] M. Nowak and R. May. Evolutionary games and spatial chaos. Nature, 359(6398):826–829, 1992.
[20] J. Schiff. 4.2.1 Partitioning Cellular Automata. Cellular Automata: A Discrete View of the World, Wiley, pp. 115–116, 2008.
[21] T. Toffoli, N. Margolus. II.12 The Margolus neighborhood. Cellular Automata Machines: A New Environment for Modeling, MIT Press, pp. 119–138, 1987.
[22] S. Wolfram. A New Kind of Science. Champaign, IL: Wolfram Media, pp. 29-30, 52, 59, 317, and p. 871, 2002.
[23] S. Wolfram. Cellular Automata and Complexity. World Scientific, Singapore, 1994. ISBN 9971-50-124-4 pbk.
[24] S. Wolfram. Theory and Applications of Cellular Automata. World Scientific, Singapore, 1986. ISBN 9971-50-124-4 pbk.
[25] http://www.ceid.upatras.gr/papaioan/CA/figs/index.html, November 15, 2011.
Fast Polynomial Approximation Acceleration on the GPU
Lumír Janošek
Department of Computer Science
VSB-Technical University of Ostrava
Ostrava, Czech Republic
Email: firstname.lastname@example.org
Martin Němec
Department of Computer Science
VSB-Technical University of Ostrava
Ostrava, Czech Republic
Email: email@example.com
Abstract—This article presents the possibility of parallelizing the calculation of polynomial approximations with large data inputs on the GPU using the NVIDIA CUDA architecture. The parallel implementation on the GPU is compared with a single-threaded CPU implementation. Despite the enormous computing power of today’s graphics cards, the speed of data transfer to the GPU remains a problem. The article is mainly focused on the implementation of several ways of transferring data from host memory into GPU memory. The aim is to show which method is suitable for large amounts of processed data and which for smaller amounts. Afterwards, the performance characteristics of the CPU and GPU implementations are compared.
Keywords—GPU; CUDA; Direct Memory Access; Parallel Reduction; Approximation.
I. INTRODUCTION
This article is focused on the application of a parallel approach to the implementation of the polynomial approximation of the $k$-th degree and its comparison with the conventional single-threaded approach. The polynomial approximation model is widely used in practice. Statistics commonly uses the basic first-degree approximation model - the linear approximation, also known as linear regression.
Nowadays it is possible to create massively parallelized applications using modern GPUs (Graphics Processing Units) that enable the distribution of calculations among the tens of multiprocessors of a graphics card. The problem remains the need to transfer data between the CPU (Central Processing Unit) and the GPU. This can become a limiting factor in performance when the time needed to transfer data between host memory and GPU memory, plus the time the GPU needs to process the data, exceeds the time in which the same data can be handled by the CPU. There are, however, ways to at least partially eliminate this drawback.
In this article we show how to implement polynomial approximation using the GPU parallel computing architecture NVIDIA CUDA (Compute Unified Device Architecture), which provides a significant increase in computing power [1]. The parallel implementation is compared with a single-threaded CPU implementation. The performance results of both implementations are compared with each other and show the differences between the parallel approach and the common single-threaded approach for a given volume of data. By comparing these two approaches it can be seen for which data volumes the parallel approach is suitable and for which it is not. A substantial part of the implementation is a comparison of the chosen methods of copying data from RAM (Random-Access Memory) to graphics card memory, and especially the methods of allocating this memory. Three methods are compared: the allocation of pageable memory, the allocation of page-locked memory (also known as pinned memory), and the allocation of memory mapped into the CUDA address space [2].
A common approach to allocation and data transfer is to place the input data in pageable memory and transmit them from this memory into graphics card memory by conventional copying. The allocation of page-locked memory allows the GPU to use DMA (Direct Memory Access) when copying data. Mapping allocated memory into the CUDA address space is a special case that allows the GPU to read data stored in RAM directly.
This paper is structured as follows. First, some mathematical background related to polynomial approximations is presented. Next, a description of the implemented memory approaches and a description of the implementation of a parallel reduction are presented. Lastly, results and conclusion are presented.
II. MATHEMATICAL MODEL OF APPROXIMATION
Consider a set of points with coordinates $x_i \in \mathbb{R}^d$, where $i \in \{1, \cdots, n\}$. The aim of the data approximation problem is to find, in the general case, a function $f(x)$ which best approximates the scalar values $f_i$ at the points $x_i$. The result, using the least squares method, is a function $f(x)$ such that the distance between the scalar data values $f_i$ and the functional values $f(x_i)$ is as small as possible [3]. The least squares based linear approximation, in its simplest application, approximates the input data by a linear function of the form:
$$f(x) = b_0 + b_1 \cdot x,$$
(1)
where the sum of squares has the form:
\[
\psi(b_0, b_1) := \sum_{i=1}^{n} [f(x_i) - f_i]^2
\]
(2)
The minimum of the sum of squares is then found from:
\[
\frac{\partial \psi}{\partial b_0} = 0 \quad \frac{\partial \psi}{\partial b_1} = 0
\]
Rearranging, we obtain:
\[
\begin{pmatrix}
b_0 \\
b_1
\end{pmatrix} =
\begin{pmatrix}
n & \sum_{i=1}^{n} x_i \\
\sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2
\end{pmatrix}^{-1}
\begin{pmatrix}
\sum_{i=1}^{n} y_i \\
\sum_{i=1}^{n} x_i y_i
\end{pmatrix}
\]
(3)
The members $b_0$ and $b_1$ of the resulting vector are the coefficients of the approximation polynomial (1). The input data of the algorithm are represented by a set of vectors (pairs) from $\mathbb{R}^2$. For the input data it is sufficient to calculate four sums (the vector of sums):
\[
V_\Sigma = \left( \sum_{i=1}^{n} x_i, \sum_{i=1}^{n} y_i, \sum_{i=1}^{n} x_i y_i, \sum_{i=1}^{n} x_i^2 \right)
\]
(4)
The results of these sums are then simply substituted back into the system of equations (3). By solving it, we obtain the sought coefficients $b_0$ and $b_1$ of the approximation polynomial (1).
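To make the procedure concrete, the following is a minimal host-side sketch (not taken from the paper's source code; names and the toy data are illustrative) that solves the 2×2 system (3) for $b_0$ and $b_1$ once the four sums (4) are known:

```cpp
#include <cstdio>

struct LinearFit { double b0, b1; };

// Solves the 2x2 normal-equations system (3) in closed form,
// given n and the four sums Sx, Sy, Sxy, Sxx from (4).
LinearFit solveLinear(double n, double Sx, double Sy, double Sxy, double Sxx) {
    double det = n * Sxx - Sx * Sx;   // assumed nonzero (distinct x_i)
    LinearFit f;
    f.b0 = (Sxx * Sy  - Sx * Sxy) / det;
    f.b1 = (n   * Sxy - Sx * Sy ) / det;
    return f;
}

int main() {
    // Toy data: y = 1 + 2x at x = 0, 1, 2, so we expect b0 = 1, b1 = 2.
    LinearFit f = solveLinear(3, 3, 9, 13, 5);
    printf("b0 = %f, b1 = %f\n", f.b0, f.b1);
    return 0;
}
```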
A. Polynomial approximation
A special case of the linear approximation model is polynomial approximation, i.e., approximation by a polynomial of $k$-th degree. Using the same procedure as for the linear approximation, the polynomial approximation can be expressed as a system of equations [4]:
\[
b = A^{-1} Y
\]
(5)
where
\[
A = \begin{pmatrix}
n & \sum_{i=1}^{n} x_i & \cdots & \sum_{i=1}^{n} x_i^k \\
\sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 & \cdots & \sum_{i=1}^{n} x_i^{k+1} \\
\vdots & \vdots & \ddots & \vdots \\
\sum_{i=1}^{n} x_i^k & \sum_{i=1}^{n} x_i^{k+1} & \cdots & \sum_{i=1}^{n} x_i^{2k}
\end{pmatrix}
\]
\[
b = \begin{pmatrix}
b_0 \\
b_1 \\
\vdots \\
b_k
\end{pmatrix}, \quad Y = \begin{pmatrix}
\sum_{i=1}^{n} y_i \\
\sum_{i=1}^{n} x_i y_i \\
\vdots \\
\sum_{i=1}^{n} x_i^k y_i
\end{pmatrix}
\]
The solution of this system of equations is the vector $b$, whose individual members $b_0 \cdots b_k$ represent the coefficients of the approximation polynomial. Using the system of equations (5), the vector of sums can be derived for a polynomial of any degree $k$, just as in the case of linear approximation.
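For concreteness, writing out (5) for $k = 2$ (quadratic approximation) gives:
\[
\begin{pmatrix}
n & \sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 \\
\sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 & \sum_{i=1}^{n} x_i^3 \\
\sum_{i=1}^{n} x_i^2 & \sum_{i=1}^{n} x_i^3 & \sum_{i=1}^{n} x_i^4
\end{pmatrix}
\begin{pmatrix} b_0 \\ b_1 \\ b_2 \end{pmatrix} =
\begin{pmatrix}
\sum_{i=1}^{n} y_i \\ \sum_{i=1}^{n} x_i y_i \\ \sum_{i=1}^{n} x_i^2 y_i
\end{pmatrix},
\]
so the corresponding vector of sums contains the sums of $x_i$, $x_i^2$, $x_i^3$, $x_i^4$, $y_i$, $x_i y_i$ and $x_i^2 y_i$, which can be computed by the same parallel reduction as in the linear case.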
III. IMPLEMENTATION OF MEMORY APPROACHES
The algorithm for calculating the polynomial approximation described in the previous section was implemented using the NVIDIA CUDA architecture. The implementation was designed for processing large amounts of data represented as a set of vectors in $\mathbb{R}^2$. The parallel implementation of polynomial approximation on the GPU is compared with a single-threaded implementation on the CPU. Given the large amount of input data, the goal during implementation was to optimize the data flow between RAM and GPU memory.
Given the size of the input data set, several approaches to copying data from RAM to graphics card memory (global memory), or to accessing the data from the GPU, were compared. Three approaches were considered: the normal approach with pageable memory allocation, the allocation of page-locked memory (also known as pinned memory), and the allocation of page-locked memory mapped into the CUDA address space.
The actual calculation of the polynomial approximation was implemented partly on the GPU and partly on the CPU. For comparison, approximations of first, second and third degree were implemented. A parallel approach was used for calculating the vector of sums (4) by means of the parallel reduction algorithm. The calculation of the resulting coefficients $b_0$ and $b_1$ of the approximation polynomial is then completed on the CPU.
A. Pageable system buffer and page-locked memory
The common approach of transferring data from RAM to global memory was compared with direct access of the GPU to RAM when copying data, i.e., DMA (Direct Memory Access). The disadvantage of the common approach is double copying of the transferred data: the data are first transmitted from pageable memory (the pageable system buffer) to page-locked memory, and only then from this page-locked memory to GPU memory. By allocating the data buffer directly in page-locked memory, the extra copy can be avoided. When page-locked memory is allocated, the operating system guarantees that this memory is never paged to disk, thus ensuring its place in physical memory [5]. Knowing the physical address of the buffer in memory, the GPU can copy the data into global memory by direct memory access (DMA).
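A minimal sketch of the two allocation paths (buffer names, sizes and the fill step are illustrative, not taken from the paper's source code) might look as follows:

```cpp
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative buffer size; the paper's inputs ranged from 12 KB to 50 MB.
const size_t N_BYTES = 50 * 1024 * 1024;

void copyWithPageableMemory(float* d_data) {
    float* h_data = (float*)malloc(N_BYTES);   // pageable system buffer
    // ... fill h_data ...
    // The driver first stages the data into an internal page-locked
    // buffer, then transfers it to the GPU: the data are copied twice.
    cudaMemcpy(d_data, h_data, N_BYTES, cudaMemcpyHostToDevice);
    free(h_data);
}

void copyWithPinnedMemory(float* d_data) {
    float* h_data;
    cudaHostAlloc((void**)&h_data, N_BYTES, cudaHostAllocDefault); // page-locked
    // ... fill h_data ...
    // The buffer cannot be paged out, so the GPU can read it via DMA
    // and the extra staging copy is avoided.
    cudaMemcpy(d_data, h_data, N_BYTES, cudaMemcpyHostToDevice);
    cudaFreeHost(h_data);
}
```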
B. Memory mapping into address space of the CUDA
Another approach to accessing data in RAM from the GPU is the direct mapping of page-locked memory into the CUDA address space. The data are, as in the previous case, stored in memory allocated as page-locked, with the only difference being that this data can be accessed directly from the GPU. This eliminates the need to allocate a memory block in global memory and to copy the data into that block.
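A minimal sketch of this mapped ("zero-copy") path, with a hypothetical kernel standing in for the reduction (again an illustration, not the paper's source code):

```cpp
#include <cuda_runtime.h>

// Hypothetical kernel; it reads host memory through the mapped pointer.
__global__ void kernel(const float* data, size_t n);

void runWithMappedMemory(size_t n) {
    // Must be set before the CUDA context is created.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    float* h_data;
    cudaHostAlloc((void**)&h_data, n * sizeof(float), cudaHostAllocMapped);
    // ... fill h_data ...

    // Device-side alias of the page-locked host buffer: no cudaMemcpy,
    // and no block allocated in global memory.
    float* d_data;
    cudaHostGetDevicePointer((void**)&d_data, h_data, 0);

    kernel<<<256, 256>>>(d_data, n);
    cudaDeviceSynchronize();
    cudaFreeHost(h_data);
}
```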
IV. PARALLEL REDUCTION
With access to parallel hardware, the entire process of calculating a sum can be parallelized. If we have hundreds of threads, each thread can contribute to the gradual calculation of the resulting sum. This approach is called parallel reduction [6]. The main idea of parallel reduction is that each thread adds two values in memory and saves the result back. The algorithm therefore starts with half as many threads as there are inputs. In each step, one thread adds two values; in the next step the process is repeated with half the number of threads. The process continues until the final sum is obtained by this gradual reduction. The parallel reduction algorithm is especially efficient for large data inputs.
The reduction of the vector (4) is divided among $C \cdot N_{threads}^{-1}$ blocks, where $C$ is the count of input data (vectors in $\mathbb{R}^2$) and $N_{threads} = 256$ is the number of threads per block. In this way the data are evenly divided between the individual blocks, each block handling one subset of the input data.
The implementation of the parallel reduction of the vector of sums can be divided into three steps: 1) The first step is to copy the data from global memory to shared memory, where the reduction of the vector of sums is subsequently performed. Copying from global memory to shared memory is implemented in the CUDA kernel using all threads of the block, so each thread always copies the four values belonging to the four sums of the vector (4). Simultaneously with copying the data into shared memory, the first reduction step is performed - "first add during load" [6]. This halves the required number of blocks. The total number of blocks needed to run the CUDA kernel is
$$\frac{1}{2} \cdot \frac{C}{N_{threads}}$$
2) After the data have been copied into shared memory, all threads of the block are synchronized, which ensures that no thread starts reading shared memory until all threads have finished copying. Then the reduction process begins. In each iteration, one thread performs one addition of the vector of sums, which leads to a gradual reduction of the input data. Before entering the next iteration, the number of active threads is halved. The reduction cycle ends when the number of active threads reaches zero. Inside the loop the data are processed in shared memory (on-chip memory), so there is no unnecessary transfer of data between global memory and the GPU multiprocessor. 3) After the reduction, the result of each block is transferred back to global memory. This copy is performed by one of the threads before the kernel ends. The results of the individual blocks are then copied from global memory back to RAM, and the completion of the reduction - the summation of the results of the individual blocks - is done on the CPU. The result is the vector (4). The described parallel reduction algorithm can then be applied, with minor modifications and without difficulty, to the calculation of the vectors of sums of higher-degree approximations.
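A minimal sketch of such a kernel for the four sums of (4), following the first-add-during-load scheme described above (the block size matches the implementation; names and the float2/float4 packing are assumptions, not the paper's source code):

```cpp
#include <cuda_runtime.h>

#define NTHREADS 256   // threads per block, as in the implementation

// Launch with C / (2 * NTHREADS) blocks: each thread loads two input
// points (first add during load), then the block reduces its four
// partial sums (Sx, Sy, Sxy, Sxx) in shared memory.
__global__ void reduceSums(const float2* in, float4* blockOut, int n) {
    __shared__ float4 s[NTHREADS];
    unsigned t = threadIdx.x;
    unsigned i = blockIdx.x * (NTHREADS * 2) + t;

    float4 v = make_float4(0.f, 0.f, 0.f, 0.f);
    if (i < n) {
        float2 p = in[i];
        v = make_float4(p.x, p.y, p.x * p.y, p.x * p.x);
    }
    if (i + NTHREADS < n) {                        // first add during load
        float2 q = in[i + NTHREADS];
        v.x += q.x; v.y += q.y; v.z += q.x * q.y; v.w += q.x * q.x;
    }
    s[t] = v;
    __syncthreads();       // no thread reads before all have written

    // Tree reduction in shared (on-chip) memory; active threads halve
    // each iteration until none remain.
    for (unsigned stride = NTHREADS / 2; stride > 0; stride >>= 1) {
        if (t < stride) {
            s[t].x += s[t + stride].x;  s[t].y += s[t + stride].y;
            s[t].z += s[t + stride].z;  s[t].w += s[t + stride].w;
        }
        __syncthreads();
    }
    if (t == 0) blockOut[blockIdx.x] = s[0];  // one thread writes the result
}
```

The per-block results in `blockOut` are then copied back to RAM and summed on the CPU, as described in step 3.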
V. RESULTS
The presented method for parallel calculation of the linear approximation on the GPU was implemented and tested on NVIDIA GeForce 9600 GT, GeForce 9800 GTX and GeForce GTS 450 graphics cards. The implementation was tested for various sizes of the input data set in order to determine for what amount of data it is preferable to compute on the CPU and for what amount it is more efficient to use the parallel implementation on the GPU. The performance characteristics of both implementations were compared; the result is shown in Fig. 1. From the comparison of the CPU and GPU computations it is obvious that for smaller amounts of data it is preferable to keep the calculation of the polynomial approximation on the CPU, while the GPU is more appropriate for larger amounts of data. The size of the test data ranged from 12 KB to 50 MB (1,365 to 6,400,000 input vectors).
As stated in the previous section, three approaches to transferring data between RAM and global memory were compared. The common approach of copying data between RAM and the GPU's global memory is compared with direct access of the GPU to RAM (DMA) when copying data. This implementation brought a strong effect especially for growing volumes of copied data, as seen in Figure 2, because there is no need to copy the data from pageable memory into page-locked memory before transferring it to GPU global memory.
The last of the studied approaches to transferring data from memory into GPU global memory was the mapping of page-locked memory into the CUDA address space. Mapped page-locked memory is especially suitable for integrated graphics processors that are built into the system chipset and usually share their memory with the CPU; in that case, mapping page-locked memory removes unnecessary data transfers. For discrete graphics processors, mapped page-locked memory is advantageous only in certain cases [7]. For this reason, this method did not bring any optimization of the implementation here. On the contrary, when mapping page-locked memory into the CUDA address space, there was a significant downgrade in performance, see Figure 3. Below are listed the sizes of the data transfers that occurred during the calculation between the CPU and GPU memory.
A total of $N$ bytes of data was transmitted from RAM into global memory. After completion of the calculation on the GPU, the partial results of the individual blocks were transmitted back to RAM, for a total of
\[
\|V_{\Sigma}\| \cdot \left( \frac{NumBlocks}{2} \cdot 4\,\mathrm{bytes} \right) = \|V_{\Sigma}\| \cdot \left( \frac{1}{2} \cdot \frac{N}{4\,\mathrm{bytes}} \cdot \frac{1}{NumThreads} \cdot 4\,\mathrm{bytes} \right) = \frac{1}{2} \cdot \|V_{\Sigma}\| \cdot \frac{N}{NumThreads}
\]
bytes, where $\|V_{\Sigma}\|$ denotes the number of components of the vector of sums (4).
**Figure 1.** The speed of calculating a linear approximation for the input data (vectors) in milliseconds. Comparison of speed of calculation on the CPU and GPU.
**Figure 2.** The speed of calculating a linear approximation for the input data (vectors) in milliseconds. Comparison of the effectiveness of implementation of calculation on the GPU using DMA access and common access to copy data to the global memory.
**Figure 3.** The speed of calculating a linear approximation for the input data (vectors) in milliseconds. Comparison of implementations using the mapping page-lock memory into the address space of the CUDA and approach using the DMA to copy data to the global memory.
**VI. CONCLUSION**
This article presented a parallel implementation of polynomial approximation on the GPU, which significantly improves performance when calculating over large amounts of data. The implementation was compared with a single-threaded CPU implementation, which is better suited for smaller amounts of data. It was shown that copying data from RAM allocated as page-locked memory, using direct memory access (DMA), significantly accelerated the resulting application performance. In contrast, mapping page-locked memory into the CUDA address space provided no improvement in application performance in this implementation. That method is suitable for integrated GPUs, where it almost always produces a positive result because the CPU and GPU share memory.
**ACKNOWLEDGMENT**
The authors thank Professor Václav Skala for his substantive comments.
**REFERENCES**
[1] NVIDIA Corporation, *CUDA ZONE*, http://developer.nvidia.com/, retrieved: December, 2011.
[2] NVIDIA Corporation, *NVIDIA CUDA C Programming Guide*, Version 4.0, 2011.
[3] G. Coombe, *An Introduction to Scattered Data Approximation*, October 31, 2006.
[4] K. Rektorys, *Survey of Applicable Mathematics II*, 7th ed. Praha, 2003.
[5] J. Sanders and E. Kandrot, *CUDA by Example: An Introduction to General-Purpose GPU Programming*, 1st ed. United States of America, 2011.
[6] M. Harris, *Optimizing CUDA*, SC, 2007.
[7] R. Farber, *CUDA, Supercomputing for the Masses*, May 14, 2009, http://drdobbs.com/high-performance-computing/217500110, retrieved: December, 2011.
Generating Context-aware Recommendations using Banking Data in a Mobile Recommender System
Daniel Gallego Vico, Gabriel Huecas and Joaquín Salvachúa Rodríguez
Departamento de Ingeniería de Sistemas Telemáticos
Escuela Técnica Superior de Ingenieros de Telecomunicación, Universidad Politécnica de Madrid
Avenida Complutense 30, 28040, Madrid, Spain
Email: {dgallego, gabriel, firstname.lastname}@example.org
Abstract—The increasing adoption of smartphones by society has created a new area of research in recommender systems. This new domain is based on using location and context-awareness to provide personalization. This paper describes a model for generating context-aware recommendations in mobile recommender systems using banking data, in order to recommend places where the bank's customers have previously spent their money. In this work we have used real data provided by a well-known Spanish bank. The mobile prototype deployed in the bank's Labs environment was evaluated in a survey among 100 users, with good results regarding usefulness and effectiveness. The results also showed that test users had high confidence in a recommender system based on real banking data.
Keywords—Mobile Recommender; Context-aware; Banking data mining; User modeling; Customer segmentation
I. INTRODUCTION
In recent years the mobile world has evolved extremely quickly not only in terms of adoption, but also in technology. The result of these advances is a high adaptive personalization of mobile applications. These new capacities provided by smartphones give rise to the possibility of building enhanced mobile commerce applications using all the user data we have at our disposal by utilizing their context sensors.
On the other hand, the eBusiness world has also advanced due to this new form of personalization. Good examples of this evolution are recommender systems. Traditional recommender systems are usually based on subjective data or personal scores provided by the users (e.g., Google Places). In recent years, however, new platforms have based their recommendations on real purchases, and therefore these recommendations inspire more confidence (e.g., Amazon). This confidence in the results is always a key feature of any recommender system, but it is usually not easy to obtain such data from real purchases. If we think of banks, we will probably agree that they are among the best sources of trusted data in the world, as they hold a huge number of transactions from millions of users.
In this paper we present a mobile prototype that uses banking data to generate enhanced context-aware recommendations. This research project was carried out through a collaboration between our research group and one of the most important Spanish banks (its identity is not revealed here in order to comply with the bank's policies). This banking entity provided us with more than 2.5 million credit card transactions made during the year 2010, together with information about the 222,000 places and 34,000 anonymized customer profiles related to those transactions.
The rest of the paper is organized as follows: the next section reviews related work. Section 3 describes the motivations behind this research. Section 4 presents the model used to generate the context-aware recommendations. Section 5 provides the results of our experimental work based on the prototype deployment and the survey carried out. After that, in Section 6 we discuss the results achieved. Finally, we conclude with a short summary and directions for future research.
II. RELATED WORK
A large amount of research and practical applications exist on mobile computing, recommender systems, context-awareness (e.g. [1] or [2]) or location based services, as well as any combination of the above areas. For instance, Kenteris et al. recently surveyed the field of mobile guides [3]. Ricci also discusses the goals of context-dependent recommendations and their importance in mobile recommender systems in his recent survey [4].
However, as Yujie and Licai stressed in [5], one of the most important challenges for context-aware recommender systems is the lack of public datasets available for conducting experimental evaluations of the methods developed.
On the other hand, it is important to note that projects involving data mining of banking records usually focus on generating mined knowledge useful for bank staff, helping them make decisions about customer segmentation or financial products, as can be seen in [6] and [7]. We could not find research work in which banking data is used to generate recommendations for end users.
trust property to increase the users' confidence in the system recommendations. When we use a recommender system, we can usually think of several ways in which the reality related to the recommended items could be falsified or distorted. For example, the score of a restaurant recommendation from Google Places [10] is based on different user opinions. The final recommendation is thus based on subjective evaluations of each user and, in some cases, might not correspond to reality. In conclusion, one might not trust recommendations because of the doubtful origin of the data. In our case we accomplish this goal because the system inspires confidence, as the data used for recommending are real data from the bank.
IV. CONTEXT-AWARENESS GENERATION
As mentioned in Section 2, there has been much research in the area of generating context-awareness, and different definitions of the term context exist (e.g., [11], [12]). We follow the definition proposed by Dey [13]: “Context is any information that can be used to characterize the situation of an entity”. Specifically, the context dimensions on which our system is based are: Social, Location and User context.
In the following sections we are going to present how we generate and use them to improve the recommendations, describing in detail the adaptive recommendation process summarized by Figure 1.
A. Social Context
The social context is generated by a data mining process over the banking data, divided into three steps (Figure 2). These steps are not constrained to real-time execution, because all of them are carried out before recommendations are requested by the user.
In the first step (User Profile Clustering), the system takes the banking client profiles provided and applies a clustering segmentation to them. These data have to be cleaned before processing starts, so each record containing incorrect data (e.g., incorrect format, missing values, etc.) is ignored in the clustering process. It is important to point out that only a restricted set of information from its databases was provided by our Spanish banking partner, and that it was anonymized beforehand in order to avoid privacy problems and to comply with the bank's policies. For this reason, the client profiles provided by the bank are represented by the following straightforward $n$-tuple:
$$<\text{profileID}, \text{gender}, \text{age}, \text{averageExpensePerYear}>$$
(1)
In order to reduce the complexity of the banking data mining process, we first apply a Canopy clustering process [14] on these data and then a K-means clustering process [15] over the canopies generated, achieving a set of clusters based on the similarity of the banking clients. We have
called this set of clusters *Social Clusters*, because they gather together banking clients with similar profiles, forming social groups where the consumption model or tastes are related.
In the Transactions Assignment step, the system first assigns the credit card transactions to the corresponding cluster, given that there is an unequivocal relationship between a transaction and a client (through the *profileID* element, which indicates who made that credit card transaction). Every bank transaction is represented by the following $n$-tuple:
$$<\text{profileID}, \text{placeID}, \text{paymentAmount}, \text{time}, \text{date}>$$
(2)
After that, all the transactions are assigned to the social clusters and then, a second process identifies the places where the transactions were made. The places are represented by the following $n$-tuple:
$$<\text{placeID}, \text{category}, \text{name}, \text{address}, \text{latitude}, \text{longitude}>$$
(3)
With this second process, we create a map of places in which the relationships among places and clusters are shown, revealing in this way the consumption trends of every cluster.
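As a minimal sketch (field names and types are assumptions inferred from the tuples above, not from the authors' implementation), the three n-tuples (1)-(3) could be represented as:

```cpp
#include <string>

// n-tuple (1): anonymized client profile
struct Profile {
    long profileID;
    char gender;
    int age;
    double averageExpensePerYear;
};

// n-tuple (2): credit card transaction
struct Transaction {
    long profileID;    // who made the transaction
    long placeID;      // where it was made
    double paymentAmount;
    std::string time, date;
};

// n-tuple (3): place where transactions occur
struct Place {
    long placeID;
    std::string category, name, address;
    double latitude, longitude;
};
```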
Finally, the User’s Cluster Discovery process is activated when the user enters the mobile application for the first time. The system checks the profile information extracted from the user's banking account (an $n$-tuple like the one shown in (1)) in order to assign her to one of the existing social clusters. This is carried out by calculating the distance between the point that represents the user profile in the defined space and the centroids of every social cluster. A centroid is a virtual point corresponding to the average of all the real points in the cluster; that is, its coordinates are the arithmetic mean for each dimension separately over all the points in the cluster. Hence, the cluster whose centroid lies at the smallest distance from the user profile point is the social cluster assigned to the user.
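A minimal sketch of this assignment step (assuming Euclidean distance over numeric profile features normalized to comparable scales; an illustration, not the authors' code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A profile point in the clustering space; features are assumed to be
// normalized before the distance is computed.
struct Point { double age, expense; };

// Returns the index of the social cluster whose centroid is closest
// to the user's profile point (the User's Cluster Discovery step).
std::size_t nearestCluster(const Point& user,
                           const std::vector<Point>& centroids) {
    std::size_t best = 0;
    double bestD = INFINITY;
    for (std::size_t c = 0; c < centroids.size(); ++c) {
        double da = user.age - centroids[c].age;
        double de = user.expense - centroids[c].expense;
        double d = da * da + de * de;  // squared distance suffices for argmin
        if (d < bestD) { bestD = d; best = c; }
    }
    return best;
}
```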
After these steps we know the social context of the user because now she has been assigned to one of the social clusters generated. Every cluster has a common consumption model represented by the Clusters Trends Map and thus, we know which places are candidates to be recommended to her.
For instance, if a user belongs to the social cluster of 50 year-old women with an average expenditure of EUR 10,000 per year in credit card purchases, the set of possible places to recommend is made up of the places in which people in this category have paid with their credit cards in the year 2010 (as the data provided by our bank partner for this research correspond to that year).
**B. Location Context**
As noted in [11], location is currently one of the most important context parameters. Accordingly, after obtaining the social context of the user based on the banking information, the recommendation can be made more accurate by adding the location context dimension. Most of the time, end users are looking for place recommendations in their immediate locality (e.g., good restaurants nearby). Using the mobile device's context information as an input to the recommender system allows us to personalize recommendations based on the user's location.
Different mobile context devices are involved in the acquisition of the user’s location. If the user’s device is GPS-capable, the geo-location will be more accurate. If not, a less accurate but usually valid location can be obtained using network-based positioning technologies or Geo IP capabilities.
Once the system is aware of the user’s location, it is applied as a new input to filter the user’s cluster trends map, obtaining a geo-located user’s cluster trends map (Figure 3).
C. User Context
The final process to achieve the personalized recommendation takes into account the user context. This context is based on a set of parameters (e.g., the current time or the current activity of the user inferred from sensor data) and an input preference given by the user specifying the place category (one of the elements of n-tuple (3)) she wants recommended (e.g., restaurant, supermarket, cinema, etc.).
For instance, if the user wants a restaurant recommendation (category input), the mobile application could also use the current time information (e.g., lunch time) to filter the geo-located user's cluster trends map (Figure 4), considering only those restaurants that fit her current activity (e.g., walking). Continuing the example, the user would see only a ranking of the restaurants within walking distance of her location that the banking clients belonging to her social cluster have visited the most at lunch time. That ranking is generated by ordering those restaurants according to the number of customers who have previously visited each one.
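The final ranking step could be as simple as the following sketch (assuming the candidates have already been filtered by category, distance and time upstream; names are hypothetical):

```cpp
#include <algorithm>
#include <vector>

// A candidate place from the geo-located cluster trends map.
struct Candidate {
    long placeID;
    int visitCount;  // customers of the user's cluster who visited it
};

// Orders candidates so the most visited places are recommended first.
void rankByVisits(std::vector<Candidate>& candidates) {
    std::sort(candidates.begin(), candidates.end(),
              [](const Candidate& a, const Candidate& b) {
                  return a.visitCount > b.visitCount;
              });
}
```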
V. EVALUATION AND RESULTS
To evaluate the system, a prototype was developed and deployed in a real environment belonging to our bank partner, called Labs. The primary aim of Labs is to allow the deployment of new research and development projects created in the bank, in order to collect feedback from a set of bank clients registered in this environment.
Using this platform, we first evaluated the social clusters obtained after applying the previous processes to real banking data. Then, we set up an online survey with two scenarios using a mobile prototype developed for Android, in order to evaluate user acceptance.
A. Social Clusters
The banking data provided by our bank partner to create the social clusters consisted of more than 2.5 million credit card transactions made during the previous year, providing information on 222,000 places and 34,000 anonymous customers’ profiles from customers between 48 and 55 years old. All these data were provided by the bank following the n-tuples (1), (2) and (3).
Figure 5 illustrates the social clusters that emerged from the clustering process over the banking data. As can be seen, the average credit card expense in one year, the average age, and the size of every social cluster (given by the diameter of the circles) are shown.
B. User Acceptance
We have analyzed the feedback provided by 100 bank customers registered on the Labs platform where the system is deployed. This evaluation was carried out using an online questionnaire based on two scenarios, the first focused on restaurant recommendations and the second on supermarket recommendations. Both scenarios show a simple case in which, after a user request, she receives a recommendation composed of several places corresponding to the previous categories. Figure 6 depicts a screenshot of a lunch recommendation provided by the Android mobile prototype, where the location is provided by the mobile context sensors and the user context information is provided by the user beforehand.
Therefore, after a brief experience with the application in those scenarios, the test users were asked to judge several statements related to various properties on a 5-point scale, where 1 means “totally disagree” and 5 “totally agree”. The statements were of the form: “The application is [property evaluated]”. Additionally, users had free-text input fields to
make comments and annotations. The results are illustrated with average values in Figure 7.
VI. DISCUSSION
First of all, if we consider the distribution of customers across the social clusters (Figure 5), the results confirm intuition: the clusters with fewer people are those that spent more money in credit card transactions, because people of high economic class are rarer than those of medium and low economic class, while the larger clusters are those that spend less money and are also composed of older people, who are less used to paying with credit cards than younger people.
Regarding the results related to user acceptance, they reveal a very positive attitude towards the mobile recommender system shown in Figure 6, as it obtained high average scores in all the properties analyzed.
On the other hand, with regard to the way we manage explanations in our recommender system, it achieves some of the most important criteria recently set out by Tintarev and Masthoff in [9]. Specifically, “transparency” (i.e., explaining how the system works) is achieved through the explanation provided for the recommended places, as the application informs the user that the recommendations have been generated from the purchases of other bank customers like her. The “trust” (i.e., increasing users' confidence in the system), “effectiveness” (i.e., helping users make good decisions) and “satisfaction” (i.e., increasing ease of use or enjoyment) criteria are achieved if we take into account the high values of the “reliable”, “effective” and “useful” properties, respectively, evaluated in the survey (Figure 7).
The statistical outcome is also supported by the comments written by the test users during the survey process. For example: “Having a recommender system in my smartphone available anywhere, anytime for searching any kind of place is very usefull.”, “I think that the most important feature for me is the confidence in the results as they come from real data of my own bank” or “I will really appreciate to have such a kind of application in my mobile phone in my daily life.”.
However, some of them remarked on the privacy issues involved in deploying this system in a real commercial setting, as they did not want their personal data put in danger. Although the system is currently deployed in the bank's Labs environment (a closed, secure environment), this is an important issue that will have to be studied if the system is deployed outside of it in the future, given the facts pointed out by Ohm in [16], who showed that in some cases anonymization is not enough to keep privacy promises.
VII. CONCLUSION AND FUTURE WORK
In this paper, we have presented a method for generating context-aware recommendations using banking data in mobile recommender systems. As we have shown, using this kind of data, based on real people's actions and banking history, allows us to increase confidence in the personalized recommendations generated, because no subjective data is used in the recommendation process. This feature of our system provides an essential advantage over other recommender systems, which have the aforementioned problem of being based on non-reliable data.
Current and future work includes the evolution of the current prototype into a complete mobile application. This would allow us to evaluate the system with users really
interacting with a mobile device in realistic scenarios related to the users’ daily life. Of course, the long run aim of our banking partner is to launch a real product for commercial exploitation based on our system. Thus it will be necessary to study the privacy problems related to that real deployment.
On the other hand, we want to analyze the impact of using proactive techniques in our recommendation process. Proactivity means that the system pushes recommendations to the user when the current situation seems appropriate, without an explicit user request being needed. We are therefore now working on a model for achieving proactivity in mobile context-aware recommender systems ([17] and [18]) that could be integrated into the context-generation model presented in this paper.
Another open issue that could be studied in relation to enhancing the recommendation process is the one described in [19]: creating multiple personalities in the system, each with its own personalized profile, like a “what kind of customer do you want to be today?” feature. As a result, a user of the system could have several profiles with different social clusters associated. This is an interesting feature if we bear in mind that people sometimes pay with their credit cards to buy gifts or services for friends or family who usually do not have the same tastes.
ACKNOWLEDGMENT
The authors would like to thank the bank Labs group for providing their banking data and their expertise on the banking domain without which this research could not have been possible.
REFERENCES
[1] M. Baldauf, S. Dustdar, and F. Rosenberg, “A survey on context-aware systems,” *Int. J. Ad Hoc Ubiquitous Comput.*, vol. 2, pp. 263–277, June 2007. [Online]. Available: http://portal.acm.org/citation.cfm?id=1356236.1356243
[2] G. Adomavicius and A. Tuzhilin, “Context-aware recommender systems,” in *Proceedings of the 2008 ACM conference on Recommender systems*, ser. RecSys ’08. New York, NY, USA: ACM, 2008, pp. 335–336. [Online]. Available: http://doi.acm.org/10.1145/1454008.1454068
[3] M. Kenteris, D. Gavalas, and D. Economou, “Electronic mobile guides: a survey,” *Personal Ubiquitous Comput.*, vol. 15, pp. 97–111, January 2011. [Online]. Available: http://dx.doi.org/10.1007/s00779-010-0295-7
[4] F. Ricci, “Mobile recommender systems,” *International Journal of Information Technology and Tourism*, 2011.
[5] Z. Yujie and W. Licai, “Some challenges for context-aware recommender systems,” in *Computer Science and Education (ICCSE), 2010 5th International Conference on*, aug. 2010, pp. 362 –365.
[6] P. Atace, “Mining the (data) bank,” *Potentials, IEEE*, vol. 24, no. 4, pp. 40 – 42, 2005.
[7] S. Ren, Q. Sun, and Y. Shi, “Customer segmentation of bank based on data warehouse and data mining,” in *Information Management and Engineering (ICIME), 2010 The 2nd IEEE International Conference on*, 2010, pp. 349 –353.
[8] K. Swearingen and R. Sinha, “Interaction design for recommender systems,” in *In Designing Interactive Systems 2002. ACM. Press*, 2002.
[9] N. Tintarev and J. Masthoff, “Designing and evaluating explanations for recommender systems,” in *Recommender Systems Handbook*, F. Ricci, L. Rokach, B. Shapira, and P. B. Kantor, Eds. Springer US, 2011, pp. 479–510.
[10] Google, “Places,” 2011. [Online]. Available: http://www.google.com/hotpot
[11] G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, and P. Steggles, “Towards a better understanding of context and context-awareness,” in *Proceedings of the 1st international symposium on Handheld and Ubiquitous Computing*, ser. HUC ’99. London, UK: Springer-Verlag, 1999, pp. 304–307. [Online]. Available: http://portal.acm.org/citation.cfm?id=647985.743843
[12] M. Bazire and P. Brézillon, “Understanding context before using it,” in *CONTEXT’05*, 2005, pp. 29–40.
[13] A. K. Dey, “Understanding and using context,” *Personal Ubiquitous Comput.*, vol. 5, pp. 4–7, January 2001. [Online]. Available: http://dx.doi.org/10.1007/s007790170019
[14] A. McCallum, K. Nigam, and L. H. Ungar, “Efficient clustering of high-dimensional data sets with application to reference matching,” in *Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining*, ser. KDD ’00. New York, NY, USA: ACM, 2000, pp. 169–178. [Online]. Available: http://doi.acm.org/10.1145/347090.347123
[15] T. Kanungo, D. Mount, N. Netanyahu, C. Piatko, R. Silverman, and A. Wu, “An efficient k-means clustering algorithm: analysis and implementation,” *Pattern Analysis and Machine Intelligence, IEEE Transactions on*, vol. 24, no. 7, pp. 881 –892, Jul. 2002.
[16] P. Ohm, “Broken promises of privacy: Responding to the surprising failure of anonymization,” *Social Science Research Network*, vol. 57, pp. 1–64, 2009.
[17] W. Woerndl, J. Huebner, R. Bader, and D. Gallego-Vico, “A model for proactivity in mobile, context-aware recommender systems,” in *the 5th ACM International Conference on Recommender Systems*, October 2011.
[18] D. Gallego-Vico, W. Woerndl, and R. Bader, “A study on proactive delivery of restaurant recommendations for android smartphones,” in *the International Workshop on Personalization in Mobile Applications (PeMA) at 5th ACM International Conference on Recommender Systems*, October 2011.
[19] L. F. Cranor, “‘i didn’t buy it for myself’ privacy and ecommerce personalization,” in *Proceedings of the 2003 ACM workshop on Privacy in the electronic society*, ser. WPES ’03. New York, NY, USA: ACM, 2003, pp. 111–117. [Online]. Available: http://doi.acm.org/10.1145/1005140.1005158
Web Personalization
Implications and Challenges
Ahmad Kardan, Amirhossein Roshanzamir
Department of Computer Engineering and IT
Amirkabir University of Technology
Tehran, Iran
email@example.com, firstname.lastname@example.org
Abstract — Companies are under pressure to provide tailor-made products or services that better match customers' preferences. Personalization based on web mining is a significant tool for accommodating this trend by extracting patterns of customer preferences and connecting them directly to the production line and supply chain. However, with the exception of companies like Amazon, Dell, Toyota and most airlines, which have solid strategies and large appeal, it is challenging for many companies to develop a concrete and cost-effective approach to personalization. This paper studies the dimensions of personalization and addresses the business implications and challenges of web personalization. It further suggests a novel approach for developing web personalization in small and medium-sized enterprises, using the Frequent Flyer Programs of the airline industry to justify and guide investment.
Keywords-web personalization; build-to-order; customer relationship management; recommender system; frequent flyer program.
I. INTRODUCTION
Web personalization is simply defined as the process of gathering and storing data about a visitor's interactions and navigation on a website in order to assemble and deliver a tailor-made web experience to a particular user. The delivery can range from making the web site more appealing to anticipating the needs of a user and providing customized and relevant information. The experience can be as simple as browsing a web site or as involved as trading stocks or purchasing a computer. Effective personalization can be achieved in three steps: identifying, retrieving and assembling. The first step starts by collecting all available data. The user's data falls into two categories: personal data such as age, gender and demographics, and behavioral data such as usage, click stream and time [1]. In the next step, the web site forms the visitor's profile and utilizes intelligent algorithms to analyze and mine the data in order to extract statistics and discover correlations between web pages and user preferences. In the final step, the web site delivers the right information and/or produces customized products and services to meet each user's requirements, or assembles the page best matching his or her preferences. This paper reviews the dimensions of personalization and then addresses the business implications and challenges of web personalization. It further suggests a novel approach for developing web personalization in small and medium-sized enterprises, explaining a mobile portal as a case study. A Frequent Flyer Program from the airline industry is used to justify and guide investment in this approach.
II. DIMENSIONS OF PERSONALIZATION
For practical applications, we suggest a model dividing personalization technologies into two dimensions, horizontal and vertical. The horizontal dimension enables a company to adapt the attributes and appearance of products or services to the customer's taste, whereas the vertical dimension enables a company to interact with customers in order to customize the configuration, performance and quality of products or services according to the customer's requests.
A. Solid
This is a kind of personalization in which the user has no control to change or modify the product, nor is there any interaction between the website and the user. The user or customer simply selects the product and then uses it. Digital music and video stores and most online retail sites that sell specific, predetermined products are examples of this type.
B. Superficial
This is a kind of personalization in which the user has only limited control over the appearance and presentation of the products. Many portal sites, such as Yahoo, Google and MSN, allow users to personalize the page with selected news, local weather forecasts, and other features. For instance, a web news portal may provide news and articles about education and business school rankings to customers who are avid MBA fans, and sports news to sportsmen. Digital greeting card stores are another example, where the user can select the type and structure of a greeting card and further customize it with individual messages.
C. Evident
This is a kind of personalization in which the user has no control to modify the product; however, he or she can communicate with the supplier and state specific interests while receiving recommendations and reviewing the buying patterns of other customers with similar preferences. The customer can further place an order for his or her preferred type or version at a predetermined price. Amazon is an example of this kind, in which users can receive recommendations and review comments from other customers. They can also request the digital version of a book, and once the total requests reach a certain number, Amazon will fulfill the request. This is, in fact, a manifestation of one of Amazon's strategies, titled “obsess over customers”: start with the customer and then work backwards to develop product and service solutions [2]. eBay can be considered another example of evident personalization, in which consumers and businesses engage in buying and selling a variety of goods and services worldwide. The eBay online portal facilitates these ventures by providing auction-style listings in which buyers and sellers can set and adjust their specific requirements and a predetermined price.
D. Collaborative
This is the ultimate level of personalization, in which the user has control in designing and building the product according to his or her preferences. The user usually selects modules that are literally building blocks, customizing a product by assembling various combinations of modules. In electronics, examples of modules would include the processor, motherboard, memory, disk drives and software. Dell Computers created a unique process within the industry and pioneered the build-to-order computer business. The process was long, required a great change in the thinking of many firms and many people within Dell, and has taken 20 years to get where it is [3]. Toyota, for example, introduced Buyatoyota.com as a major step towards integrating its unique Just In Time (JIT) system with personalization and customers' specific requirements. This site guides the customer through the steps of selecting a model, viewing and searching options, assembling the features and specifications, choosing the color and accessories, and finally locating a local dealer that can provide that choice of car in order to get a quotation and arrange finance. Other car manufacturers may adopt the same approach to building their cars in the near future.
| Vertical Personalization | Evident (e.g., Amazon, eBay) | Collaborative (e.g., Dell, Toyota) |
|--------------------------|------------------------------|------------------------------------|
|                          | Solid (e.g., online music)   | Superficial (e.g., e-cards, e-news)|

Fig 1. Dimensions of Personalization
III. PERSONALIZATION-BROADER STRATEGY
In fact, personalization must be an integral part of a broader strategy, connected to Customer Relationship Management (CRM) and recommender systems. CRM is viewed as a strategy to attract, grow and retain customers. Personalization is an approach that can help attract visitors/customers to a website, keep them there, and bring them back. Since the very nature of web tools encourages interactivity between people and organizations, the topics of personalization and CRM are complementary to each other. Both have the ability to provide the right information or content (e.g., products, services and data) to the customer at the right time and place [4].
The explosive growth of e-commerce and online environments has overwhelmed users with countless options to consider. They simply have neither the time nor the knowledge to personally consider and evaluate all of these options. Recommender systems represent a prominent class of personalization applications that aim to support online users in deciding which products or services meet their requirements. An advanced recommender system, such as the one implemented by Amazon Inc., adds information about complementary products (cross-selling), in the form “other customers that bought X also bought Y and Z”. Today, recommender systems have become one of the most powerful and popular tools in electronic commerce, since they help turn users into customers, increase cross-selling and build loyalty [5].
IV. FREQUENT-FLYER PROGRAM
A Frequent Flyer Program (FFP), a loyalty program offered by many airlines, was first created by Texas International Airlines in 1979. An FFP can be considered an interesting manifestation of evident personalization that integrates personalization with CRM and a recommender system. Passengers can typically enroll in an airline's FFP and accumulate miles based on the distance flown on that airline or its partners. Accumulated miles allow members to redeem tickets, upgrade service class, or obtain free or discounted car rentals, hotel stays, merchandise, or other products and services through partners. The personalization features include, but are not limited to, the possibility of selecting seats through the online portal and requesting special meals. Members can manage their account online, buying tickets and checking in online while receiving personalized news and special offers based on their preferences and destinations flown. In a recent empirical study of FFPs, Drèze and Nunes argue that successful redemption of miles fosters the reengagement of passengers and motivates them to fly more frequently. Therefore, loyalty programs that offer people multiple redemption opportunities must balance the attractiveness of a reward with an appropriate level of difficulty in attaining it [6]. This research further shows that loyalty is better fostered when the reward is not too difficult to achieve; rather, it should be inspiring and challenging to cash out, so that when someone does, he or she feels successful.
V. IMPLICATIONS AND CHALLENGES
Personalization has gone through different development phases since the early 2000s. It initially started as a tool to attract and retain visitors by giving them a chance to explore more of the site; advertising and promoting products and services were also part of this phase. The next phase attempted to increase the turnover of customer spending by offering more expensive or similar products. Today, personalization is increasingly used as a means to speed up the delivery of the right information to a visitor in order to customize products and services that meet and exceed his or her requirements. Companies that systematically gather information about their customers, product attributes and purchase contexts, and integrate it with behavioral segmentation such as demographics, attitudes and buying patterns, can make more sophisticated offers and identify customers who are most likely to defect [7]. This personalization strategy ultimately increases the number of regular customers and the amount of each transaction, and has made personalization a required and expected feature of an e-business. For example, Timberland Boot Studio, by allowing customers to select different leathers and colors, gets three times as many hits on its customized boots [8].
Without challenging the enormous potential and contribution of personalization technology, the question remains whether personalization really works. The honest answer is: it depends. On the one hand, the benefits can be significant, not only for the website visitor (who is offered a more interesting, useful and relevant web experience) but also for the provider (enabling one-to-one relationships and mass customization, and improving website performance). On the other hand, personalization requires extensive and precise data that are neither easily obtainable nor efficiently mined and analyzed. Consequently, in many cases the output is not very successful at understanding and satisfying user needs and goals.
First and foremost, the ethical dimension of personalization needs to be addressed, since online users' navigation is recorded for building and updating user profiles, and this can put the privacy of users in jeopardy. At the same time, users are becoming more vigilant and have higher expectations. They are not happy with the idea of being stereotyped without their consent. Users also expect to be treated equally and to have enough control and choices. The cost of the technology, from intelligent software to storage hardware, as well as the time spent, are also critical factors that must be justified in the long run. Schneider and Bowen [9] proposed that customer satisfaction with a service originates in the handling of three basic customer needs: security, justice and self-esteem. Building on their three-needs conceptualization, we suggest six major implications and challenges of personalization, as follows:
A. Security and Privacy
There are implications for users' privacy and the security of their information, since much of personalization entails intensive collection and use of personal information. The terms "privacy" and "security" are often used interchangeably, but there is an important distinction between the two. Security refers to the ability of the user or site to protect information against unauthorized third parties by preventing them from accessing, using or modifying it, whereas privacy is the quality of being secluded from the presence or view of others [10]. Privacy further refers to the ability and right of users to decide and control what happens to their data and profile. Given the importance of data acquisition to personalization, it is crucial that sites identify the privacy preferences of users and their relationship to satisfaction with the site. This includes the users' level of acceptance of how the data is acquired, whether the benefits of the approach outweigh the privacy risks, and whether the site will disclose the information to third parties [11]. Users must at least explicitly feel connected to the information in order to start benefiting from the service and its features. Once users see benefits, they might be willing to surrender additional information and be more transparent, provided they know what will be done with it. In addition, the site must take measures against factors that may be outside the knowledge or understanding of the user, such as the sufficiency of the security mechanisms protecting any data provided. The user may often be unaware of what data is being given away and how it is stored and secured in the case of passive collection of information. Therefore, the site must encrypt passwords and sensitive user data and undergo external testing to ensure the security and protection of the data. A commonly recommended practice is to declare a privacy statement (or disclosure statement) that describes exactly what kind of data is gathered from users, together with the policies on how it is used and shared.
B. Fair dealing and Integrity
These are critical issues in dealing with the customers and visitors of a website. All visitors expect to be treated equally in terms of the information, prices and services provided. When it comes to personalization, there are of course many occasions when a company is obliged to act discriminatorily based on the time, effort and money invested by customers, as well as their level of loyalty and previous transactions. In such cases, the reason for differentiation or privileged treatment must be publicly announced to manage users' expectations of equality and avoid misunderstandings.
A good example of implementing this discriminatory practice is the airline industry, which employs different prices based on booking time and publicly announces this. Passengers might have paid different fares, but upon boarding they receive similar treatment. Today, fair dealing, integrity and justice are becoming critical issues, and a few other factors, such as keeping commitments and flexibility in dealing with unusual requests, have emerged accordingly. Personalization has the capacity to reinforce all of these, depending on the nature of the business and the
dimensions of personalization. It can also meet reasonable yet abnormal requests by keeping records and commitments.
C. Self-esteem and Sense of worth
Personalization can provide a unique possibility to maintain and enhance visitors' self-esteem and sense of worth by providing a user-friendly platform in which people feel in control, important and comfortable while having enough choices. Maslow claimed that the need for self-esteem can be met through mastery or achievement in a given field, or through gaining respect or recognition from others [12]. As users become more vigilant, they might find personalization more pleasant and appealing if they could exercise more control over it. We can imagine a scenario in which an online bookseller asks a visitor, "Would you like us to add this title to our growing knowledge of your interests to lead and direct our future recommendations?" The customer can select "yes" and enjoy receiving recommendations if he or she is an avid reader of the same subject. Alternatively, he or she can say "no" and be spared the time of receiving and reading these recommendations [13]. Likewise, a website with personalization capabilities that can discover and analyze patterns in customer navigation can fulfill this need by saying "We have noticed you frequently check football news; would you like us to update this news on your home page?" In both of these scenarios, regardless of the visitor's response, we are treating visitors as unique people by respecting their interests and self-esteem while giving them some control over the displayed content. Even a simple greeting message indicating the visitor's name can enhance the customer's feeling of self-worth. This approach, which is simple yet powerful, can address the privacy requirements of vigilant users, since personalization is done only after receiving their consent.
D. Cost
Successful companies like Google, Amazon, Dell and most airlines have spent millions on their web portals in order to personalize their products and services for their customers. Personalizing online offerings is considered less costly than customizing physical products because of the "digital" nature of information goods; with advanced information technology, online pages can be manipulated easily to suit individual customers' needs. But it all depends on the number of visitors and customers, as well as the nature of the business. Each individual needs a profile with details of their preferences, and every time he or she goes online and visits the site, the profile needs to be updated. Moreover, intelligent algorithms must be applied in order to extract the most suitable, customized products or services to offer. There will also be interrelated links between the preferences of users with similar purchasing behavior. Although the price of new technology is falling day by day, developing new algorithms and using thousands of servers to address every customer's needs is time-consuming and costly. It was found that operating a personalized web site can cost more than four times as much as operating a "comparative dynamic site", and most sites that have deployed personalization have not realized adequate returns on their investments [14]. As such, there must always be a balance between the cost spent and the potential income, which is likely to be generated by economies of scale and by addressing the core business strategy. Google, for example, has almost a billion visitors every day using its search engine [15]; therefore, it can easily afford heavy investment in technologies such as collaborative filtering, data mining and click-stream analysis in order to customize its offerings at the individual level. Amazon likewise has many thousands of customers, and personalization is an important part of its sales strategy.
E. Timing
Web personalization enables online websites to customize their content by capturing the real-time preferences of individual visitors through web mining techniques. The next step is to adapt the website content to meet the individual's specific requirements. Yet there is a trade-off between the quality of a recommendation and the probability of it being accepted: although the content of the web site improves over the course of a session to match the visitor's preferences, the probability of the visitor using and enjoying the fresh content diminishes over the course of the session [16]. These effects suggest that online portals have limited time to capture and mine visitor data in order to produce the most tailor-made content or products. In fact, consumers prefer early presentation that eases their selection process, whereas adaptive systems can produce better personalized content if they are allowed to collect more consumer clicks over time [16].
Therefore, personalization needs to be efficient enough to keep the balance between the time spent by the user and the extent to which his or her online behavior can be modeled.
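To make the trade-off concrete, the following sketch computes the expected payoff of presenting a recommendation after a given number of observed clicks, assuming, purely for illustration, that recommendation quality rises with observed clicks while acceptance probability decays. The two curves and their parameters are our own assumptions, not taken from [16].

```python
# Illustrative timing trade-off: quality grows with observed clicks,
# willingness to accept a recommendation decays over the session.
import math

def quality(clicks, k=0.3):
    """Quality of personalized content after observing `clicks` clicks."""
    return 1 - math.exp(-k * clicks)

def acceptance(clicks, d=0.15):
    """Probability the visitor still acts on a recommendation this late."""
    return math.exp(-d * clicks)

# Expected payoff of presenting the recommendation after n clicks.
payoffs = {n: quality(n) * acceptance(n) for n in range(1, 21)}
best = max(payoffs, key=payoffs.get)
print(f"present after ~{best} clicks (expected payoff {payoffs[best]:.2f})")
```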
F. Agility
With advances in tracking and database technologies, companies can better understand and evaluate their customers' requirements. However, they need to build up the capacity to rapidly translate this understanding into appealing products and services. Agility, or nimbleness, is the capability to swiftly adapt to changes, and can be achieved in three distinct ways: operational, portfolio and strategic [17]. Operational agility is illustrated by the success of Toyota and Dell in integrating the supply chain and directly linking the customer to the production line. While sharing detailed and reliable real-time market data, Dell and buyatoyota.com assemble the product only after receiving the order (build-to-order), and this strategy increases visibility into demand and the flow of goods. The primary advantage is sensitivity to changes in customer demand and the possibility of mass customization.
Toyota, for example, outsources about 70% of its components, which is why the Toyota Production System (TPS) requires serious investment in building an agile network of highly capable suppliers that are truly integrated into the supply chain [18]. In this scenario, the user goes through the process of designing a car by selecting features and components online. Once the order is registered and the car dealer and type of finance are arranged, parts and components are delivered a few times a day from different suppliers to the Toyota factory, and the car is assembled specifically and solely to the user's online order.
Although a company’s growth depends on finding and retaining customers, its success goes far beyond what is on the web page and depends on internal operations (the back end) and its relationships with suppliers and other business partners [19]. The true power of such operational agility requires a solid information technology infrastructure and rests on ingenuity and electronic supply chain management.
VI. GIVE UP OR BUILD UP
Today, companies can extract valuable information hidden in their web sites, created as a result of visitors' interactions and browsing. Facing increasingly sophisticated customers, companies are under pressure to provide tailor-made products or services that better match customers’ preferences.
The major challenge is cost pressure and justifying the investment through economies of scale. Except for giant companies like Amazon, Facebook, Dell and most airlines, which have broad appeal and thousands or even millions of customers, it is challenging for small and medium-sized enterprises to find cost-effective approaches to personalization. The appropriate solution for these companies could be to start their portal on a Solid personalization basis and gradually develop it horizontally or vertically to Superficial and then Evident personalization, based on the nature of the business and the improvement of the business model.
VII. WEB PERSONALIZATION METHODOLOGY
Commonly used to enhance customer service and e-commerce sales, personalization is sometimes referred to as one-to-one marketing, because the enterprise's web page is tailored specifically to each individual consumer. The main purpose of every business is to make money, either by selling products or services to new customers or by selling more to existing ones [21]. Miller [20], in his book, argues that the average value of a customer is 8 to 10 times his or her initial purchase, depending on the research he and his colleagues reviewed. He further argues that the cost of attracting a new customer is 5 to 6 times the cost of keeping an existing one.
As small and medium enterprises face a serious challenge in creating traffic on their sites and increasing the number of visitors, the best strategy is to focus on existing customers by encouraging them to repeat orders for similar or different products and services and/or through cross-selling. The frequent flyer program is a proven tool that can be duplicated in any online shop to grant points to customers based on the value of their purchases. These points can later be redeemed as free offerings or discounts, providing incentives for repeated orders. By exploiting such a policy, a company can increase market share and improve profitability. Saving time and reducing the costs of sales and marketing, as well as the economical use of resources, are other indirect benefits of what can be called a point plus program. All of these will ultimately justify investment in web personalization. Business owner Rosalind Resnick also endorses reward programs, saying "It's nice to know that every dollar we spend to grow our business can also be a point or air mile we can use to celebrate our independence" [22].
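A minimal sketch of such a point plus program follows; the earning rate, redemption value and customer identifiers are illustrative assumptions, not a prescription.

```python
# Grant points in proportion to purchase value; redeem them as a discount.
EARN_RATE = 1          # points granted per currency unit spent (assumed)
POINT_VALUE = 0.01     # currency value of one point at redemption (assumed)

balances = {}          # customer id -> accumulated points

def record_purchase(customer, amount):
    """Grant points for a purchase and return the points earned."""
    earned = int(amount * EARN_RATE)
    balances[customer] = balances.get(customer, 0) + earned
    return earned

def redeem(customer, order_total):
    """Apply the customer's points as a discount, capped at the order total."""
    points = balances.get(customer, 0)
    discount = min(points * POINT_VALUE, order_total)
    balances[customer] = points - int(discount / POINT_VALUE)
    return order_total - discount

record_purchase("c42", 250.0)   # earns 250 points
print(redeem("c42", 30.0))      # pays 27.50 after a 2.50 discount
```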
In order to increase customer satisfaction and the likelihood of repeat visits, there must be a reason to return, such as membership. When people are members rather than shoppers, they feel connected and privileged. Therefore, the first step in formulating a solid personalization model is to encourage users to become members in order to access special contents and/or perform special functions on the site while enjoying superficial personalization. Once a closer relationship with the users is developed, the company can gradually address the other challenges of personalization indicated in Fig. 2, move to the evident level, and build an interactive relationship to meet and exceed users' requirements. The collaborative level requires an ingenious infrastructure and an agile supply chain, which certainly goes beyond information technology.
A mobile portal, which aggregates and provides contents and services for mobile users, is a good example of how the above model works. These portals are specifically designed for m-commerce, with a short menu of popular topics like news, sports, email, travel information and finance, and very few graphics. They have recently started to provide downloads, health, dating and job information as well [23]. In fact, a mobile portal provides an intimate one-to-one experience for a user who visits the portal for a specific purpose, like checking sports scores or looking to buy a specific item in nearby shops [24]. Users can typically become members by paying a monthly fee to access basic information. This is superficial personalization in the above model. As users start to seek specific services and information, the portal needs to address the five challenges of web personalization, in the same order, to meet users' requirements for evident personalization. First and foremost, privacy and security are critical, since mobile systems tie highly personal information to location and contacts. We normally do not share our mobile devices with anybody; therefore, anything we do with our mobiles is traceable to us. This may be helpful for marketing; yet mobile advertising is nowadays pursued with caution, after obtaining the customer's consent. This can be done, for example, by offering long-distance calling in exchange for viewing the ads offered by mobile portals. Furthermore, mobile portals can customize advertising messages for specific groups of customers based on their location, while knowing their preferences and buying habits. Integrity, fair dealing and self-growth should be the essential cornerstones of these offers to encourage quick decisions, and they must be supported by a loyalty program that rewards customers with points. Cost and timing are also hot topics for mobile portals. In addition to basic monthly fees, most mobile portals charge per-service fees for premium and customized contents such as downloads and weather forecasts. The nature of m-commerce, in which users have little time to navigate the contents and wait for pages to load, also demands that content be produced in a much shorter time. We can imagine, in the not-so-distant future, Toyota and a few other car manufacturers allowing their customers to assemble and buy their favorite cars through mobile portals.
VIII. CONCLUSION
Personalization based on web mining has received a great deal of interest in business as a powerful tool to improve sales and retain customers, since it can increase customer satisfaction by providing tailor-made products and services. Web personalization can also help the company implement a build-to-order policy by connecting customers' requirements and preferences directly to the production line and supply chain. As a result, the company can benefit from cost savings and efficient utilization of resources. Therefore, collaborative personalization can be considered a vision for any small and medium-sized enterprise that wishes to integrate its online portal with a build-to-order strategy for sales and marketing. This is, of course, a long-term plan, which must be supported by integrating personalization into a loyalty program that rewards customers with points for a wide range of daily interactions with the company.
The reward scheme should be smartly designed to provide users with both psychological and financial benefits in order to encourage repeated orders and/or cross-selling. Together, these two can create solid momentum to justify and guide investment in personalization.
REFERENCES
[1] Kardan, A., Fani, M.R. and Mohammadian, N. (2011), Purposing an Architecture for Learner Modeling based on Web Usage Mining in e-Learning Environment, The 5th Data Mining Conference, Dec. 14-15, 2011, Tehran, Iran
[2] Treanor, T. (2010), Amazon: Love Them? Hate Them? Let's Follow the Money, June 2, 2010, Springer Science + Business Media, LLC, pp. 124–125
[3] Kumar, S. and Craig, S. (2007), Dell, Inc.’s closed loop supply chain for computer assembly plants, Information Knowledge Systems Management; 2007, Vol. 6 Issue 3, pp. 197-214
[4] Jackson, T.W. (2007), Personalization and CRM, Journal of Database Marketing & Customer Strategy Management (2007) 15, pp. 24–36
[5] Velaquez, J. D. and Palade, V. (2007), Building a knowledge base for implementing a web-based computerized recommendation system, International Journal on Artificial Intelligence Tools, Oct2007, Vol. 16 Issue 5, pp. 798
[6] Drèze, X. and Nunes, J. (2011), Recurring Goals and Learning: The Impact of Successful Reward Attainment on Purchase Behavior, Journal of Marketing Research (JMR), Apr2011, Vol. 48 Issue 2, pp. 268-281
[7] Davenport, T.H., Mule, L.D., and Lucker, J. (2011), Know What Your Customers Want Before They Do, Harvard Business Review, Dec2011, Vol. 89 Issue 12, pp. 84-92
[8] Turban, E. and Volonino, L. (2010), Information Technology for Management, John Wiley & Sons, Inc.; 7th Edition
[9] Schneider, B. and Bowen, D. (1999), Understanding Customer Delight and Outrage, Sloan Management Review, Fall 1999; reprinted in the SloanSelect collection Lessons in Customer Service, Winter 2011, p. 4
[10] Becker, M. and Arnold, J. (2010), Mobile Marketing for Dummies, Wiley Publishing Inc.; 1st Edition
[11] Getek, R.C. (2010), A usability model for web-based personalization based on privacy and security, PhD dissertation ; University of Maryland, Baltimore County.
[12] www.normemma.com/articles/armalow.htm (accessed on Dec. 18, 2011)
[13] Nunes, P. and Kambil, A. (2001), Personalization? No Thanks., Harvard Business Review; Apr2001, Vol. 79 Issue 4, pp. 32-34
[14] Jupitermedia Corp. (2003), Beyond the Personalization Myth: Cost-effective Alternatives to Influence Intent
[15] Jarvis, J. (2009), What Would Google Do?, HarperBusiness; 1st Edition
[16] Ho, S.Y., Bodoff, D. and Tam, K.Y. (2011), Timing of Adaptive Web Personalization and Its Effects on Online Consumer Behavior, Information Systems Research; Sep2011, Vol. 22 Issue 3, pp. 660-679
[17] Sull, D. (2009), How to Thrive in Turbulent Markets, Harvard Business Review; Feb. 2009, Vol. 87 Issue 2, pp. 78-88
[18] Liker, J.K. (2004), The Toyota Way, McGraw-Hill; 1st Edition
[19] Turban, E., Lee, J.A., King, D. , Liang, T.B. and Turban, D. (2010), Electronic Commerce a Managerial Perspective, Pearson Prentice Hall; 6th Edition
[20] Miller, R. (2008), That is customer focus, BookSurge Publishing
[21] Hess, E. (2010), Smart Growth, Columbia Business School Publishing; 1st Edition
[22] Resnick, R. (2010), Fly Higher, Entrepreneur; Sep. 2010, Vol. 38
[23] Becker, M. and Arnold, J. (2010), Mobile Marketing For Dummies, Wiley Publishing, Inc.: 1st Edition
[24] Dushinski, K. (2009), The Mobile Marketing Handbook, Information Today, Inc.; 1st Edition (January 19, 2009)
New Service Development Method for Prosumer Environments
Ramon Alcarria, Tomás Robles, Augusto Morales, Sergio González-Miranda
ETSI Telecomunicación
Technical University of Madrid
Madrid, Spain
{ralcarria,trobles,amorales,email@example.com
Abstract—Prosumer environments are characterized by user participation in the service creation and provision processes. These services, which have become increasingly important in recent years, have some peculiarities that differentiate them from services that follow the traditional supplier-customer model. However, there is little research on how to adapt existing business models to harness the prosumer's value, or on the implications of this new role for the company's business model. In this paper we design a methodology for the development of prosumer services, using the New Service Development approach to provide creation tools that prosumers use to create final services. We pay special attention to the relationship between creation process participants by modeling this relationship as co-creation mechanisms. The proposed method is applied to a use case based on prosumer interaction in the Future Intelligent Universe.
Keywords—new service development; co-creation; service composition; service customization; QFD
I. INTRODUCTION
A new service provision model is needed in a society in which individuals, companies and cities are interrelated, and in which users contribute their suggestions, useful information and even their own services to the rest of the community. This paper focuses on a new environment for today's society, based on user participation not only in information provision but also in the creation and composition of services, called the Prosumer Environment.
Users participating in this society, called prosumer users or prosumers, want to get involved in the service development stages, but they are not experts in the use of traditional service development tools. Companies are aware of the evolution of society and view their customers as important resources when developing new products and services. Thus, they try to involve their customers in the co-creation and co-development of new services [14]. Currently, some projects focused on the figure of the prosumer are appearing, such as iStockphoto or Lego Mindstorms, and the term prosumer is beginning to be used by companies such as Sony to describe video camera users who are producers and publishers of multimedia content.
To our knowledge, there are no product and service development models that explicitly take the new prosumer role into account; existing models treat it only as customer involvement in the business process [15][4]. Nor is there much related work on how to adapt existing business models to harness the prosumer's business value, or on the implications of this new role for the company's business model.
The New Service Development (NSD) methodology is often used for the development of services that are new to the company and that involve resources, processes and customer interaction [2]. In this paper, we extend the development model proposed by NSD in order to involve prosumer users, who wish to take responsibility for the creation and provision of services, in the service development process. The benefits of this new business model are perceived from the viewpoint of the company, which harnesses the power of prosumer development to improve and test new products, and from the prosumers' point of view, as they get involved in service development using creation tools adapted to their experience level.
The rest of the paper is organized as follows. Section II describes the service provision prosumer model we want to achieve with our method, and Section III analyzes and discusses the related work regarding service development with user participation. Section IV describes our proposed method through the NSD stages. Section V presents a scenario in which this methodology has been applied as a validation, and Section VI presents the conclusions of our work.
II. PROSUMER MODEL
The Internet has become a powerful distributed infrastructure that enables information to be widely available and its actors to interact with the rest of the world. Users require tools for providing their own services and consuming services published by others, thus transforming the network into a collaborative infrastructure adapted to different areas, such as social, personal and commercial ones. There are some initiatives to provide this type of tool, such as the ICT-2007.1.6 challenges (14 projects) of the European FP7 program, focused on validating highly innovative and revolutionary ideas for new service paradigms.
The term prosumer [16] (a portmanteau of the words producer and consumer) is applied to those users who are at the same time consumers and producers of services or contents. The proposed prosumer model is shown in Fig. 1 and is described below. In the creation process, the prosumer, using his mobile phone or a computer, can design his services using the tools at his disposal. This is the most critical stage, because of the technical difficulty of transforming the service creation ideas of a non-expert user into machine-interpretable code. The creation tools are oriented to a specific personal or professional domain. They are provided by companies or third-party developers, using traditional software development paradigms. *Service composition* graphical environments have been developed to solve this problem, using different paradigms (automatic, semiautomatic, static workflow-based service composition [20], natural written language [19], etc.). *Customizable services* are service models that already solve most of the technical issues regarding service creation and allow the prosumer to introduce some configuration and customization parameters, both in the GUI and in the service logic. From the service presentation's point of view, the service composition paradigms that have proven most effective are based on User Interface Composition, as in some Mashup contributions [18], or on predefined templates [17]. The former allow greater flexibility in the creation process, while the latter usually have a more eye-catching look, provided they have not been assembled from individual graphical blocks. The result of this stage is called the *final service*, completely or partially created by a prosumer user and ready to be consumed.
Service provision and execution take place after the creation process. As these stages are very similar to those of traditional services we will not explain them in detail.
The complexity of the creation phase requires an infrastructure deployment that facilitates contact between the developers of creation tools and the prosumer users who will use them. A methodology is required to decide between different design strategies, according to the captured design requirements. Section IV explains the method we developed.
### III. RELATED WORK ON SERVICE DEVELOPMENT
In this section, we focus on the related work about two key aspects developed in this paper: User participation in service development process and, specifically, the relationship between users and other participants in the NSD methodology.
#### A. Customer participation in service development
In recent years, companies have recognized the importance of the presence of the customer in service development in order to understand customers' needs and wishes properly [21], evolving from the traditional model of service development, which commonly fails to involve service personnel and customers [1]. The presence of the customer in the production process results in increased customer value, understood as the overall benefit perceived in the solution at the price the customer is willing to pay [22]. Customer value is an aspect of the service that must be continually revisited by the company so that it can anticipate changes in customers' needs (the customer's perspective on a service offering can shift from favorable to unfavorable).
With the emergence of Web 2.0 in the information and communications society, based on user participation, the user acquires a leading role in service development. Therefore, user involvement in the service production process is all the more justified. Some authors believe that users are the root of the service idea. Matthing et al. [23] show that consumers' service ideas are more innovative, in terms of originality and user value, than those of professional service developers.
The role that the user plays is evolving from “content prosumer”, who uses the Internet and other technologies to find information and also to produce content, to “service prosumer”, who develops services and makes them available to other users. The FIA (Future Internet Assembly) mentions this evolution of Internet users in its roadmap [27], defining the prosumer as “a new kind of Internet user, playing both roles consumers of services as well as creators of added value services based on those consumed”. In this paper we focus on this second type of prosumer.
#### B. Co-creation in New Service Development
NSD is a service development methodology that is often used in corporate environments [2][21][22]. Johnson et al. synthesized past service development research and created a general four-stage NSD process model involving the phases of design, analysis, development and full launch, emphasizing the interdependence between design and development as well as the cyclical aspects of the new service creation process [2]. The difficulty of finding flexible tools for creating prosumer services is addressed by a large number of tools that relate the prosumer to NSD. In this section we review the tools, methods and practices found in the literature.
In the design phase, related tools focus on identifying customer needs. Although there are tools that extract qualitative information on customer perception, such as focus groups and face-to-face interviews, and that stimulate the production of new ideas (brainstorming), other authors [6] consider that the best method to identify customer needs is to select so-called “lead users”, who present strong needs. In our work, we define a *domain expert* as a lead user with good knowledge of his environment who is aware of the specific and general needs of the users he represents. Data mining techniques (artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis) [7] are used for classifying user need types for recommendation systems, as sketched below.
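As a hedged illustration of that last idea, the following uses a decision tree, one of the data mining techniques cited from [7], to classify user need types for a recommender. The features, labels and tiny training set are invented for illustration and are not taken from [7].

```python
# Classify user need types with a decision tree (illustrative data).
from sklearn.tree import DecisionTreeClassifier

# Features per user: [pages/session, avg. dwell time (s), purchases/month]
X = [[3, 20, 0], [25, 90, 2], [8, 45, 1], [30, 120, 4], [5, 15, 0], [12, 60, 1]]
y = ["browser", "loyal", "occasional", "loyal", "browser", "occasional"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[20, 80, 2]]))  # e.g., ['loyal']
```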
In the analysis phase, recommendation systems use techniques to analyze the information on user needs,
processed in the previous stage, in order to quantify its importance and thus estimate the value of the service for the target customers and the company. One of these techniques, conjoint analysis [8], is a multivariate statistical technique that studies the joint effects of service components on consumers. This technique is often used along with the graphic technique of perceptual mapping [8], which helps marketers visually display the perceptions of customers or potential customers.
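For illustration, rating-based conjoint analysis can be reduced to an ordinary least-squares regression of respondents' ratings on dummy-coded attribute levels, yielding a part-worth utility per level; the profiles and ratings below are invented.

```python
# Estimate part-worth utilities of service components by OLS.
import numpy as np

# Columns: intercept, [delivery=fast], [support=premium]
# (baseline levels are dropped to avoid collinearity).
profiles = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
], dtype=float)
ratings = np.array([4.0, 7.0, 6.0, 9.0])  # mean respondent ratings

partworths, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
for name, w in zip(["base", "fast delivery", "premium support"], partworths):
    print(f"{name}: {w:.2f}")   # base: 4.00, fast: 3.00, premium: 2.00
```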
Related to service development, Quality Function Deployment (QFD) [4] tools have been proposed to transform the needs of users into service design requirements, which are more easily understandable by developers. Once the service concept has been created, other service engineering models are used, which map, describe, and/or analyze the design of service processes and include the customer experience in the form of interactions through the process [9]: blueprinting, SADT (Structured Analysis and Design Technique), STA (service transaction analysis) and IDEF3. Among these methods we highlight SADT, which proposes the involvement of NSD providers, project managers and customers.
Regarding the launch phase, the quality of the service being deployed and customer satisfaction with the deployed service are evaluated. Before service launch it may be necessary to identify potential failures in the service design or implementation. One of the most used techniques is Failure Modes and Effects Analysis (FMEA) [10], which identifies failure modes based on past experience with similar products or services and provides corrective actions. The model proposed by Kano et al. [11] is also often used; it complements QFD by measuring customer satisfaction and ranking customer demands against threshold attributes, such that if a new service is not examined using these threshold aspects, it may not be able to enter the market.
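The scoring step of FMEA can be illustrated in a few lines: each failure mode receives severity, occurrence and detection scores (conventionally 1-10) whose product, the Risk Priority Number (RPN), ranks where corrective action is needed first. The failure modes below are invented examples, not drawn from [10].

```python
# FMEA prioritization: RPN = severity * occurrence * detection.
failure_modes = [
    # (description, severity, occurrence, detection)
    ("payment service timeout",   8, 4, 3),
    ("wrong recommendation sent", 3, 6, 2),
    ("profile data corrupted",    9, 2, 7),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN={s*o*d:>3}  {desc}")  # highest RPN gets attention first
```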
IV. PROSUMER NSD METHODOLOGY
In this work, we rely on the NSD model to define a New Service Development method in which the prosumer is present from service conception to the deployment of the infrastructure and the tools to personalize and provide services to other prosumers who consume them. The ultimate goal of this methodology is that users unfamiliar with traditional creation tools may take responsibility for the creation of final services (through mechanisms such as composition or template customization).
Our methodology is based on the phase model proposed by Johnson et al. [2]. In this section, we describe our adaptation of the processes carried out at each stage in order to involve customers as service co-producers.
A. Design
As shown in Fig. 2, in this first stage we identify the target customer and choose a customer representative. We call him the domain expert, and he serves as a link between prosumers and company developers. To meet prosumer involvement needs we have adopted a customer-centric approach and, using the technique of Quality Function Deployment (QFD) [4], we ensure that the dimensions most valued by customers are adequately captured and translated into objective metrics, as the main task of service development is to create the right prerequisites for the service [1]. To do this, the domain expert must acquire the best possible knowledge of the target customers (prosumer users) to feed QFD, and he should be able to appropriately translate customer expectations into design requirements.
A major challenge in this stage is to ensure that every decision is made based on delivering the correct services to potential customers. So, we first determine what type of service is going to be developed and the characteristics of the target customers. We define the service concept for prosumer environments as the combination of the co-creation level and the service scope. The service concept helps to focus the relationship between customer needs and the company’s strategic intent. The co-creation level measures the domain expert involvement in the NSD process; a low co-creation level means that users are little involved in the co-creation process. We define service scope as the set of service requirements, covering both the design (functional and non-functional requirements) and the co-creation. The service scope is affected by two attributes: the level of flexibility demanded of the service and the experience level of the target prosumers regarding service creation. These attributes are interdependent, so that a lack of experience with creation tools entails the development of creation tools with a lower degree of flexibility, which inevitably affects the service scope. Likewise, a service that requires a wide scope demands a high degree of flexibility and experienced target prosumers. As the service scope is defined by the target users, or the domain expert who represents them, the design phase analyses whether the relationship between service scope, flexibility and experience level of target customers is met. If the analysis fails, some solutions are sought, such as increasing user experience through training courses for prosumers, or reducing the service scope by dividing it into several sub-services whose combined scope makes up the full scope.
B. Analysis
In the analysis stage, organizations estimate the market-performance potential, strategy and financial prospects of the new service concept. The analysis phase, shown in Fig. 3, examines whether the new service concept is aligned with the business strategy. Two simultaneous value calculation processes are carried out, which determine the suitability of the service creation process: prosumer value and business value estimations.
We define *prosumer value* as the overall benefit perceived in the service solution at the price the prosumer is willing to pay. In the prosumer environment, users create and consume services as long as their perception (i.e., the prosumer value) of the service creation and provision tools is adequate. Prosumer value must be identified at early stages of the methodology by using customer input information. While customers may be able to provide some guidance, some of the most successful cases of value identification occur when a company provides a service that addresses a need that the customer was not aware of previously [3].
We define *business value* as the benefit experienced by the company through the acquisition of knowledge, experience and presence in the sector. The business model of prosumer service development has been exploited before, and is adopted by major online stores such as Android Market or the Apple App Store. These stores provide development tools, a service search and publication infrastructure and a control mechanism for published applications. In return, they take a percentage of each application's selling price, which is set by the application creator.
We define a threshold below which the project is not viable. This threshold depends on the benefit margins of the project development and the business value mentioned above. These concepts will not be studied in depth in this paper.
C. Development
The development phase is described in Fig. 4. In the model proposed by QFD, customer expectations are often captured as verbatims, for example: “I want to have an on/off button in the main screen of my creation tool”. At this stage these verbatims must be converted into re-worded data using simple expressions such as “higher customization” and “add control buttons”. These data are *classified into clusters* that share a common theme (e.g., interface customization, ease of use) and *prioritized*. The *house of quality diagram* [13] can be used for this task; it utilizes a planning matrix to define the relationship between customer desires and firm capabilities (a small numerical example follows below). The result of this classification and prioritization is a set of requisites (design and co-creation requisites). If the domain expert involvement is high, it is advisable to validate the collection of customer expectations with the domain expert, in line with the notion of the *voice of the customer* [12], widely used in QFD to describe the in-depth process of capturing customers' preferences and expectations. Once the requirements are validated, decision criteria are used to select *service design strategies*. The attributes affecting the service concept are used as criteria: the *service flexibility* of prosumer tools demanded by customers and the *experience level* in service creation of target customers. Fig. 6 shows the relationship between these two attributes and the characteristics of the design strategies for the described use case.
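The planning-matrix arithmetic behind the house of quality reduces to a weighted sum: each design requirement's priority is the sum of customer importance weights times the strength of its relationship (commonly scored 9/3/1/0) to each customer expectation. All weights, requirements and relationship strengths below are illustrative.

```python
# House-of-quality style prioritization of design requirements.
import numpy as np

expectations = ["higher customization", "add control buttons", "fast loading"]
importance = np.array([5, 3, 4])  # customer importance weights (assumed)

requirements = ["configurable GUI", "widget library", "lightweight client"]
# relationship[i][j]: strength of the link between expectation i
# and design requirement j, on the usual 9/3/1/0 scale.
relationship = np.array([
    [9, 3, 0],
    [3, 9, 0],
    [0, 1, 9],
])

priorities = importance @ relationship  # weighted column sums
for req, p in sorted(zip(requirements, priorities), key=lambda x: -x[1]):
    print(f"{req}: {p}")  # configurable GUI: 54, widget library: 46, ...
```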
Once the design strategies are chosen, a process takes place whose objective is to develop a co-creation support system. This system allows customers and developers to agree upon prototype evaluation, intermediate versions and co-creation requirement adjustments. Prototypes are used as proofs of concept to demonstrate that the requirements of the prosumer are properly understood by the creation tool developers. Prototyping is considered a design strategy for environments with high involvement of the domain expert.
The outcome of this phase is a set of *creation tools*, templates, prototypes, atomic and customizable services as well as the mechanisms for monitoring the service life cycle.
D. Launch
The launch phase (see Fig. 5) is divided into two sub-phases.
In the *pre-launch analysis* phase, the service provision infrastructure is designed, according to the developed services and the mechanisms for monitoring them.
In the *post-launch evaluation* phase, the information collected by the infrastructure and customer feedback are iteratively analyzed in order to evaluate both the prosumers' satisfaction with the developed services (*NSD outcome quality*) and the process performance for the company (*NSD process performance*), the latter from the point of view of operational effectiveness and market-place competitiveness [5].
**V. VALIDATION CASE**
We present a scenario in which we applied this methodology for the creation of prosumer service delivery infrastructure in the Future Intelligent Universe, as part of the mIO! project [26], supported by the CENIT Spanish National Research Program (CENIT-2008 1019).
In this scenario the prosumer user, using his mobile phone, interacts with the elements (sensors, actuators, smart devices) that are around him in order to obtain their functionality. The prosumer user creates and shares services by following the prosumer model shown in Fig. 1. Our goal is to use our NSD method, defined in Section IV, to create an infrastructure for service creation, delivery and execution.
The first step is to select a domain expert (based on the concept of the lead user [6]) to identify customer needs. We determine that user experience is varied because, although this environment is focused on non-expert users, some of them are familiar with creation paradigms. Thus, if we want to achieve high flexibility, the scope analysis requires dividing the infrastructure design into several subsystems to cover the service scope.
The viability analysis concluded with satisfactory results, thanks to high prosumer value and a sustainable business model for both the company and the prosumers. The company's business model is based on sponsored final services and the incorporation of advertisements and sponsorship into creation and execution tools. The business model for prosumers is based on the *Freemium model*, a combination of free and premium services.
In the development stage, following the proposed QFD model, the ideas expressed by the domain expert are processed and clustered. In total we extracted 104 requirements, divided into 9 clusters (Infrastructure, User and Subsystem Interaction, Security, Context, Personalization, Service Model, User Interface and Technologies) and 32 subclusters. For example, through this process, the user idea “I want to take my services on my mobile” was turned into the “mobile service provision” requirement and was introduced into the subcluster called *Service Provision*, within the *Service Model* cluster.
We relate the prosumer experience level to the creation flexibility required in order to determine the most appropriate design strategy. Fig. 6 shows the considered decision strategies.
As prosumers in this scenario have mixed experience with service creation, we need to adopt more than one design strategy. The main determinant for the decision is the requirement of high flexibility. Thus, we discard the strategies that provide only a few customization options (*GUI customization*, *Template configuration*, *Creation wizards*), and also the *WYSIWYG* (What You See Is What You Get) paradigm, which is difficult to use in a domain as general as mobile and ubiquitous sensor access. Among the remaining options, we discard the *Mashup* creation strategy as somewhat less flexible than *Service composition*. Therefore, we consider the development of tools enabling *Service composition* for experienced prosumers and a Natural Written Language (NWL) creation interface for non-expert users. Thanks to the high domain expert involvement in NSD, we consider it advisable to use *Prototyping* as a strategy to avoid deviations from the design objectives.
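The decision logic just described can be summarized as a simple rule over the two criteria. The rule below is our own illustrative reading of the trade-off, since the actual mapping is given graphically in Fig. 6.

```python
# Illustrative strategy selection over the two decision criteria.
def design_strategy(flexibility, experience):
    """Both arguments on a coarse 'low'/'high' scale (an assumption)."""
    if flexibility == "low":
        return "Template configuration / GUI customization / Creation wizards"
    if experience == "high":
        return "Service composition (optionally with Prototyping)"
    return "Natural Written Language creation interface"

print(design_strategy("high", "high"))  # Service composition ...
print(design_strategy("high", "low"))   # NWL creation interface
```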
After selecting the design strategies we proceed to the infrastructure implementation stage. Some parts of the implementation, such as service creation environment based on natural written language and sensor access can be seen in [24] and [25] respectively.
The launch phase was performed with a test group that studied the platform, from which we drew several conclusions. Due to space limitations we describe only four of them:
- The creation mechanism based on service composition is somewhat difficult for non-experts to assimilate; they found the creation system based on natural written language [24] more intuitive.
- To really obtain consumer satisfaction we need to provide a large set of components so that composition is versatile.
- The limited display capabilities of a mobile terminal relegate the composition creation method to devices with larger screens, such as tablets.
- It is recommended to support service provision with fixed infrastructure, to avoid provision issues caused by lack of coverage or battery problems.
In conclusion, we believe that the application of our NSD method to the implementation of this scenario has been very helpful and that the application of the defined design strategies has been successful.
VI. CONCLUSION AND FUTURE WORK
This paper proposed a tool development method, based on NSD, for creating services for prosumer users. We evolve this methodology to cover the interrelationships of the different roles that traditionally participate in the creation process (company developers and customers) with the new *prosumer user* role. This prosumer user is a consequence of the evolution of the information society, in which users are more participative and feel responsible for the generation of services and their publication to the user community.
In this paper, we develop an NSD method for prosumer environments. We review some concepts belonging to the service development process, such as *service concept*, *service scope* and *value estimation*, and we define other concepts, such as *co-creation level*, *prosumer value* and *co-requisites*, which become contributions to the traditional NSD stages. As a validation, we show a service creation scenario for mobile prosumers and, following our proposed method, we develop some creation tools and a service delivery infrastructure.
We consider the implications of the new prosumer role in NSD as a contribution to the traditional service development process. This prosumer role reflects the current evolution of society towards a service provision model focused increasingly on the user.
Feedback from users allows us to identify future work on the proposed model, such as developing service composition tools for distributed service provision and task delegation, and studying thoroughly the creation skills of non-expert prosumers.
REFERENCES
[1] A. Johne and C. Storey, “New Service Development: A Review of the Literature and Annotated Bibliography,” European Journal of Marketing, 1988, vol. 34, pp. 184-251.
[2] S.P. Johnson, L.J. Menor, A.V. Roth, and R.B. Chase, A critical evaluation of the new service development process: integrating service innovation and service design. Eds. Sage Publications, Thousand Oaks, CA, 1999, pp. 1-32.
[3] C. Liu, “Constructing a value-based service development model,” Journal of Applied Business Research, Dec. 2006, vol. 22, pp. 47-60.
[4] T. Ohfuji and T. Noda, Quality function deployment: integrating customer requirements into product design. Eds. New York: Productivity Press, 2004.
[5] M.V. Tatikonda and M.M. Montoya-Weiss, “Integrating operations and marketing perspectives of product innovation: the influence of organizational process factors and capabilities on development performance,” Management Science, 2001, vol. 47, no. 1, pp. 151–172.
[6] E. von Hippel, “Lead users: A source of novel product concepts,” Management Science, 1986, vol. 32, pp. 791-805.
[7] K. Kim, “Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems,” World Academy of Science, Engineering and Technology, vol. 80, pp. 279–284
[8] K. Ramdas, O. Zhylievskyy, and W.L. Moore, “A Methodology to Support Product-Differentiation Decisions,” IEEE Transactions on Engineering Management, Nov. 2010, vol. 57, no. 4, pp. 649-660.
[9] J. Frauendorf, Customer Processes in Business-to-Business Service Transactions. Eds. DUV, Nov. 2006.
[10] Z. Bluvband and P. Graboy, “Failure analysis of FMEA,” Reliability and Maintainability Symposium, Jan. 2009, pp. 344-347.
[11] N. Kano, N. Seraku, F. Takahashi, and S. Tsuji, “Attractive quality and must-be quality,” The Journal of Japanese Society for Quality Control, 1984, vol. 41, no. 2, pp. 39–48.
[12] E. Roman, Voice-of-the-Customer Marketing: A Revolutionary 5-Step Process to Create Customers Who Care, Spend, and Stay. Eds. McGraw-Hill, Sep. 2010.
[13] D. Kim, “Application of the HoQ framework to improving QoE of broadband internet services,” IEEE Network, March-April 2010, vol. 24, no. 2, pp. 20-26.
[14] L. Witell, P. Kristensson, A. Gustafsson, and M. Löfgren, “Idea Generation: Customer Cocreation versus Traditional Market Research Techniques,” Journal of Service Management, 2011, vol. 22, no. 2.
[15] A. Lundkvist and A. Yakhlef, “Customer Involvement in New Service Development: a conversational approach,” Managing Service Quality, 2004, vol. 14 no. 2/3, pp. 249-257.
[16] A. Toffler, The Third Wave. Eds. Bantam Books, 1980.
[17] J. Ling, P. Ping, Y. Chun, L. Jinhua, and T. Qiming, “Rapid Service Creation Environment for service delivery platform based on service templates,” in Proc. of IFIP/IEEE International Symposium on Integrated Network Management, June 2009, pp. 117-120.
[18] Q. Zhao, G. Huang, J. Huang, X. Liu, and H. Mei, “A Web-Based Mashup Environment for On-the-Fly Service Composition,” in Proc. of IEEE International Symposium on Service-Oriented System Engineering, Dec. 2008, pp. 32-37.
[19] M. Cremene, J-Y. Tigli, S. Lavriotte, F-C. Pop, M. Riveill, and G. Rey, “Service Composition Based on Natural Language Requests,” in Proc. of IEEE International Conference on Services Computing, Sept. 2009, pp. 486-489.
[20] N. Laga, E. Bertin, and N. Crespi, “User-centric Services and Service Composition, a Survey,” in Proc of Annual IEEE Software Engineering Workshop, Oct. 2008, pp. 3-9, 15-16.
[21] B. Edvardsson and J. Olsson, “Key concepts for new service development,” The Service Industries Journal, April 1996, vol. 16, no. 2, pp. 140-164.
[22] M. Reinoso, S. Lersviriyajitt, N. Khan, W. Choonthian, and P. Laosiripornwattana, “New service development: Linking resources, processes, and the customer,” in Proc. of Portland International Conference on Management of Engineering & Technology, Aug. 2009, pp. 2921-2932.
[23] J. Matthing, B. Sandén, and B. Edvardsson, “New service development: learning from and with customers,” International Journal of Service Industry Management, 2004, vol. 15, pp. 479-498.
[24] G. Sebastian, J.A. Gallud, and R. Tesoriero, “A Proposal for an Interface for Service Creation in Mobile Devices Based on Natural Written Language,” in Proc. of the Fifth International Multi-conference on Computing in the Global Information Technology, Sept. 2010, pp. 232-237.
[25] U. Aguilera, A. Almeida, P. Orduña, D. López-de-Ipiña, and R. de las Heras, “Continuous service execution in mobile prosumer environments,” in Proc. of Int. Symposium on Ubiquitous Computing and Ambient Intelligence, Sept. 2010, pp. 229-238.
[26] mIO! project Web page: http://www.cenitmio.es (web in Spanish)
[27] Future Internet Assembly Research Roadmap v1.0. Available at the European Future Internet Portal: http://www.future-internet.eu [Retrieved: Nov. 22, 2011]
Digital Investigations for Enterprise Information Architectures
Syed Naqvi, Gautier Dallons, Christophe Ponsard
Centre d’Excellence en Technologies de l’Information et de la Communication (CETIC)
29 Rue des Frères Wright, 6041 Charleroi, Belgium
{syed.naqvi; gautier.dallons; firstname.lastname@example.org
Abstract—This paper highlights the role of digital forensics in the enterprise information architecture. It presents a framework for embedding digital forensics analysis techniques at various stages of the corporate information and communication technologies (ICT) lifecycle. A set of best practices for the corporate ICT security policy is also outlined to keep the operational costs of digital forensics at an optimal level. The paper also presents a detailed analysis of the risks to the competitive edge of companies that do not employ forensics solutions to protect their business interests. This work also provides a high-level roadmap for the adoption of digital forensics in emerging core business technologies such as cloud computing and virtualization infrastructures.
Keywords - digital forensics analysis, ICT security architecture, enterprise information architecture
I. INTRODUCTION
Crimes involving computers started to occur as soon as computers began sprawling across the various activities of everyday life in the 1980s. The United States Federal Bureau of Investigation (FBI) launched its Magnetic Media Program [1] to address the growing need to analyze computers, especially storage media, in its investigations of high-technology crimes. The growth of networking technologies in the 1990s gave further impetus to the need to tackle crimes using sophisticated technological means. However, the overall scope of digital forensics remained confined to law enforcement agencies, with the sole objective of collecting reliable evidence for the prosecution of the criminals involved in a court of justice.
Interest in performing digital forensics analysis at the enterprise level has grown significantly. This trend can be seen as a logical evolution of the white hat or ethical hacking performed by businesses to ensure that their ICT infrastructure is free from vulnerabilities. However, the major driving force behind this trend is that the growing number of criminal offenses using ICT is overwhelming computer crime units. It is not easy to involve law enforcement agencies in a commercial environment where there is suspicious activity but no explicit computer crime has been committed. Moreover, their involvement casts shadows on business interests, notably on the reputation of the company.
Nowadays, enterprises are embracing a considerable shift in their business approach: they not only have to adapt to non-traditional concepts such as virtualization, but also switch to more comprehensive security solutions to cope with new security requirements, as pre-incident measures alone, i.e., attack prevention, are no longer sufficient to protect their assets. They also need to invest in the post-incident situation, i.e., digital forensics. It is evident that European enterprises can learn a lot in this area from their US counterparts, who have acquired considerable experience using digital forensic technologies in the business environment, whereas digital forensics is still seen by a vast majority of Europeans as a specialized tool for police to tackle cyber crime.
This nascent trend of in-house digital forensics analysis in a relatively non-criminal context has to prove its worth by providing some competitive edge to businesses. Major challenges include the demarcation of the exact role of digital forensics in overall corporate ICT security operations, and the reduction of its functional costs, both in terms of the monetary expenditures incurred in acquiring the corresponding technologies and in terms of the time consumed in subsequent investigations, especially when the operations of the core business line are directly affected. This paper pragmatically identifies the role of digital forensics in the overall security architecture of an enterprise. We propose to embed the corresponding digital forensics analysis features in the various components of the corporate ICT infrastructure. This scheme provides better organization of the security functionalities with minimal impact on the overall performance of ICT operations.
This paper presents a framework of digital forensics for enterprise information architectures and evaluates the scope of various methodologies and tools for these enterprise applications. It also analyzes the risks to the competitive edge of companies that do not employ forensics solutions, and provides a high-level roadmap for the adoption of digital forensics in emerging core business technologies such as cloud computing and virtualization infrastructures.
This paper is organized as follows: Section II describes the role of digital forensics in the enterprise information architecture. A digital forensics analysis framework for enterprise information security policy is presented in Section III. Section IV highlights the impact of digital forensics on the competitive edge of enterprises. Section V identifies a range of challenges in investigating the emerging paradigm of virtualized infrastructures. A discussion of recent trends in the area of digital forensics and their positioning with respect to our approach is presented in Section VI. Finally, some conclusions are drawn in Section VII.
II. ROLE OF DIGITAL FORENSICS IN ENTERPRISE INFORMATION ARCHITECTURE
The area of digital forensics analysis is considerably mature in modern criminology; however, its scope in the commercial environment is still vague. It is therefore necessary to precisely identify the role of digital forensics at the various levels of enterprise ICT operations, so as to enhance the overall quality of protection without substantially affecting system performance. This section outlines the major blocks of a typical enterprise ICT infrastructure, followed by the identification of those blocks where digital forensics analysis has a role to play. These roles are elaborated for the individual blocks and can eventually be harnessed together to constitute various digital forensics analysis functions.
Figure 1 shows a typical lifecycle of corporate ICT operations. The blocks are tagged to facilitate their subsequent usage. These operational blocks are subject to a number of internal and external requirements that have to be met in order to ensure the proper functioning of the overall business routines. We identify those blocks that require digital forensic analysis features to improve the quality of protection offered by the corporate ICT security architecture. A magnified view of these blocks is shown in Figure 2.
1) Design and Deployment
The design and deployment phase should ensure that the ICT infrastructure is globally *forensic-friendly*: both design specifications and deployment parameters should provide explicit checkpoints to facilitate digital forensics analysis at any time without causing major disruption to operational routines.
This phase also needs to ensure that *chronological documentation (chain of custody)* can be conveniently produced if deemed necessary. This can include non-repudiation techniques, such as watermarking, to demonstrate that records have not been tampered with at any stage.
This phase should also include *resilience planning*, so that forensic analysis will have minimal impact on the functioning of routine operations. This feature is particularly important for today's large-scale, highly connected systems, such as Clouds, where disruption of the entire ICT infrastructure in the aftermath of an incident would inflict huge damages. It also covers recovery time, which should ideally be kept to the minimum possible level.
2) Auditing and Compliance
Auditing and compliance are not only a prerequisite for launching a specific business but also a marketing tool to increase the customer base. Therefore, the auditing and compliance phase should provide means of demonstrating that the business is meeting all of its *legal obligations*. For example, the United States Sarbanes-Oxley Act (SOX) [2] requires a formal process of using forensic analysis techniques for the investigation of incidents. This law has had a significant impact on the security policies and incident management strategies of US-based corporations [3].
Besides addressing compliance with legal requirements, digital forensics can also play a role in fulfilling the regulatory requirements with which most corporations must comply. For example, information security incident management procedures are recommended by the ISO 27002 standard [4].
Likewise, *quality assurance* practices can be improved by using digital forensics techniques, such as analyzing the performance bottlenecks of network bandwidth, storage media, etc. They can be employed when performance degradation is reported, or proactively, to ensure the execution of an optimal quality assurance plan.

3) Business Operations
The business operations phase is crucial for meeting the company’s targets within the allocated resources. Therefore, it is important to explore newer techniques to improve this phase, not only to facilitate the achievement of targets but also to provide a competitive edge in this core operational phase.
Several *maintenance operations* of the corporate ICT infrastructure, such as disk cleaning operations, require reliable analysis solutions. Digital forensics analysis tools can be employed to provide these maintenance operations efficiently.
With the ever increasing evolution and expansion of the scope of ICT in everyday life, *upgrades* have become a routine activity. Moreover, *data recovery* from obsolete, faulty, or broken equipment requires reliable solutions. Digital forensics techniques can be employed to recover data in these situations.
Managing ICT inventories in a corporate environment also requires efficient sorting solutions before obsolete equipment is discarded. Digital forensics techniques can assist in securely disposing of such equipment.
4) *Incident Management*
This is the privileged phase for employing digital forensic techniques, as the general perception of *forensics* is post-incident analysis. Digital forensics can be used by the administrator of a corporate ICT infrastructure to gather digital traces and then analyze them to determine the causes of an incident. All these actions should be carried out within the given legal framework. Another important role of the incident management team is to ensure business continuity, ideally even during the incident phase. In real terms, it should at least ensure the continuation of core business operations during adverse situations.
III. A DIGITAL FORENSICS ANALYSIS FRAMEWORK FOR ENTERPRISE INFORMATION SECURITY POLICY
Corporate security policy can play a pivotal role in outlining best practices for ICT security teams. We suggest that the organizational security policy of a modern enterprise should explicitly reflect the role of digital forensics as described in Section II of this paper.
Figure 3 presents the placement of a *security and forensics team* in a corporate ICT infrastructure, where it interacts with the different phases that can benefit from various digital forensics analysis techniques.

*Figure 3. Security and Forensics Team in a Corporate ICT Infrastructure*
It is essential that the security and forensics team be equipped with a *fine-grained forensics policy* that can handle the peculiar requirements of contemporary ICT infrastructures, especially their complexity, scale, and decentralization. It is understood that highly dynamic environments such as *Open Clouds* give rise to enormous challenges in localizing and demarcating boundaries that keep changing in an amoeboid style. While research and development initiatives are needed to effectively address these specific challenges, corporate ICT security teams can handle digital forensics tasks by having a clear vision and strategy for achieving the security goals. The current state of tool support for digital forensics activities is quite sparse. We therefore recommend using a set of tools for different tasks instead of looking for a comprehensive framework or toolkit that would provide a silver-bullet kind of solution.
We now present different phases of standard forensics analysis and their efficient implementation in businesses. They are presented in the context of incident management; however, we have already shown that these techniques are equally useful in a number of other ICT operations.
A. *Preparation phase*
The preparation phase mainly involves adequate planning, envisioning the security threats and the contingency plans, followed by the deployment of the necessary tools and procedures. This is a management task that requires regular re-evaluation and assessment, as the threat picture changes rapidly, especially in dynamic environments such as virtual infrastructures.
B. *Detection phase*
The detection phase often includes a collection mechanism as well. This phase requires an up-to-date set of tools to monitor the company’s ICT resources and capture the various events, followed by the identification of abnormal activity, which should be logged and reported to the security manager.
C. *Preservation phase*
The preservation phase generally involves the production of copies of the ICT resources being investigated. The original resources are preserved so that they can no longer be altered. The integrity of the original resources is indispensable in follow-up procedures in a court of justice or even before a disciplinary action board.
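A common way to make this integrity verifiable is to record a cryptographic digest of the original resource and to check every working copy against it. The sketch below uses SHA-256 for this purpose; the evidence and workspace paths are hypothetical.

```python
# Preservation sketch: record a digest of the original evidence, work only
# on a copy, and verify the copy against the digest so any alteration of
# the analysed material is detectable.
import hashlib
import shutil

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

original = "evidence/disk.img"       # hypothetical seized resource, kept untouched
working_copy = "workspace/disk.img"  # analysts only ever touch this copy

baseline = sha256_of(original)
shutil.copy2(original, working_copy)
assert sha256_of(working_copy) == baseline, "copy does not match the original"
print("evidence preserved, SHA-256 =", baseline)
```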
D. *Analysis phase*
The analysis phase traditionally relies on human intervention to reconstruct the sequence of events leading to a particular incident. However, the ever-growing scale and scope of digital data now necessitate the use of semantic tools for the analysis of log files and traces. This phase can also involve artificial intelligence mechanisms for performing good-quality analysis and avoiding human errors.
E. *Recovery phase*
The recovery phase is generally not seen as an integral part of the forensics task. However, an emergency or incident response cannot be considered complete if the system is not restored to its original functioning state. Restoration is especially pressing when the core business of a company is halted due to an incident and there is therefore immense pressure to resume routine activities.
F. Reporting phase
The reporting phase is not necessarily the preparation of a legal case against the attackers; a simple incident report for the line manager can also constitute this phase. The objective is to produce a record of the incident that took place in the corporate ICT infrastructure. The report can also serve as feedback for the preparation phase, so that the causes leading to the incident can be taken into account by the company’s security team.
IV. IMPACT OF DIGITAL FORENSICS ON THE COMPETITIVE EDGE OF ENTERPRISES
There are genuine ICT security concerns for enterprises, especially as the scope of virtualization brings several challenges for its deployment, including a lot of uncertainty as to how and where to implement security [5]. Security and dependability are gauging factors for measuring the success of business endeavors. Classical security solutions and practices are becoming obsolete in the face of the peculiar security requirements of virtualization infrastructures, where physical resources are dynamically mapped to address spontaneous business needs. The inherent nature of virtualization requires a totally different security provisioning approach than the classical one developed decades ago.
This section covers a set of threat mitigation techniques that can be implemented with digital forensics, and the risks of not using digital forensics in enterprise environments. We maintain that it is becoming indispensable for businesses to integrate digital forensics into the security architecture of their enterprise information architectures; otherwise there could be a devastating impact, both in terms of security breaches and in the loss of business prospects.
A. Threat Mitigation Techniques Using Digital Forensics
This section summarizes a set of major security challenges that can be addressed by using appropriate digital forensic technologies.
1) Access Control
Enterprise virtualization infrastructures offer promising features for the enterprises. However, virtual ICT resources are a lot more vulnerable to malicious activities than classical ones. Smart solutions for monitoring the access logs of an enterprise have become indispensable for knowing whether its resources are the target of some malicious activity. This also includes insider threats, such as frequent remote access to the company’s resources outside routine office hours.
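As one hedged example of such monitoring, the sketch below scans a simple access log for logins outside routine office hours, the insider-threat indicator mentioned above. The log format and the weekday 08:00–19:00 window are assumptions made purely for illustration.

```python
# Sketch: flag logins outside routine office hours from a simple access log.
# The "ISO-timestamp,user,source_ip" format and the office-hours window are
# illustrative assumptions, not a standard.
from datetime import datetime

OFFICE_HOURS = range(8, 19)  # 08:00 through 18:59, Monday to Friday

def off_hours_logins(log_lines):
    suspicious = []
    for line in log_lines:
        timestamp, user, source_ip = line.strip().split(",")
        when = datetime.fromisoformat(timestamp)
        if when.hour not in OFFICE_HOURS or when.weekday() >= 5:
            suspicious.append((user, source_ip, timestamp))
    return suspicious

sample = [
    "2011-11-07T10:12:03,alice,10.0.0.5",    # Monday morning: ignored
    "2011-11-06T02:47:51,bob,93.184.216.34", # Sunday night: flagged
]
for user, ip, ts in off_hours_logins(sample):
    print(f"off-hours access by {user} from {ip} at {ts}")
```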
2) Steganalysis
The ever-growing demand for bandwidth capacity for content-rich applications is driving the conception of high-speed Internet backbones that enable seamless access to applications using bulk data. However, this capacity exposes enterprises to steganography, whereby important corporate data can easily be stolen. Enterprises need to incorporate efficient steganalysis tools to protect their intellectual property.
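Full steganalysis is beyond the scope of this paper, but one very simple illustrative check is sketched below: reporting payload bytes appended after a JPEG’s end-of-image marker, a common low-effort hiding place. The file name is hypothetical and this is only one heuristic among many.

```python
# Very simple steganalysis heuristic: report bytes trailing the last JPEG
# end-of-image marker (FF D9), a common low-effort place to hide a payload.
# This is one illustrative check, not a complete steganalysis tool.
from pathlib import Path

def trailing_bytes(jpeg_path: str) -> int:
    data = Path(jpeg_path).read_bytes()
    end = data.rfind(b"\xff\xd9")
    return 0 if end == -1 else len(data) - (end + 2)

extra = trailing_bytes("outgoing_attachment.jpg")  # hypothetical file
if extra:
    print(f"suspicious: {extra} bytes appended after the JPEG end marker")
```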
3) Multi-tenancy
Multi-tenancy is a fairly new concept [6] that refers to the architectural principle whereby a single instance of the software runs on a software-as-a-service (SaaS) vendor’s servers, serving multiple client organizations. Assuring data isolation on a node is a challenging task for security designers; it includes the complete isolation of the tenants’ execution environments and data storage, including temporary storage of execution data. Enterprises need to employ analysis tools to trace the software instances in order to be assured of the protection of their data in multi-tenant applications.
B. Risk Analysis for Computing the Impact on the Competitive Edge of Enterprises
The ISO27001 [7] standard defines the way to establish an Information Security Management System. Under this standard, a company has to monitor its security in order to adapt the security policy to new threats. The standard considers not only IT security but also information system security, which is broader than the IT infrastructure. In this paper, we only consider the IT security of the infrastructure used to operate the enterprise information system.
ISO27001 requires an enterprise to monitor its security state in order to detect security breaches and to react accordingly. Security monitoring implies tracking the traces left by an attacker, and digital forensics analysis techniques and tools can facilitate this task. The objective of this standard requirement is to reduce the security-related risk for companies. Consider a company with a set of computer business assets $A = \{a_1, ..., a_n\}$. Each asset is concerned by security through four properties: Confidentiality (C), Integrity (I), Availability (D) and Legal force (L), where legal force is the ability of an asset to be used as proof in the case of a lawsuit. Each asset has requirements concerning these properties, and the risks are to lose one or more of them. The risk of a particular enterprise can be summarized by $\sum_i \big(I_c(a_i) + I_i(a_i) + I_d(a_i) + I_l(a_i)\big)$, where $I_c$, $I_i$, $I_d$ and $I_l$ are the financial impact functions of confidentiality, integrity, availability and legal force loss, respectively. A company that does not use forensics techniques is therefore exposed to this full financial risk. The question is how digital forensics can reduce it. Digital forensics is used here as a curative tool: forensics techniques are applied after the attack, once an impact has been detected. In this scope, they are useful to demonstrate the cybercrime legally and to try to obtain compensation awarded by a court of law. The maximal financial risk is then reduced by an amount equal to the compensation obtained, but some fees must be paid to the lawyers. We can therefore express this risk as $\sum_i \big(I_c(a_i) + I_i(a_i) + I_d(a_i) + I_l(a_i)\big) - \text{compensations} + \text{fees}$. This financial risk formula has two interesting properties:
- The higher $\sum_i I_l(a_i)$ is, the lower the compensations will be, because the loss of legal force makes the cybercrime harder to demonstrate.
- If the fees exceed the compensations, the forensics analysis will not be useful.

Optimizing the benefit of a forensics investigation therefore means reducing the risk of losing the legal force of an asset and its traces (ideally, $\sum_i I_l(a_i)$ should be close to 0). It is also important to reduce the lawyers’ fees by contracting legal insurance, whose cost is marginal compared to cybercrime compensation. In this situation, the risk can be evaluated as $\sum_i \big(I_c(a_i) + I_i(a_i) + I_d(a_i)\big) - \text{compensations}$.
Compensations relate to the security incidents affecting particular assets and are at least equivalent to their direct damages, so the security impacts of those assets are at least covered. The risk equation becomes $\sum_{i \neq j} \big(I_c(a_i) + I_i(a_i) + I_d(a_i)\big) - |\Delta|$, where the fully compensated assets $a_j$ drop out of the sum and $\Delta$ is the difference between the compensation and the impacts of the assets $a_j$.
This formula shows that forensics significantly reduces the financial risk by suppressing the risk attached to the assets that can be fully covered by forensics, with an extra reduction induced by the compensations. In the ideal situation where every asset is traceable and the full legal demonstration of the cybercrime can be produced, forensics can generate an extra benefit equal to $|\Delta|$.
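To make these expressions concrete, the sketch below evaluates them on two hypothetical assets; every impact value, the compensation and the fees are invented purely for illustration.

```python
# Worked example of the risk formulas above on invented numbers. Per asset,
# the values are the financial impacts of losing confidentiality (c),
# integrity (i), availability (d) and legal force (l).
assets = {
    "a1": {"c": 100_000, "i": 50_000, "d": 20_000, "l": 30_000},
    "a2": {"c": 10_000,  "i": 5_000,  "d": 40_000, "l": 15_000},
}
compensations, fees = 120_000, 25_000  # hypothetical court award and lawyer fees

risk_without_forensics = sum(sum(v.values()) for v in assets.values())  # 270,000
risk_with_forensics = risk_without_forensics - compensations + fees     # 175,000

# With traces preserved (legal-force loss near 0) and insurance absorbing fees:
risk_insured = sum(v["c"] + v["i"] + v["d"] for v in assets.values()) - compensations  # 105,000

print(risk_without_forensics, risk_with_forensics, risk_insured)
```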
V. CHALLENGES OF INVESTIGATING VIRTUALISED CORPORATE INFRASTRUCTURES
The concept of virtualization is not new in the field of ICT: it dates back to the inception of programming language compilers, which virtualize the object code [8]. However, the concept of virtualization infrastructures, where physical resources are dynamically mapped to address spontaneous business needs, is relatively new. Moreover, the scale and scope of this novel concept bring several challenges for its deployment, including a lot of uncertainty as to how and where to implement security [5]. The inherent nature of virtualization requires a totally different security provisioning approach than the classical one developed decades ago.
Classical digital forensics techniques and solutions require precise information about the underlying infrastructure to perform an investigation and reconstruct the sequence of events. They cannot be applied to these emerging infrastructures because the intrinsic characteristic of virtualization is to abstract away the underlying resources and infrastructures. This section examines the challenges of investigating virtualized corporate infrastructures that will have to be addressed to ensure a smooth and secure transition from the classical enterprise information architecture towards a virtualized one.
A. Conducting Security Audit Investigation of Future Internet-based Virtualization infrastructures
A security audit assesses the security of the networked system’s physical configuration and environment, software, information handling processes, and user practices. While it is similar to an investigation, it is carried out before the commissioning of a system and then on a regular basis to ensure the desired functioning of the system. The investigations carried out by a digital forensics team, by contrast, are post-incident activities, performed after a malicious action has succeeded. In terms of investigating an ICT infrastructure, however, both face the same challenge of dealing with the virtualization paradigm. We have therefore used our previous work on security audit [9] to leverage the work on applying digital forensics in virtual infrastructures.
B. Case-study: Payment Card Industry Data Security Standard (PCI-DSS)
Various security audit standards, such as the Payment Card Industry Data Security Standard (PCI-DSS), require an audit of the physical controls [10]. Virtualization infrastructures provide an abstraction layer over the underlying lower-level details. This situation raises several security concerns, such as multi-tenancy, a lack of security tools [11], and disparity with classical IT security audit practices.
There exist a number of generic monitoring tools, such as hardware monitoring (e.g., HP Insight Manager, Dell Open Manage, VMWare Virtual Center, etc.), performance monitoring (e.g., VizionCore, Veeam Monitor, Vmtree, Nagios, etc.), machine state monitoring (e.g., Virtualshield, Logcheck, etc.), and security monitoring (e.g., intrusion detection, honeypots, etc.). However, these tools may not be suitable for the security audit controls of virtualization infrastructures, as physical controls can be distributed, which requires onsite checks by local controllers. There is a strong need for a new set of metrics for measuring security strength. With more reliable metrics, new check-pointing models need to be developed. Besides these technical requirements for carrying out security audits of virtualization infrastructures, there is also a need for new regulations and legislation for the cross-border deployment of resources used in virtualization infrastructures. This work is a continuation of our previous work on analyzing the overall security requirements of deploying virtual infrastructures [12].
VI. DISCUSSIONS
The scope of digital forensics outside the criminology sphere was hardly explored in the past. However, the broadened scope of corporate ICT infrastructures and the reliance of core businesses on these infrastructures are pushing a paradigm shift in this domain. Some recent literature explores digital forensics in the corporate sector [13][14]. However, these efforts are mainly focused on philosophical possibilities: they do not provide any concrete model or design approach towards the inclusion of digital forensics practices in the routine operations of corporate ICT infrastructures. Likewise, a set of best practices for computer forensics is proposed in [15]; it describes some effective ways of carrying out digital forensics analysis in post-incident situations, but without any specific link to the commercial side of employing these techniques in business environments.
There are genuine predictions that near-future cybercrimes will be driven by clouds and virtualization infrastructures [16]. We believe that businesses and law enforcement agencies will not be able to cope with this wave of contemporary crimes using classical digital analysis approaches. Digital forensics is often a painstaking task that consumes enormous resources and takes considerable time to develop a sequence of events that is acceptable to the courts. An example is the FBI investigation of the Enron scandal [17], in which the FBI gathered and analyzed 31 terabytes of digital data from 130 computers, thousands of e-mails, and more than 10 million document pages. The entire investigation took five years, while the total monetary costs incurred remain largely unknown. With the proliferation of computing through virtualization infrastructures and the evident increase in related crimes, law enforcement agencies will simply be overwhelmed by the demand for digital forensic analysis. Therefore, there is a strong need for businesses to include digital forensics in their corporate security strategy. They should use these technologies within the legal framework covering their activities.
We have carried out a risk analysis to show the impact on enterprises of not using digital forensic solutions. We showed that the loss of clientele and sanctions from regulatory bodies might severely harm the business interests of enterprises that do not employ digital forensics technologies to protect those interests. A similar concern is reported in [18], which predicts that forensic research will fall behind the market within the next ten years if the various disjoint research efforts are not harnessed together in a systematic way.
VII. CONCLUSIONS AND PERSPECTIVES
We have presented a non-classical approach to digital forensics analysis in this paper. We argue that enterprise information architectures can improve their overall quality of protection if digital forensics technologies are made an integral part of them. We presented a framework for embedding digital forensics analysis techniques at several stages of the enterprise information lifecycle, followed by a set of best practices for operating an enterprise information security policy together with digital forensics techniques and tools.
There are a number of open issues that we plan to address in the near future. The foremost is the use of digital forensics solutions in virtualization infrastructures such as open clouds. The major challenges of virtualization infrastructures are the absence of a fixed perimeter around the ICT resources and the unknown details of the underlying physical infrastructure.
ACKNOWLEDGMENT
The research leading to the results presented in this paper has received funding from the Walloon Region of Belgium through the project CE-IQS (Centre d’Expertise en Ingénierie et Qualité des Systèmes) and the European Union’s seventh framework programme (FP7 2007-2013) Project PONTE under grant agreement number 247945.
REFERENCES
[1] K. S. Rosenblatt, High-Technology Crime: Investigating Cases Involving Computers, KSK Publications, ISBN 0-9648171-0-1, 1995
[2] The United States Sarbanes Oxley Act 2002 – http://uscode.house.gov/download/pls/15C98.txt <retrieved: Nov. 2011>
[3] J. Mullis, The Impact of the Sarbanes-Oxley Act of 2002 on Computer Forensic Procedures in Public Corporations, University of Oregon, July 2009 – https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/9480/Mullis-2009.pdf <retrieved: Nov. 2011>
[4] International Organization for Standardization, Standard ISO27002: Code of practice for information security – http://www.27000.org/iso-27002.htm <retrieved: Nov. 2011>
[5] R. Adhikari, The Virtualization Challenge, Part 5: Virtualization and Security, TechNewsWorld, March 2008
[6] F. Chong, G. Carraro, and R. Wolter, Multi-Tenant Data Architecture, Microsoft Corporation, June 2006
[7] International Organization for Standardization, Standard ISO/IEC 27001:2005 Information technology -- Security techniques -- Information security management systems -- Requirements
[8] J. Bloomberg, Building Security into a Service-Oriented Architecture, ZapThink Whitepaper, ZapThink LLC Publisher, May 2003
[9] S. Naqvi, G. Dallons, C. Ponsard, and P. Massonet, Ensuring Security of the Future Internet-based Virtualization Infrastructures (Position Paper), IEEE Symposium on Security and Privacy 2010, Oakland, CA, USA, May 16-20, 2010
[10] Payment Card Industry Data Security Standard (PCI-DSS) https://www.pcisecuritystandards.org/ <retrieved: Nov. 2011>
[11] E. Haletky, Virtualization Security – Security and Compliance within the Virtual Environment, DABCC online article 08 April 2009 http://www.dabcc.com/channel.aspx?id=279 <retrieved: Nov. 2011>
[12] S. Naqvi, P. Massonet, and J. Latanicki, Challenges of Deploying Scalable Virtual Infrastructures - A Security Perspective, CESNET Conference on Security, Middleware and Virtualisation, Prague, Czech Republic, September 25-26, 2008
[13] B. Nikkel, The Role of Digital Forensics within a Corporate Organization, IBSA Conference, Vienna, Austria, May 2006 – http://www.digitalforensics.ch/nikkel06a.pdf <retrieved: Nov. 2011>
[14] J. Heiser, Digital Forensics and Corporate Investigations, Gartner, November 2005 – www.gartner.com/teleconferences/attributes/attr_144863_115.pdf <retrieved: Nov. 2011>
[15] Scientific Working Group on Digital Evidence (SWGDE) Best Practices for Computer Forensics V1.0, 2004 – http://swgde.org/documents/swgde2005/SWGDE%20Best%20Practices%20_Rev%20Sept%202004_.pdf <retrieved: Nov. 2011>
[16] Trend Micro Report: The Future of threats and Threat Technologies – How the Landscape is Changing, December 2009 – http://affinitypartner.trendmicro.com/media/3471/trend_micro_2010_future_threat_report_final.pdf <retrieved: Nov. 2011>
[17] Federal Bureau of Investigations (FBI), Digital Forensics: It’s a Bull Market, July 2007 – http://www.fbi.gov/page2/may07/rcf1050707.htm <retrieved: Nov. 2011>
[18] S. Garfinkel, Digital forensics research: The next 10 years, Elsevier Digital Investigation, vol. 7, pp. 64-73, 2010 – http://www.dfirws.org/2010/proceedings/2010-308.pdf <retrieved: Nov. 2011>
Shadow IT
Management and Control of unofficial IT
Christopher Rentrop; Stephan Zimmermann
Faculty of Computer Science
HTWG Konstanz – University of Applied Sciences
Brauneggerstr. 55, 78462 Konstanz, Germany
email@example.com; firstname.lastname@example.org
Abstract—Shadow IT describes the supplement of “official” IT by several autonomously developed IT systems, processes and organizational units, which are located in the business departments. These systems are generally not known, accepted or supported by the official IT department. From the perspective of IT management and control, it is necessary to find out which interrelations exist with shadow IT and what tasks result from it. So far, only little research exists on this topic. To overcome this deficit, the presented project targets a scientifically based definition of shadow IT, the investigation of best practices in several companies, and the development and application of instruments for the identification, assessment and control of shadow IT.
Keywords- Shadow IT; IT Controlling; IT Governance; IT Service Management.
I. INTRODUCTION
IT management and control focus on the effective, efficient, transparent and compliant organization of information technology to achieve the best possible support of the business objectives [1]. This includes the minimization of risks and the recognition and realization of opportunities for improvement. The “official” IT infrastructure, developed, managed and controlled by the IT department, is supplemented in most companies by an unofficial IT: business departments own a multiplicity of other hardware, software and IT employees. Generally, these exist without the awareness, acceptance and support of the IT department. The resulting autonomously developed systems, processes and organizational units are usually characterized as “Shadow IT” [2].
From IT management’s perspective, some questions arise: What does the existence of shadow IT mean to its implementation? Does IT management have an influence on the growth or reduction of shadow IT? And what continuative tasks result from this subject?
Shadow IT is not a new phenomenon, but due to some current trends its significance is increasing [2]: new and primarily web-based technologies allow easy access with low initial costs, so, at first glance, it is easy for a business department to select and obtain desirable IT services by itself. In addition, the end users themselves play a particular role in the growth of shadow IT. Especially young employees have a strong bond with the usage of IT, as they grew up with it and use it in their daily private lives. Consequently, their expectations regarding the IT environment at their jobs increase [3]. If the IT department is not able to satisfy their needs, the “emancipated” users start to take care of their IT devices and applications by themselves [4][5].
In this paper, we present the first results of our research project “Shadow IT” [6]. Apart from the theoretical analysis of some detailed questions on this phenomenon and its definition, it is particularly necessary to develop methods for the identification and evaluation of shadow IT. In addition, best practices have to be collected and the developed approaches have to be assessed in business. Several companies will be analyzed for data collection and for the verification of the methods mentioned above. All these steps are important to build a stable basis for developing an integrated and practical approach to control shadow IT. So far, we have set up the research concept and worked on the definition and the layout of the methods.
For this paper we will give a brief literature review in Section II. Based on this analysis, research questions are derived. In Section III we will present a detailed description of shadow IT and its occurrences. Section IV introduces the first concepts and developed methods for the identification and evaluation. Section V concludes with a brief outlook and next steps of the study.
II. STATE OF THE ART
This section examines the state of the art on the topic of shadow IT. An overview of the most relevant literature is given, from which open research questions are then derived.
A. Literature Review
In spite of its rising significance, shadow IT has so far attracted only little attention in science. Some references can be found using the term shadow IT, but mostly the topic plays a tangential role or is only mentioned in connection with the main issue of the considered work. Most references are practical reports or blogs, which are based on the author’s experience and have no scientific foundation. Table I shows the central contributions, which are often referred to or which provide a solid investigation of the topic.
| Reference | Main content |
|-----------|--------------|
| Sherman, 2004 [7] | This article focuses on Business Intelligence shadow IT, e.g., Excel- or Access-based systems, used to add information to reports that is not supplied by the official IT. The systems start small and grow continually over time, which makes them costly to maintain. The data shadow systems can be recognized through user interviews on how reports are created. Sherman names several reasons for their development: 1) missing fulfillment of users’ needs; 2) shadow IT is easy to develop and seems to be “cost-free”; 3) a solution is needed, but the realization of official IT projects takes too long. To control shadow IT he suggests improved communication between business and IT and the creation of data marts to secure consistent databases. |
| Bayan, 2004 [8] | Bayan describes reasons and effects of shadow IT and an approach for how IT can deal with it. As the main reason he mentions the combination of reduced IT budgets and increasing IT demands, which forces business departments to develop their own IT. Furthermore, shadow IT is more focused on the business needs, it seems cheaper from the business view, and it appears faster and more dynamic than official IT. He refers primarily to security risks as the main effect of shadow IT. In his approach Bayan suggests searching for shadow IT with technical scanning tools. Afterwards, security gaps in the identified systems should be detected and closed. Finally, the implementation of new shadow IT should be reduced by achieving a better fulfillment of the business needs. |
| Jones et al., 2004 [9]; Behrens/Sedera, 2004 [10]; Behrens, 2009 [11] | These publications refer to a study on a single shadow IT system in an Australian university [8]. The study describes the eight-year life cycle of a shadow software system, which was implemented and supported in parallel to an official system. The work shows possible reasons for its implementation and the opportunities and risks shadow IT can have. Furthermore, the work presents a few lessons learnt [10] on how management and the official IT should react to existing shadow IT. It is stressed that, contrary to common opinion, shadow IT also has positive sides: it can be a source of innovation. |
| Raden, 2005 [12] | In his work Raden concentrates on spreadsheets for Business Intelligence. In his opinion this is the most common kind of shadow IT. These spreadsheets occur due to a lack of satisfaction of business requirements, such as reporting. Spreadsheets are an expensive, universally used, autonomous, fast and portable opportunity to fill these gaps. He highlights different problems caused by the development of shadow IT spreadsheets, e.g., wasted time, inconsistent business logic and inefficiencies. He concludes that a company-wide supply and integration of databases connected to all official IT systems can reduce the negative effects. |
| Schaffner, 2007 [13] | Schaffner describes effects of and reasons for the development of shadow IT. As effects he lists several risks, such as poor engineering techniques, inefficiencies and compliance problems. His main argument for the existence of shadow IT is an insufficient alignment between business and IT. Typical efforts to reduce shadow IT, like its prohibition or the locking of users’ personal rights, did not have any effect. Schaffner suggests a closer cooperation between business and IT to increase the IT department’s understanding of business processes and requirements. |
| Reference | Main content |
|-----------|--------------|
| Worthen, 2007 [14] | Worthen focuses on web tools and private devices as shadow IT. He highlights security and compliance violations as central risks. He does not consider the prohibition of shadow IT, because this could cause conflicts between business and IT departments; also, the potential of user-driven innovations, which represents an opportunity of shadow IT, would be ignored. Instead he underlines that IT management has to find a strategy to deal with this subject. Worthen makes some general recommendations on how IT could handle shadow IT. |
| Shumarova/Swatman 2008 [15] | In their study the authors focus on shadow collaboration systems, e.g., social software, wikis, etc. They explain the rising usage of these systems by easy and cost-free access and the growing merger of private and work life. They deduce and discuss three basic strategies for handling these shadow collaboration systems: 1) rejection and banning; 2) limitation and regulation; 3) acceptance. |
| Dols, 2009 [16] | The topic of this master thesis is the search for causes of compliance defects in companies. In an empirical study, which analyzes Dutch and Belgian subsidiaries of PwC, shadow IT is identified as one of two reasons for such defects. The work shows the state of the discussion on the topic of shadow IT. Additional effects, causes or recommendations on shadow IT are not compiled. |
### B. Open Research Questions
The analysis of the articles listed in Table I and of further existing references indicates a number of relevant open issues. These open research questions are listed in this paragraph.
1) **Definition and theoretical framework**: The term shadow IT is mostly described in an experience-related way. An academic, coherent and consistent definition of shadow IT, and its classification according to a theoretical framework, are missing.
2) **Methods to deal with shadow IT**: There are no specific methods or tools for dealing with shadow IT. The existing frameworks and best practice approaches, such as ITIL [17] or COBIT [18], do not offer solutions regarding shadow IT. To develop a consistent methodology for this subject, the first steps are to identify shadow IT in practice and to evaluate the collected data. The developed methods need to be empirically tested. Best practices in the examined companies could be collected to find out how successful companies deal with this topic. The answers to the first two research questions should establish a detailed basis for the following research work.
3) **Business view**: Most articles focus on shadow IT from the IT view. The possibilities and consequences of the topic for the business are analyzed only occasionally.
4) **Positive effects**: The existing work primarily associates shadow IT with negative effects. There is barely a focus on the opportunities of shadow IT. Nevertheless, to identify the potentials of user-driven shadow IT, it is necessary to identify its positive outcomes, such as improved process orientation and faster adoption of technical innovations.
5) **Integrated approach:** Mainly, the current contributions focus only on partial aspects of shadow IT. To handle the increasing phenomenon in practice and to give organizations an orientation on the control of shadow IT, a balanced set of instruments and methods is necessary. Therefore, it is useful to collect best practices and develop an integrated, scientific approach, including its relation to the different elements and tasks of IT management, IT governance and IT service management.
III. **Definition and Occurrences of Shadow IT**
In Section I, we defined Shadow IT as a collection of systems developed by business departments without support of the official IT department.
This definition of shadow IT includes a variety of different occurrences [2]. One aspect is the usage of “Social Media Software” for business communication and data exchange, or of other services offered by providers from the internet, e.g., Cloud Computing or Software as a Service [14][19]. Furthermore, shadow IT includes the development and operation of self-built applications. In many cases these applications are Excel- or Access-based [7] and implemented by employees in the business departments. Moreover, the subject includes the purchasing, in-house development and support of business intelligence solutions [12]. In the field of hardware, shadow IT relates to the integration of self-procured notebooks, servers, network routers, printers or other peripherals [13]. These devices are procured directly from a retailer instead of being ordered via the official IT catalogue. A special case is employees’ own purchase of mobile devices, such as smartphones or tablets, and the usage of the related applications in the company network [20]. Finally, another occurrence is the development of own IT support structures inside the business departments [12][13]: in case of IT incidents or problems, technology-friendly colleagues are asked for help.
For the definition of shadow IT it is necessary to differentiate the term from end user computing (EUC). In this concept, the development of applications is delegated to the end users [21]. In contrast to shadow IT, EUC is officially initiated and supported. Primarily EUC is applied for the development of very easy IT solutions based on official platforms or for basic, individual configurations concerning specific applications.
This phenomenon-based description is one way to develop a definition for shadow IT. Another way is to consider existing work on informal organization structures [22]: unofficial and hidden shadow IT processes are created in parallel with official structures. Like informal organization structures, shadow IT deviates from official policies and establishes its own structures and processes. In addition, the emergence of both phenomena is linked with a distinct orientation towards employees’ needs and often results from a deficiency within the formal structures; e.g., the autonomous acting of business departments represents an irregularity concerning the centralization decisions within the defined IT governance.
Moreover, the emergence of shadow IT can be explained with information asymmetries and conflicts of interest between IT and business departments [23]. Information asymmetries in this relation appear as business requirements incorrectly understood by the IT department, and as a lack of knowledge in the business departments about general IT subjects and the offered IT services. This asymmetry can lead to overpromised offers regarding service levels and software functionality, and to overcharged prices for IT services. The business departments experience these effects and therefore try to reduce these risks. As a result, they deploy their own (shadow IT) solutions.
IV. **Identification and Evaluation of Shadow IT**
This section presents the current level of the research project in identifying and evaluating shadow IT. This refers to research question 2 and includes the collection of best practice data in the analyzed representative companies.
A. **Identification Methods**
Generally, there are three possible strategies for the collection of shadow IT information: 1) technical analyses [8]; 2) interpretation of help desk requests; and 3) direct surveys of employees in the business departments [7].
The first approach is to identify shadow IT hardware or software with technical tools. Existing license management software and a network analysis tool for shadow hardware, which has already been developed in cooperation with this project team, can be used. The second method is based on information retrieved from the company’s service desk. Incidents and problems identified there can be investigated for shadow IT, as project experience shows that a remarkable number of calls is related to unofficial IT.
The third approach is a process-oriented survey. It is based on structured interviews and process monitoring, to find out which IT tools employees use in their daily business. Based on the experience gained in these interviews, we will try to develop standardized questionnaires to collect more information on user behavior and the usage of shadow IT.
The types of results from this identification phase are, e.g., graphical process descriptions with the actually used IT tools and process-oriented IT landscapes with the identified shadow IT. Fig. 1 illustrates, as an example, the presentation of shadow IT in a process-oriented landscape on an abstract, high level. The identified shadow IT is assigned to one or several value chain activities [24], such as Operation or Administration. Also, the kind of shadow IT is shown. This type of representation can be refined to the level of departments and business processes to achieve a more detailed view. Thus, the process-oriented IT landscape allows picturing the impact of shadow IT on the business.
The different methods for shadow IT identification have certain advantages and disadvantages. The technical and help desk analyses enable a direct and quick search for shadow IT within the company’s IT architecture. However, it is difficult to find all existing shadow IT occurrences with these techniques, and it is not possible to identify shadow IT-related processes. In contrast, the structured interviews are based on the business processes and reveal the process relation of the identified shadow IT. However, this method depends on the knowledge and willingness of the interviewed users, e.g., the users might try to hide shadow IT applications from the interviewer. Furthermore, a lot of work and time is necessary to apply this method. Due to these facts, a combination of the methods is practical: the technical and help desk analyses should be the foundation for the process survey. Thereby the expenses and disadvantages can be reduced and process-related results can be provided.
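As a hedged illustration of the technical analysis strategy, the sketch below compares the devices visible in the local ARP table with an official inventory and reports the unmanaged remainder as shadow IT candidates. It is not the project’s own network analysis tool; the inventory file name and its one-MAC-address-per-line format are assumptions.

```python
# Minimal sketch of technical shadow IT identification: devices seen on the
# network but missing from the official inventory are shadow IT candidates.
# The inventory file and its one-MAC-address-per-line format are assumed.
import re
import subprocess

def discovered_macs() -> set:
    """Collect MAC addresses from the local ARP table ('arp -an')."""
    out = subprocess.run(["arp", "-an"], capture_output=True, text=True).stdout
    return set(re.findall(r"(?:[0-9a-f]{2}:){5}[0-9a-f]{2}", out.lower()))

def official_macs(inventory_file: str) -> set:
    with open(inventory_file) as f:
        return {line.strip().lower() for line in f if line.strip()}

shadow_candidates = discovered_macs() - official_macs("official_inventory.txt")
for mac in sorted(shadow_candidates):
    print("unmanaged device:", mac)
```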
B. Evaluation Methods
After the identification of shadow IT, each specific system has to be evaluated. This evaluation is important to assess the first needs for action arising from risks. The evaluation results build the basic input for the development of guidelines and strategies. The following section briefly presents an evaluation model developed in the study.
For the evaluation, it is necessary to collect comprehensive information on the company and its IT, its policies and strategies. The general aim is to define aggregated characteristics with which to evaluate the located shadow IT. Based on shadow IT examples in the literature, on discussions with companies, and on the existing interactions of shadow IT with risk management, IT governance and IT service management topics, several parameters can be derived as major evaluation criteria.
The major criterion relevance describes the significance and importance of a located shadow IT instance for the investigated organization. It therefore requires an analysis of the strategic relevance and of the criticality of the shadow IT concerning the business processes, IT security, compliance and IT service management. The major criterion quality refers to the system, service and information quality of the located shadow IT. Furthermore, the effects of shadow IT on the quality of business processing are of interest. The size of shadow IT is evaluated with regard to its use of resources and professionalism, its distribution in the company and its penetration with components and IT service processes.
Apart from these criteria, it is essential to evaluate the innovative potential of the shadow IT instance. Finally, it is of interest to judge, if shadow IT is operated parallel to an existing, official IT-System or if it is complementary. Table II summarizes the different major and sub-criteria of this shadow IT evaluation model.
| Shadow IT evaluation criteria | Sub-criteria level I | Sub-criteria level II |
|-------------------------------|----------------------|-----------------------|
| **Relevance** | Strategic relevance | Business process |
| | Criticality | IT security |
| | | Compliance |
| | | IT service management |
| **Quality** | System quality | Hard-/Software |
| | | Engineering process |
| | Service quality | |
| | Information quality | |
| | Quality of business processing | |
| **Size** | Use of resources and professionalism | |
| | Number of users | |
| | Shadow IT components | |
| | Shadow IT service processes | |
| **Innovative potential** | | |
| **Parallelism** | | |

All sub-criteria on the different levels need to be weighted individually for the regarded company and rated for each located shadow IT instance. For the evaluation of specific criteria, different procedures and models, such as maturity models, can be applied. The total ratings of the major criteria are based on the weighted ratings of their sub-criteria. With these results, each shadow IT instance is transferred into a portfolio, as exemplarily shown in Fig. 2.

Figure 2. Shadow IT Evaluation Portfolio - Example

The portfolio consists of the two axes relevance and quality; the size of an instance’s marker represents its size and the color its innovative potential. An instance existing in parallel to an official system is marked with two parallel lines. The portfolio indicates which shadow IT instances have to be addressed with high priority and establishes a basis for further management approaches to control shadow IT.
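A hedged sketch of the weighted scoring behind this portfolio follows: each sub-criterion rating is multiplied by a company-specific weight, and the axis totals position the instance. The weights, ratings, rating scale and the reduced criteria set are all invented for illustration.

```python
# Sketch of the weighted scoring behind the evaluation portfolio. Ratings
# (1 = low, 5 = high), weights and the reduced criteria set are invented.
WEIGHTS = {  # company-specific sub-criteria weights, summing to 1 per axis
    "relevance": {"strategic_relevance": 0.4, "criticality": 0.6},
    "quality":   {"system": 0.5, "service": 0.2, "information": 0.3},
}

def axis_score(axis: str, ratings: dict) -> float:
    return sum(WEIGHTS[axis][c] * ratings[c] for c in WEIGHTS[axis])

instance = {  # ratings for one located shadow IT instance
    "relevance": {"strategic_relevance": 4, "criticality": 5},
    "quality":   {"system": 2, "service": 3, "information": 2},
}
relevance = axis_score("relevance", instance["relevance"])  # 4.6
quality = axis_score("quality", instance["quality"])        # 2.2

# High relevance combined with low quality marks this instance as high priority.
print(f"portfolio position: relevance={relevance:.1f}, quality={quality:.1f}")
```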
Furthermore, the development of shadow IT-related key performance indicators is an intended aim of the project. Based on these indicators, it will be possible to carry out relevant benchmarks.
V. CONCLUSION AND FUTURE WORK
This paper introduced our research on shadow IT. The importance for IT management is shown and existing references are analyzed. As a result of this analysis, several open research questions could be pointed out. We have shown the initial steps and ideas of the research project with the focus on the definition, the identification and evaluation of shadow IT.
For the next steps, the theoretical questions on the definition of shadow IT and its relation to the IT management disciplines have to be worked out. Besides this, a detailed development of the discussed methods and their empirical application in practice will be carried out. Best practices for the handling of shadow IT will be investigated in the companies involved. Based on the results of the data collection, the research project aims at the development of an integrated and practical approach to control shadow IT. This enables the revelation of its innovative potentials and its further development into a “User-driven IT”.
ACKNOWLEDGMENT
This research project is partially funded by the Ministry of Science, Research and Arts Baden-Württemberg [25]. The authors would also like to thank Cassini Consulting GmbH and Schutzwerk GmbH for supporting this project. Finally, we would like to thank the reviewers for their valuable input.
REFERENCES
[1] R. Zarnekow and W. Brenner, “Auf dem Weg zu einem produkt- und dienstleistungsorientierten IT-Management,” HMD – Praxis der Wirtschaftsinformatik, vol. 40, no. 232, 2003, pp. 7-16.
[2] C. Rentrop, O. van Laak, and M. Mevius, “Schatten-IT: ein Thema für die Interne Revision,” Revisionspraxis – Journal für Revisoren, Wirtschaftsprüfer, IT-Sicherheits- und Datenschutzbeauftragte, April 2011, pp. 68-76.
[3] K. Quack, “Autoritätsverlust oder wahre Größe?” computerwoche.de, July 08, 2010, http://www.computerwoche.de/management/itstrategie/2349055/index.html, checked on 22/08/2011.
[4] Accenture GmbH, “Millennials vor den Toren – Anspruch der Internet-Generation an IT,” Kronberg, 2009.
[5] RSA Security Inc., “The Confessions Survey,” Bedford, 2007.
[6] See for the research project “Shadow IT” our website www.schattenit.in.htwg-konstanz.de, checked on 22/08/2011.
[7] R. Sherman, “Shedding light on data shadow systems,” Information Management Online, April 29, 2004, http://www.information-management.com/news/1002617-1.html, checked on 22/08/2011.
[8] R. Bayan, “Shed light on shadow IT groups,” techrepublic.com, July 09, 2004 http://www.techrepublic.com/article/hed-light-on-shadow-it-groups/5247674, checked on 22/08/2011.
[9] D. Jones, S. Behrens, K. Jamieson, and E. Tansley, “The rise and fall of a shadow system: Lessons for enterprise system implementation,” ACIS 2004 Proceedings Paper 96.
[10] S. Behrens and W. Sedera, “Why Do Shadow Systems Exist after an ERP Implementation? Lessons from a Case Study,” PACIS 2004 Proceedings, Paper 136.
[11] S. Behrens, “Shadow Systems: The Good, the Bad and the Ugly,” Communications of the ACM, vol. 52, no. 2, 2009, pp. 124-129, DOI: 10.1145/1461928.1461960.
[12] N. Raden, “Shedding light on shadow IT: Is Excel running your business?” Hired Brains Inc., Santa Barbara, 2005.
[13] M. Schaffner, “IT needs to become more like “Shadow IT”,” January 12, 2007 http://mikeschaffner.typepad.com/michael_schaffner/2007/01/we_need_more_sh.html, checked on 22/08/2011.
[14] B. Worthen, “User Management - Users who know too much and the CIOs who ‘fear’ them,” CIO.com, 15/02/2007, http://www.cio.com/article/28821/User_Management_Users_Who_Know_Too_Much_and_the_CIOs_Who_Fear_Them_?page=1&taxonomyId=3119, checked on 22/08/2011.
[15] E. Shumarova and P. A. Swatman, “Informal eCollaboration channels: Shedding light on “Shadow CIT,” eCollaboration: Overcoming Boundaries through Multi-Channel Interaction, 21st Bled eConference, June 15-18, 2008, Bled, pp. 371-394.
[16] T. Dols, “Influencing factors towards non-compliance in information systems,” UAS Utrecht, 2009.
[17] Office of Government Commerce, “ITIL - Service Strategy,” London: TSO, 2007.
[18] IT Governance Institute, “COBIT 4.1,” Rolling Meadows, 2007.
[19] B. Stone, “Firms fret as office e-mail jumps security walls,” International Herald Tribune, January 11, 2007, http://www.nytimes.com/2007/01/11/technology/11iht-web.0111email.4167773.html?_r=1, checked on 22/08/2011.
[20] N. Zeitler, “iPad & Co. am Arbeitsplatz: Strategie gegen die Schatten-IT,” CIO.de, November 15, 2010, http://www.cio.de/_misc/article/printoverview/index.cfm?pid=157&pk=2250537&kop=lst, checked on 22/08/2011.
[21] J.C. Brancheau and C. Brown, “The management of end-user computing: Status and Directions,” ACM Computing Surveys, vol. 25, no. 4, 1993, pp. 437–482.
[22] R. Lang, “Informelle Organisation,” in Handwörterbuch Unternehmensführung und Organisation, vol. IV, G. Schreyögg, A. von Werder, Eds. Stuttgart: Schäffer-Poeschel, 2004, pp. 497–505.
[23] V. Gurbaxani and C. F. Kemerer, “An Agent-Theoretic Perspective of the Management of Information Systems,” Proceedings of the Twenty-Second Hawaii Conference on Systems Science, vol. III, 1989, pp. 141-150, doi:10.1109/HICSS.1989.49234.
[24] M. Porter: “Competitive advantage: creating and sustaining superior performance,” The Free Press, New York, 1985.
[25] Ministry of Science, Research and Arts Baden-Württemberg, Germany – Website: http://mwk.baden-wuerttemberg.de, checked on 22/08/2011.
A Secure and Distributed Infrastructure for Health Record Access
Victoriano Giralt
Central ICT Services
University of Málaga
Málaga, Spain
e-mail: email@example.com
Abstract—The present paper describes the initial ideas for the author’s PhD dissertation. The main goal of the research is to apply Federated Identity and Access Management techniques, in widespread use in academic networks and increasingly on the whole Internet, to the controlled, accountable and open access to health information over the Internet, as well as to controlling and securing the linkage of such data to a given individual. The challenge is to open the data buried in health records for research without giving out information that would allow individual persons to be identified, all while keeping the real owners of the data, the individuals, in control of the information release. For this, we propose the use of federated identity to control access to linkage information about medical acts made publicly available. Using this technique, it would even be possible to provide totally anonymous informed health care.
Keywords—health record; security; accountability; Federated Identity and Access Management.
I. INTRODUCTION
Health related data has the highest level of privacy protection in most countries data protection laws, but, at the same time, it is in the best interest of the whole medical science and the individuals themselves, that health data can be readily available.
The emergency room scenario has been used many times as a use case for expedited access to the whole health record of an individual, since consent cannot be requested in the most life-threatening situations [1].
On the other hand, free access to high volumes of anonymous but traceable patient data (traceable not to a real person, only to an anonymous single individual) could be an invaluable resource for clinical research.
Access to health data should, in most cases, be granted by the individual to whom such data pertains, and should be accountable to those who see those data.
The present paper proposes a system that can be built using protocols and tools already available and in use, which can both allow free access to anonymous health data and provide controlled and accountable means for de-anonymising the health records and tracing them back to the original person to whom they relate [4][5][10][11][12].
The proposed work builds upon the author’s experience in dealing with personal data in diverse scenarios, with some award-winning results [9]. The driving force in the past eight years has been to put persons at the centre of their on-line lives and in control of the personal data about them [14][15]. In this case, we propose a change of the status quo: at present, health records are owned by the institutions or practitioners that produce them, instead of by the persons who are the subjects of those records. The main reason for our work is to put these persons (all of us) at centre stage and give them control over their own information, regardless of who has produced or created it. The present paper has resulted both from this experience and from the impression that the time is right for connecting two fields, health record management and electronic identity management, that are experiencing rapid development at this point in time [13][11][1].
By publishing this work-in-progress paper, the author tries to gather as much feedback as possible from others who might be working on ideas that could cross-pollinate and contribute to the final proposed landscape.
We will present the different scenarios of access and creation of health records by means of user stories:
- Individual enrolment
- Creation of health record in clinical practice
- Access to health records in clinical practice
- Access to health records from the emergency room
- Access to health information for research purposes
- Access to personal identity information
Finally, we will describe the technologies that will be used to create a demonstrator.
II. TECHNICAL TERMINOLOGY
The proposed work involves several domains with specialised terminologies that are not commonly understood. The author’s main field of work, despite his academic background, is electronic identity, privacy and access control, making this the main domain for the work.
A. Electronic identity terms
- Identifiable individual: A single physical person who can be identified by a set of personal data that constitutes their identity record.
- Attribute: A property of an identity record consisting of one or more values. All the values of an identity attribute are related by a common purpose or meaning. For example, the collection of telephone numbers belonging to
a person might form an identity attribute on the identity record that represents that individual.
- **Principal**: a person for whom another entity acts as an agent or representative.
- **Pseudonym**: an identifier that can single out an individual without revealing the real identity.
- **Biometric information**: personal information attributes derived from physical or biological characteristics of an individual.
### III. General Information Processing and Storage Terms
- **Hash**: the result of using a hash function on an element of a data set. These functions transform larger data sets into smaller ones and always produce the same result given the same input, as illustrated in the sketch following this list.
- **Universally Unique Identifier (UUID)**: a 16-byte (128-bit) string that is guaranteed to be different from all other UUIDs generated before 3603 A.D., if the recommended algorithms are used [2].
- **Resolver**: an entity that can link pseudonymous identifiers, like UUIDs, to information about principals, with or without identifying them.
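A minimal illustration of the hash and UUID terms above, using Python’s standard library (SHA-256 here stands in for any hash function):

```python
# Hashes are deterministic: the same input always yields the same digest.
# UUIDs are unique: each generation yields a different identifier.
import hashlib
import uuid

record = b"telephone: +34 952 000 000"
print(hashlib.sha256(record).hexdigest())  # some fixed digest
print(hashlib.sha256(record).hexdigest())  # the same digest again

print(uuid.uuid4())  # a random UUID; uuid.uuid1() uses the time-based scheme
print(uuid.uuid4())  # a different identifier on every call
```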
#### A. Federated Identity and Access Management terms
- **Identity Provider (IdP)**: An entity able to identify individuals and provide attributes pertaining to their identity.
- **Relying Party (RP)**: An entity that trusts the federation and accepts identities asserted by IdPs.
- **Federation**: Infrastructure supporting the trust links between IdPs and RPs.
- **Authorisation Server (AS)**: a trusted entity that takes access decisions based on attributes of the principals involved in a transaction, in support of an RP.
- **Attribute Authority (AA)**: a trusted entity that asserts attributes about principals, with or without revealing their identities to other principals involved in a transaction.
- **Level of Assurance (LoA)**: the level of confidence with which the identity of an individual has been vetted in order to be linked to an electronic identity record.
#### B. Medical terms
- **Health Level Seven (HL7)**: an international standards organisation that works for the interoperability of clinical and administrative health data; the term is also used to refer to the standards defined by said organisation. [3]
- **Act**: one of the three main classes defined in the HL7 reference information model (RIM) [8] that represent actions that are executed and must be documented as various parties provide health care. [3]
- **Role**: second of the main classes defined in the HL7 RIM that establishes the function played by entities as they participate in health care acts. [3]
- **Entity**: third of the classes that represents the physical things and beings that are of interest to, and take part in, the health care. [3]
- **Act Relationship**: represents the binding of one act to another. [3]
- **Participation**: expresses an act’s context, such as who performed it, for whom and where. [3]
- **Role Link**: represents relationships between individual roles. [3]
- **Health Record (HR)**: a collection of health information related to an act or to the general health state of an individual. [3]
### IV. Actors
#### A. Patient
We will use the term patient to refer to a person who is the subject of a medical act, although in both classical and modern medicine, keeping persons in a healthy condition is the main target of medical practice.
#### B. Practitioner
Practitioner will refer to any health care professional who interacts with patients in medical acts.
#### C. Emergency Room Practitioner
We have singled out emergency room (ER) practitioners, as they will receive special treatment in the system regarding the way they can access health records.
#### D. Staff member
This term refers to non-medical professionals who have a role in medical acts, like clinic receptionists or hospital administrative staff, and who require access to partial content of the HRs or to personal data of the patients.
#### E. Relative
A person with a family or other kind of social relationship to a patient who might play a role in authorising access to HRs or in providing personal information about the patient.
#### F. Researcher
A person who requires anonymous or, at most, pseudonymous access to HRs for scientific research work.
### V. General System Description
The proposed system aims to provide both freely available anonymous HRs, published as HL7 [7] XML [16] documents on common web servers, and a privacy-controlled way of linking such records to the patients that participated in the corresponding medical acts.
There exist both commercial and non-profit repositories for personal health records, but they are centralised and, in many cases, under the tight control of entities like insurance companies. We propose a totally open and distributed system based on trust models proven in higher education, research, government and vertical industries. The level of trust can be high enough to use such a federation for remotely controlling nuclear fusion reactors or for submitting experiments to synchrotron facilities.
#### C. Patient identification
The patients themselves will register with an IdP recognised in the global health care federation, using a method that provides an acceptable LoA. The personal data record will receive a UUID that can be published into the resolver cloud.
Patients should also get a hash out of some standardised biometric information. Ideally, this information should be genetic, as it is the only type of biometric data that every body part carries. The state of the art does not yet allow for a full genomic characterisation of an individual in a reasonable time and at a reasonable cost, but there is fast progress in that area. Any other biometric information can be standardised and, for the purpose of the demonstrator, we propose the use of digitised fingerprints, which will be hashed using Automated Fingerprint Identification Systems (AFIS) [17] algorithms.
Using fingerprints could be a handicap for the ER use case that we will present if the patient has lost both hands, but the prevalence of such situations is not high enough to render the system useless.
Once the patient has a biometric hash, it is associated with a UUID that will be published in a special resolver finder cloud that allows for, so to speak, backwards searches. This is required mainly for the ER use case.
The patient personal data will also include any relevant information needed for authorisation related contacts, either direct or through a relative.
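To make the enrolment step concrete, the following minimal Python sketch shows how a biometric hash could be derived and registered for backwards search. It is illustrative only: SHA-256 over a serialised template stands in for the AFIS hashing algorithms [17], and the in-memory dictionary stands in for the special resolver finder cloud.

```python
import hashlib
import uuid

def biometric_hash(template: bytes) -> str:
    # Stand-in for an AFIS template hash [17]; a real deployment would
    # hash a standardised fingerprint template with AFIS algorithms.
    return hashlib.sha256(template).hexdigest()

# Stand-in for the special resolver finder cloud that supports
# "backwards" searches: biometric hash -> patient UUID.
reverse_finder = {}

def enrol(template: bytes) -> str:
    patient_uuid = str(uuid.uuid4())  # UUID for the personal data record
    reverse_finder[biometric_hash(template)] = patient_uuid
    return patient_uuid

def er_lookup(template: bytes):
    # ER use case: recover the patient UUID from biometrics alone.
    return reverse_finder.get(biometric_hash(template))

uid = enrol(b"serialised-fingerprint-template")  # placeholder template bytes
assert er_lookup(b"serialised-fingerprint-template") == uid
```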
#### D. Practitioner, staff and researcher identification
All other principals that participate in health care acts will register with pertinent IdPs in the federation, which could be run by hospitals, colleges of physicians or nurses, insurance companies, etc. These IdPs will assert attributes that allow the ASs, which control access to the RPs in the resolvers, to make appropriate decisions about granting access to the requested information. Thus, no one will get more information than that required to participate in a given act.
### VI. User Stories
For the sake of brevity, we will give only a brief description of the user stories proposed in the introduction.
#### A. Individual enrolment
I’m a patient and want to publish my HR.
1) I select an IdP or the national health system provides me one.
2) I identify to the IdP using documents to achieve the required LoA and provide contact information for me and my closest relative.
3) I get the UUID that identifies my personal data.
4) My UUID is published by the IdP resolver.
5) My biometric hash is published in the resolver cloud.
6) I get my biometric hash UUID and link it to my UUID.
#### B. Creation of health record in clinical practice
1) I as a patient go visit a practitioner.
2) All acts are compiled into HR documents.
3) The HR are dated and get UUIDs.
4) The HR UUIDs and my UUID are inserted in my IdP resolver.
5) The HR UUIDs are sent to the resolver finder cloud from the resolver together with the pertinent pointer.
#### C. Access to health records in clinical practice
1) I as a patient go visit a practitioner.
2) The practitioner requests historic HR information.
3) I provide the practitioner with my UUID.
4) The practitioner identifies to the pertinent IdP and queries the resolver finder cloud and then, the appropriate resolver.
5) The resolver AS sends me a message indicating the practitioner identity, information about the requested data and a request for granting authorisation.
6) I grant the access and set a time limit.
7) The practitioner can access the data.
#### D. Access to health records from the emergency room
1) An unconscious and unidentified patient arrives in a life-threatening condition.
2) The standard biometric parameters are determined and hashed appropriately.
3) A practitioner in the ER identifies to an IdP connected to an AA that asserts the attributes that verify the ER job.
4) The asserted attributes allow access to the special resolvers for biometric hashes, and to the UUID resolvers without requesting authorisation from the patient or relatives.
5) The resolvers return all HR UUIDs related to the UUID associated to the biometric hash.
6) The ER practitioner can retrieve the whole history of HRs related to the patient, without knowing the identity of the individual.
#### E. Access to health information for research purposes
1) I am a researcher working on a certain disease.
2) I search the web and collect all pertinent HRs.
3) I need to know about historic HR data about the same individuals that form the population under study.
4) I identify to my IdP that has an AA that asserts attributes to prove my researcher condition.
5) I query the resolvers for other HR UUIDs that belong to the same individuals as the HR UUIDs in the collection under study.
6) Depending on user preferences, data sensitivity or other parameters, patients get a request for granting access to the HR.
#### F. Access to personal identity information
1) I am a hospital staff member.
2) I need to know a patient identity for billing purposes.
3) I identify to the hospital IdP and the hospital AA asserts attributes to prove my administration staff status.
4) I query the resolver finder cloud to find the resolver for the patient UUID.
5) I query the patient resolver.
6) I get back the data needed to bill the patient.
7) The patient is notified of the personal data request.
### VII. Technologies for Implementing the System
There are several options for some of the technologies needed to implement the proposed system. Producing a demonstrator is one of the main aims of the work described in the present paper, so it has been necessary to select a given technology for the different parts of the system. The selection has been mostly based on the author’s experience or common practice in the fields in which he is working.
#### A. Security Assertion Markup Language (SAML)
SAML version 2 [4] is a proven method, in widespread use in present identity federations, for expressing trust via electronic means and for asserting information about principals. It allows for inter-domain authentication, authorisation and accounting of access to resources. Such information is carried in XML [16] documents.
SAML2 will be used for the IdPs, AA and some RPs in the system.
#### B. Open Authorisation (OAuth)
Also at version 2, OAuth is a protocol that allows third-party access to data with the express authorisation of the owner of that data [5].
OAuth will be used for the AS and some RPs in the system.
#### C. Distributed Hash Tables (DHT)
A distributed hash table (DHT) is a class of decentralised distributed systems that provide a look-up service similar to a hash table: (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key. Responsibility for maintaining the mapping from keys to values is distributed among the nodes in such a way that a change in the set of participants causes a minimal amount of disruption. This allows a DHT to scale to extremely large numbers of nodes and to handle continual node arrivals, departures, and failures [6].
Due to the distributed, decentralised and resilient nature of DHTs, the system will use this technology to implement the resolver finder cloud. The keys will be UUIDs and the values will be URLs pointing to the resolver that can resolve a given UUID. In the special case of biometric hashes, the keys will be the hashes themselves and the values will be UUIDs to feed into the normal resolver finder cloud.
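As an illustration of how the resolver finder cloud could sit on top of a DHT, here is a toy consistent-hashing sketch in Python. Node names and URLs are hypothetical placeholders; a real deployment would use an established DHT protocol (e.g., Chord or Kademlia) rather than this single-process simulation.

```python
import hashlib
from bisect import bisect_right

class ResolverFinderDHT:
    """Toy single-process stand-in for the resolver finder cloud.

    Keys (UUIDs or biometric hashes) are placed on a hash ring and each
    key is owned by the first node clockwise from its hash point.
    """
    def __init__(self, node_names):
        self.ring = sorted((self._h(n), n) for n in node_names)
        self.points = [p for p, _ in self.ring]
        self.store = {n: {} for n in node_names}

    @staticmethod
    def _h(key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def _node_for(self, key):
        i = bisect_right(self.points, self._h(key)) % len(self.points)
        return self.ring[i][1]

    def put(self, key, value):
        self.store[self._node_for(key)][key] = value

    def get(self, key):
        return self.store[self._node_for(key)].get(key)

# Hypothetical nodes; a UUID key maps to the URL of the resolver
# that can resolve it (the UUID shown is the RFC 4122 example value).
dht = ResolverFinderDHT(["node-a", "node-b", "node-c"])
dht.put("f81d4fae-7dec-11d0-a765-00a0c91e6bf6", "https://resolver.example.org/")
print(dht.get("f81d4fae-7dec-11d0-a765-00a0c91e6bf6"))
```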
### VIII. A Deeper View of User Story C
We will give a more detailed description of user story C, *Access to health records in clinical practice*, now that we know the technologies we will be using for the demonstrator implementation.
The actors and elements involved are:
- **Patient**: The subject in the clinical act.
- **Practitioner**: The health care professional performing the clinical act.
- **IdP**: The Identity Provider where the Practitioner authenticates.
- **Resolver**: The element that resolves the Patient identifier and locates pointers to HR.
- **RP**: The element that grants access to the Resolver.
- **AS**: The element inside the RP that permits the retrieval of pointers.
- **HR**: Relevant health information about the Patient.
Patient and Practitioner denote both the physical persons and their electronic representations, as well as computer applications acting as proxies in their name. The elements are computer applications and electronic representations of information.
Patient goes to visit Practitioner for some clinical Act. Let us assume that it is related to blood cholesterol levels. It is the first time Patient and Practitioner meet, so Practitioner needs some historic data about blood samples, mostly cholesterol levels and some related values. Thus, Patient provides Practitioner with a UUID that can be linked to published HRs through the use of resolvers. The process flow proceeds as depicted in figure 2, with the following steps indicated as circled numbers:
1) Patient provides Practitioner with UUID
2) Practitioner goes to the resolver cloud
3) RP on resolver cloud requests Practitioner identity
4) Practitioner identifies to the pertinent IdP and returns to RP
5) AS in resolver cloud RP finds Patient authorisation method and requests access permissions for Practitioner.
6) The resolver AS sends Patient a message indicating Practitioner identity, information about the requested data and a request for granting authorisation.
7) Patient grants access and sets a time limit.
8) Practitioner retrieves a set of UUIDs from the resolvers that belong to previous Patient HRs with relevant information.
9) Practitioner retrieves the needed HRs.
RPs log all resolution requests and authorisation responses with pertinent identity information about the requestor and grantor, in order to create audit trails.
In our demonstrator, the SAML2 [4] protocol will be used to carry identity and authentication information, while OAuth2 [5] will carry the authorisation requests and responses, as sketched after the next paragraph.
It is possible to increase the security of the previous flow by requiring Patient to also authenticate against an IdP before replying to the authorisation request.
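As a hedged sketch of steps 6 to 9 from Practitioner's side, the following Python fragment uses the standard OAuth2 authorization-code exchange [5] with the `requests` library. All URLs, client identifiers and parameter values are hypothetical placeholders, not part of the actual demonstrator.

```python
import requests

# Hypothetical endpoints of the resolver cloud's AS and RP.
AS_TOKEN_URL = "https://resolver.example.org/oauth2/token"
POINTER_URL = "https://resolver.example.org/hr-pointers"

# Steps 5-7 happen out of band: the AS contacts Patient, who grants
# time-limited access; assume the AS then hands Practitioner an
# authorisation code, exchanged here for a token (standard OAuth2
# authorization-code grant [5]).
token = requests.post(AS_TOKEN_URL, data={
    "grant_type": "authorization_code",
    "code": "CODE-ISSUED-AFTER-PATIENT-APPROVAL",  # placeholder
    "client_id": "practitioner-app",               # hypothetical client
}).json()["access_token"]

# Steps 8-9: retrieve the HR UUID pointers, then the HRs themselves.
hr_uuids = requests.get(
    POINTER_URL,
    params={"patient": "f81d4fae-7dec-11d0-a765-00a0c91e6bf6"},  # placeholder
    headers={"Authorization": f"Bearer {token}"},
).json()
for hr_uuid in hr_uuids:
    print("would now resolve and fetch HR", hr_uuid)
```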
### IX. Conclusions
In case the ideas presented in this paper are deemed worth the effort and such effort produces the expected results, the system will have two main advantages:
- a paradigm shift moving ownership of the data from the hands of those that produce such data into the hands of those to whom the data belongs,
- and open data availability for many purposes.
### Acknowledgements
The author is grateful for all the fruitful conversations he has had with wise people in the Identity Federation space, with special mention of those who have seeded the ideas resulting in the work described in the present paper, including, but not restricted to, and in no particular order, Roland Hedberg, Ken Klingenstein, Andrew Cormack, J.A. Accino, Licia Florio, Klaas Wierenga, Milan Sova, RL “Bob” Morgan, Lorenzo Gil, Matthew Gardiner, Dave Birch, David Chadwick, and, last, but not least, for his very special support, Diego Lopez.
### References
[1] P. Groen, P. Mahootian and D. Goldstein, *Medical informatics: emerging technologies and ‘open’ health IT solutions for the 21st century*, January 2011, in press.
[2] ITU, *Universally unique identifiers*. URL: http://www.itu.int/ITU-T/asn/uuid.html retrieved: November 21st, 2011.
[3] R. Gajanayake, R. Iannella and T. Sahama, *Sharing with care: An information accountability perspective*, Internet Computing, IEEE , vol.15, no.4, pp.31-38, July-Aug. 2011 doi: 10.1109/MIC.2011.51 URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5749997&isnumber=5934844 retrieved: November 21st, 2011.
[4] P. Madsen et al., *SAML V2.0 Executive Overview*. OASIS Committee Draft, April 2005. Document ID sstc-saml-tech-overview-2.0-cd-01-2col URL: http://www.oasis-open.org/committees/download.php/13525/sstc-saml-exec-overview-2.0-cd-01-2col.pdf retrieved: November 21st, 2011.
[5] E. Hammer-Lahav, D. Recordon and D. Hardt, *The OAuth 2.0 Authorization Protocol*, http://tools.ietf.org/html/draft-ietf-oauth-v2-21 retrieved: November 21st, 2011.
[6] Wikipedia, *Distributed hash table*, http://en.wikipedia.org/wiki/Distributed_hash_table retrieved: November 21st, 2011.
[7] *Health Level Seven Standard Version 2.7 - An Application Protocol for Electronic Data Exchange in Healthcare Environments*, ANSI/HL7 V2.7-2011, National Institute of Standards and Technology, U.S. Government Printing Office: Washington, DC, 2011.
[8] *HL7 Version 3 Standard: Reference Information Model, Release 2*, ANSI/HL7 V3 RIM, R2-2010, National Institute of Standards and Technology, U.S. Government Printing Office: Washington, DC, 2010.
[9] V. Giralt, et al., *Example of Privacy Management in a Public Sector Organizational Electronic Directory* in Cunningham P., Cunningham M. (Eds.) *Expanding the Knowledge Economy: Issues, Applications, Case Studies*, IOS Press, Amsterdam, pp. 1386-1393, 2007
[10] V. Giralt, et al., *Federated Identity Infrastructure for the Andalusian Universities. Deployment of a Multi-technology Federation* in Cunningham P., Cunningham M. (Eds.) *Collaboration and the Knowledge Economy: Issues, Applications, Case Studies*, IOS Press, Amsterdam, pp. 1139-1144, 2008
[11] D. Simonsen, *Revealing the Identity of Federations*, the 16th European University Information Systems Organisation (EUNIS) congress, EUNIS 2010, University Information Systems: Selected Problems, University of Warsaw, Poland, pp. 11-20.
[12] M. Ramos, et al., *Design and Implementation Details of the Public Andalusian Universities Identity Federation CONFIA*, the 16th European University Information Systems Organisation (EUNIS) congress, EUNIS 2010, University Information Systems: Selected Problems, University of Warsaw, Poland, pp. 217-224
[13] Microsoft® “Geneva” Server and Sun OpenSSO: Enabling Unprecedented Collaboration Across Heterogeneous IT Environments, Microsoft® and Sun Microsystems White Paper, 2009, URL: http://download.microsoft.com/download/C/F/D/CFD1D9C8-EBA4-4780-B34B-D8EB5A4792BF/Geneva%20and%20Sun%20OpenSSO.pdf retrieved: November 21st, 2011.
[14] J.A. Accino, et al., *dUMA: comprehensive personal information management*, the 17th European University Information Systems Organisation (EUNIS) congress, EUNIS 2011, Maintaining a Sustainable Future for IT in Higher Education, Trinity College Dublin, Ireland
[15] J.A. Accino, M. Cebrían, and V. Giralt, *Identity Based Clusters of Applications for Collaboration and eLearning*, the 15th European University Information Systems Organisation (EUNIS) congress, EUNIS 2009, IT: Key of the European Space for Knowledge, University of Santiago de Compostela, Spain
[16] T. Bray, J. Paoli, C.M. Sperberg-McQueen, E. Maler and F. Yergeau eds., *Extensible Markup Language (XML) 1.0 (Fifth Edition)*, W3C Recommendation 26 November 2008, URL: http://www.w3.org/TR/2008/REC-xml-20081126/ retrieved: November 21st, 2011.
[17] K.R. Moses, P. Higgins, M. McCabe, S. Prabhakar, S. Swann, *Fingerprint Sourcebook-Chapter 6: Automated Fingerprint Identification System (AFIS)*, National Institute of Justice/NCJRS 225326, 2010, URL: http://www.ncjrs.gov/pdffiles1/nij/225326.pdf retrieved: November 21st, 2011.
Active Mechanisms for Cloud Environments
Irina Astrova
Institute of Cybernetics
Tallinn University of Technology
Tallinn, Estonia
firstname.lastname@example.org
Stella Gatziu Grivas, Marc Schaaf
Institute for Information Systems
University of Applied Sciences Northwestern Switzerland
Olten, Switzerland
{stella.gatziugrivas, email@example.com
Arne Koschel
Faculty IV, Department for Computer Science
University of Applied Sciences and Arts Hannover
Hannover, Germany
firstname.lastname@example.org
Ilja Hellwich, Sven Kasten, Nedim Vaizovic, Christoph Wiens
Faculty IV, Department for Computer Science
University of Applied Sciences and Arts Hannover
Hannover, Germany
email@example.com
Abstract—Active mechanisms are used for the coordination (e.g., scalability) of IT resources in clouds. In this paper, we give an overview of existing technologies and products – viz., OM4SPACE Activity Service, RESERVOIR, Amazon SNS, IBM Tivoli Live Monitoring Service, Zimory and PESA – that can be used for providing active mechanisms in cloud environments. Our overview showed that these technologies and products mainly differ in the architectures they support and the cloud layers they provide.
Keywords—Cloud computing; events; active mechanisms.
**I. INTRODUCTION**
Cloud computing has become more and more popular. Many companies (viz., cloud providers) are outsourcing their IT resources into clouds so that users can hire those resources only when they really need them and give them back when they no longer do. This creates a new challenge for cloud providers: they need to provide users with systems that can automatically assign IT resources on the fly. These systems should make it possible to evaluate events from different event sources at one or more external coordination points, which can coordinate the usage of the IT resources in clouds. Thus, the systems should use active mechanisms for the coordination (e.g., scalability) of IT resources in cloud environments.
The purpose of this paper is to give an overview of existing technologies and products that can be used for providing active mechanisms in cloud environments. Technologies like OM4SPACE Activity Service, RESERVOIR and PESA are mostly theoretical concepts and not end products. There are also (commercial) end products like Amazon SNS, IBM Tivoli Live Monitoring Service and Zimory.
**II. OM4SPACE ACTIVITY SERVICE**
OM4SPACE [2] provides software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS). The crucial part of OM4SPACE is the Activity Service.
In cloud environments, a large number of services often run on different layers. The Activity Service offers an approach for managing a large number of events from different event sources, processing these events and triggering appropriate actions on them, e.g., starting new virtual machine instances when a specified threshold for the CPU load has been exceeded.
Figure 1 shows the architecture of the Activity Service, which consists of the following components:
- **Event Source**: This component can be an arbitrary part of a cloud environment; it generates different types of events (both simple and complex). Every event is sent to the Event Service for further processing. The Event Source can be on any layer of a cloud environment: SaaS, PaaS and IaaS.
- **Event Service**: This component receives events from an arbitrary number of Event Sources and performs the first step of processing, which is divided into two phases. The first phase dispatches the received events to Event Consumers that are registered for this type of events. The second phase consists of performing complex event detection (CED) on the incoming event stream. This CED can cause new complex and enriched events to be created by the Event Service and dispatched to the registered Event Consumers. Thus, the Event Service controls a granularity shift of the incoming event stream and helps to scale down the number of events, which enables complex event and rule processing by the Rule Execution Service.
- **Event Consumer**: This component receives a particular event type from the Event Service. Events can be of two types: simple events that the Event Service receives and complex events that the Event Service detects and generates. To receive events from the Event Service, an Event Consumer has to implement an appropriate event handler service, which needs to be published to the service registry.
The Event Service discovers event handler services by looking them up in the service registry. To inform the Event Service about the events an event handler service is interested in, filtering criteria have to be added to the WSDL description, from which they will be extracted by the Event Service.
- **Rule Execution Service**: This component receives events from the Event Service to match them against rules. Thus, it acts as an Event Consumer of the Event Service by registering an event handler service. Matching of the rules results in the execution of action handlers. An action handler needs to be implemented by each of the components that are to be called from within the rules. The rules are stored in the Rule Base, which is managed by the Rule Management Service.
- **Event Monitor (EM)**: Not all components are built to actively notify the Event Service. For those components, a monitor capsule mechanism is used. As a result, a small application that acts as a monitor capsule around the Event Source can be implemented in such a way that it obtains events from the Event Source and transfers them to the Event Service. Furthermore, the monitor capsule can provide conversion between different types of events.
OM4SPACE adapts an activity service embedded in active database management systems (ADBMSs) to cloud environments. Active mechanisms are divided into different components that are put into the cloud. Each of these components has one or more well-defined interfaces. So the implementation of the components is interchangeable. The communication between the components is implemented using a service-oriented architecture (SOA). In addition to processing a large number of events, the Activity Service can be used for monitoring and scaling applications in the cloud.
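The following minimal Python sketch illustrates the two-phase behaviour described above, dispatching simple events and then performing CED on the incoming stream. It is an illustration of the concept only, not the OM4SPACE interfaces, which are realised as Web services with WSDL-declared filtering criteria; all names and thresholds are hypothetical.

```python
from collections import deque

class EventService:
    """Conceptual sketch of dispatch + complex event detection (CED)."""
    def __init__(self):
        self.consumers = {}            # event type -> list of handlers
        self.window = deque(maxlen=3)  # recent CPU-load samples for CED

    def register(self, event_type, handler):
        self.consumers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Phase 1: dispatch the event to all registered consumers.
        for handler in self.consumers.get(event_type, []):
            handler(payload)
        # Phase 2: CED on the incoming stream, producing a new
        # complex, enriched event when the pattern is detected.
        if event_type == "cpu_load":
            self.window.append(payload["load"])
            if len(self.window) == self.window.maxlen and min(self.window) > 0.9:
                self.publish("cpu_overload", {"samples": list(self.window)})

svc = EventService()
# Action handler standing in for the Rule Execution Service's action:
svc.register("cpu_overload", lambda e: print("start new VM instance", e))
for load in (0.95, 0.97, 0.93):  # sustained high load -> complex event
    svc.publish("cpu_load", {"load": load})
```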
![Figure 1. Architecture of OM4SPACE Activity Service [2].](image)
**III. RESERVOIR**
RESERVOIR [3][4] provides IaaS. It requires the usage of manifests. A manifest serves as a contract between service and infrastructure providers.
Figure 2 shows the architecture of RESERVOIR, which consists of the following components:
- **Hypervisor**: This is a layer of abstraction, which runs on top of physical hardware. It allocates (physical) resources to virtual machines and manages and controls their execution by booting, suspending and shutting them down as required. It can even provide the replication and migration of virtual machines. Examples of a Hypervisor include Xen [5] and VMWare [6].
- **Virtual Execution Environment Host (VEE Host)**: This is the lowest layer in the architecture and provides plug-ins for different hypervisors. It enables upper layers to interact with heterogeneous virtualized products.
- **Virtual Execution Environment Manager (VEE Manager)**: This layer implements the key abstractions needed for cloud computing and provides the functionality to control multiple VEE Hosts. Because of cross-site interactions between multiple different VEE Managers, the architecture offers the possibility to deal with and federate different sites, which implement heterogeneous virtualized products. Examples of a VEE Manager include OpenNebula [7].
- **Service Manager**: This layer is an interface that builds the connection to the Service Providers. It ensures that their requirements are correctly enforced.
- **Service Provider**: This is the highest layer in the architecture and offers services to provide operations of specified businesses and uses the Service Managers to connect to the cloud.
RESERVOIR brings active mechanisms and the usage of events into cloud environments. The scalability of a service is enabled through an application description language (which introduces a monitoring framework along with Monitoring Agents) and key performance indicators (which describe the state of the service). The Monitoring Agents send Monitoring Events to the service management infrastructure, where these events are processed and rules are executed. The execution of these rules is additionally monitored by OCL operations to ensure correctness.
![Figure 2. Architecture of RESERVOIR [3].](image)
**IV. AMAZON SNS**
Amazon SNS (Simple Notification Service) [9] is a middleware product that offers a service for managing and sending notifications over cloud environments. The crucial part of Amazon SNS is topics.
Before sending notifications, a topic has to be created. This is done by providing the topic name, which Amazon SNS uses to generate a unique identifier called the Amazon Resource Name. After the topic has been created, notifications can be sent to it. A notification consists of a message, the Amazon Resource Name and, optionally, a subject. Amazon SNS also adds meta-data like a signature and a timestamp to the notification.
To receive notifications, a subscriber needs to be registered for a topic. Every notification that arrives at a topic is delivered to all subscribers of this topic. A subscription contains endpoint information, which defines how the notification is delivered to the subscriber. Amazon provides the following types of endpoints:
- **Email**: The notification is sent via an email by using the SMTP protocol. The notification subject is mapped to the email subject and the notification message is mapped to the email message. Amazon SNS adds additional information to every outgoing notification, which contains an HTTP link for unsubscribing.
- **Email JSON**: The notification is also sent via an email but by using the human and machine-readable format called JSON [10]. All notification properties (e.g., the timestamp, the message and the subject) are stored in a list of key-value pairs.
- **HTTPS**: The notification is delivered by using the HTTPS protocol. For an incoming notification, Amazon SNS performs an HTTP POST on a specified URL. The body of the HTTP POST contains all notification information in the JSON format. All notification properties are encrypted and stored in a list of key-value pairs [11].
- **HTTP**: Like HTTPS but without using encryption.
- **Amazon SQS**: The notification is delivered to a queue of the Amazon SQS (Simple Queue Service). This is another Amazon service, which provides the functionality of sending text messages to the cloud. These messages are cached for a limited period of time while clients can request and receive them.
Every subscription needs to be confirmed by the receiver. For this purpose, Amazon SNS sends a confirmation request, which contains a specific token, to the endpoint. By transmitting this token back to Amazon SNS, the subscriber confirms that it has access to the endpoint and can receive notifications. Otherwise, it would be possible to enter arbitrary endpoints and flood them with notifications that they do not want to receive. Amazon SNS supports an event-driven architecture (EDA) to decouple the notification sender from the receiver (i.e., the subscriber). Thereby it propagates this decoupling into the cloud.
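For illustration, the topic/subscription/publish cycle described above can be exercised with the boto3 Python SDK, as in this minimal sketch; it assumes configured AWS credentials, and the topic name and email address are placeholders.

```python
import boto3

# Assumes AWS credentials and region are configured in the environment.
sns = boto3.client("sns", region_name="us-east-1")

# Create a topic; SNS derives the Amazon Resource Name (ARN) from it.
topic_arn = sns.create_topic(Name="demo-notifications")["TopicArn"]

# Register an email subscriber; SNS first sends a confirmation request
# containing a token that the receiver must send back (see above).
sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="ops@example.org")  # placeholder address

# Publish a notification to all confirmed subscribers of the topic.
sns.publish(TopicArn=topic_arn,
            Subject="CPU overload",
            Message="Instance i-1234 exceeded its CPU threshold.")
```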
Amazon allows for a detailed access control on Amazon SNS and its topics, e.g., by defining who is allowed to access a topic, who is allowed to add subscriptions to this topic and what types of subscriptions are permitted. For this purpose, Amazon introduces syntax for defining policies. These policies contain all information, which is important for security and access-control configuration.
Amazon SNS provides easy-to-use active mechanisms, but these mechanisms support neither CED nor condition-based rule execution. Another big problem with Amazon SNS is the small size of a message (8 kilobytes per notification).
**V. ZIMORY**
Zimory [13] provides IaaS. It is a product, where highly distributed components cooperate with each other using active mechanisms.
Monitoring and management are used to assure the scalability of virtual machine instances. The monitoring and management are done via a web interface through which users can specify rules that trigger one or more of the following actions when an event occurs [8]:
- **Storeback**: During this action, the virtual machine will be restored from a specified backup or set to a specified state.
- **Snapshot**: During this action, a snapshot of the current running state of the virtual machine will be created.
- **Clone**: During this action, the current running virtual machine will be duplicated (i.e., cloned). A clone can then be started immediately.
Zimory uses active mechanisms to distribute the CPU load to more than one virtual machine instance. In such a scenario, the cloud component (not named in the Zimory documentation) that monitors the instances acts like an event-producing service that evaluates in which state an event should be produced. After this, the event is published to a component inside the cloud that is able to receive events regarding the instances, a kind of event-receiving service. This service then processes the events and performs one or more of the actions listed above.
**VI. IBM TIVOLI LIVE MONITORING SERVICE**
IBM Tivoli Live provides SaaS. The crucial part of IBM Tivoli Live is the Monitoring Service [12].
The Monitoring Service is used to monitor network components and manage them in an online web portal. For this, a monitoring server is installed in the cloud. This server can monitor the cloud components (both active and passive) and send the collected data to the cloud, where the data are stored and can be accessed via an online web portal. This portal gives users the possibility to set thresholds for the different monitored parameters of a component. When a threshold is exceeded, an event is generated and an action for the event is performed. Unfortunately, the IBM documentation does not further specify how such an event or action can be used. But in theory it should be possible to send alarms via email to notify the users. Also, it should be possible to automatically perform an action on components that use the Monitoring Service with an active agent. Such an action could be the execution of a script to automatically repair the state of a failed cloud component.
Figure 3 shows the architecture of the Monitoring Service. The following types of monitoring occur in the Monitoring Services:
• **Touchless monitoring:** A component being monitored needs no further software installed, except a simple standard SNMP service. The Monitoring Service uses SNMP to retrieve current information about the component (see the polling sketch after this list). This is passive monitoring only, without any possibility of interacting with or controlling the component. Also, the monitored data can differ across components because SNMP standardizes the messaging protocol, not the monitored data.
• **Distributed monitoring:** This type of monitoring is based on an extra agent, which is deployed to the component being monitored. The monitoring server connects to this agent to retrieve information on the component. With the agent solution, it is also possible to control and manage the component.
• **Performance services:** These services are used to monitor data for long-term performance analysis and bottleneck indication. It should be noted that the performance services are used for manual analysis only.
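As a minimal illustration of touchless monitoring, the sketch below polls a component's standard SNMP agent with the pysnmp library; the host name and community string are placeholders, and the OID shown is the standard sysUpTime object.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Poll the component's standard SNMP agent without installing any extra
# software on it; host and community string are placeholders.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                # SNMPv2c
    UdpTransportTarget(("monitored-host.example.org", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),   # sysUpTime
))

if error_indication:
    print("polling failed:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")  # would be forwarded to the monitoring server
```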
The Monitoring Service uses multi-agent systems for active mechanisms, so it is more aligned with “old-fashioned” IT systems. But the Monitoring Service can also be used in cloud environments because such environments are nothing other than virtualized IT systems.
![Figure 3. Architecture of the IBM Tivoli Live Monitoring Service [12].](image)
**VII. PESA**
PESA [11] is a technology that adds policies to an event-driven service-oriented architecture (ESA). A policy is a set of rules that control behaviors of services.
A large number of services that are deployed into different cloud environments can generate a large number of events. Because of that, it is not practical to set up a centralized management service that collects, stores, correlates and processes events. A better approach would be to build management services responsible for a small number of services, thereby reducing bottlenecks and speeding up the reaction of the whole system. PESA offers this approach. The management services are used for:
- Matching a pattern that consists of different single events, which can indicate some failure.
- Creating high-level business events out of simple events, which can indicate that some reaction of the business layer is needed to assure the execution of the business workflow.
- Adding data to events to give more information about the reason for the occurrence of an event.
- Triggering conditions for policies or rules to react to the occurrence of events without human interaction.
- Invoking services or business workflows when their conditions are satisfied to assure that a business workflow can be executed.
Like the services they manage, the management services should be loosely coupled and highly distributed so that they can be set up in different cloud environments near the managed services. This improves the performance of managing the services, reduces complexity and eliminates drawbacks of a centralized management service (e.g., single points of failure). In such a scenario, the managed services can automatically incorporate service components for monitoring and management from their “local” management services. Furthermore, it becomes possible to compose the “small” management services into a “bigger” management service that provides coherent management for all services used in the business workflow.
Figure 4 shows the architecture of the management service. The components of the architecture can be assigned to one of the following layers:
- **Monitors and sensors, extraction and transformation tools:** Events are generated through monitoring services and sensors. Then every occurring event is transformed to a standard data format by a tool and the resulting events are published to an enterprise service bus (ESB). Both layers correspond to the event generation layer of an EDA.
- **Classification and categorization:** This is the layer where events are classified and assigned to a class of events in order to provide a coherent scope on the events for the layer above.
- **Analysis engines:** This layer correlates, associates and links events together to generate more complex events that have relevance to business. It corresponds to the event processing layer of an EDA.
- **Operational actions, policy actions and conflict resolution actions:** These are the layers where policies trigger actions based on events as their conditions. So the layers are responsible for performing the actions that assure the execution of a business workflow. The layers correspond to the event handling layer of an EDA.
All the layers communicate with each other through the ESB. The layers are built so that higher layers have a broader scope, which enables more complex analysis and management. This does not mean, however, that high- and low-level management services are built from more or less complex components; rather, every management service is able to handle events and perform actions based on policies. Such a system is highly extensible: more management services can be added to manage more services.
The intelligence that PESA adds to active mechanisms is the possibility to build business workflows that are very agile and can be easily adapted to changes, without human interaction during the execution of a business workflow. This becomes possible only by adding the events to signal changes and by adding the policies to react on those changes.
![Figure 4. Architecture of the PESA management service [11].](image)
**VIII. CONCLUSION**
This paper gave an overview of existing technologies and products – viz., OM4SPACE Activity Service, RESERVOIR, Amazon SNS, IBM Tivoli Live Monitoring Service, Zimory and PESA – that can be used for providing active mechanisms in cloud environments. Table I summarizes this overview.
| Technology / Product | Architecture | Cloud layer |
|-------------------------------|--------------|-------------|
| OM4SPACE Activity Service | SOA | SaaS, PaaS, IaaS |
| RESERVOIR | SOA | IaaS |
| Amazon SNS | EDA | PaaS |
| IBM Tivoli Live Monitoring Service | SOA | SaaS |
| Zimory | EDA | IaaS |
| PESA | ESA | PaaS |
The overviewed technologies and products mainly differ in the architectures they support: EDA, SOA or ESA.
An EDA [1] enables event producers to publish their events and event consumers to subscribe to and consume those events. In an EDA, events know nothing about their consumers. Events can also remain unconsumed because none of the consumers is interested in them. There are no direct relationships between the event producers and consumers. So the services built on top of an EDA are loosely coupled. This helps an EDA fit into a scenario where the services deployed into the cloud are managed by management services. Support of an EDA can be found in Amazon SNS and Zimory.
A SOA enables the composition of loosely coupled highly distributed services. These services can be deployed into different cloud environments where the clouds themselves take care of the services. Support of a SOA can be found in OM4SPACE Activity Service, RESERVOIR and IBM Tivoli Live Monitoring Service. The Activity Service in its initial version follows a cloud-native approach, by using an ESB to build a SOA and Web services to provide communication between loosely coupled highly distributed components. RESERVOIR also supports a SOA but it does not actually specify such communication. The Monitoring Service follows a more old-fashioned approach, by using multi-agent systems for active mechanisms. One possible reason for this is the growing structure of the whole IBM Tivoli Live.
An ESA [11] is the result of combining an EDA with a SOA. Such a combination is needed because a SOA typically composes services into business workflows; it does not account for events that occur across or outside of business workflows, or for complex events. Combined with an EDA, a SOA can react to events. For example, a high-level business event can cause the execution of a single service or a set of services that can handle a problem that has occurred in a business workflow. Such a SOA, enriched by events through an EDA, can be used to build agile business workflows that adapt to changes occurring during the execution of the business workflow. In such a scenario, the changes will be signaled by events. An EDA can also take advantage of the combination with a SOA because of the flexibility that a SOA provides through the composition of services on different layers. As a result, it becomes possible to integrate an EDA on every layer, so an EDA can become responsible for publishing, subscribing to and consuming events on both the simple, low-level service layer and the complex, high-level business layer. Because of these advantages, an ESA fits well into a scenario where events should be monitored, enriched and connected with each other on different levels. The connection between events is important because it can be used to connect multiple low-level system events to create a high-level business event. In such a scenario, events can occur everywhere, e.g., they can be created by applications, databases or services that are involved in a business workflow. Support of an ESA can be found in PESA. One possible reason for this is that cloud environments are typically environments for loosely coupled highly distributed services that can be orchestrated into a business workflow.
The overviewed technologies and products also differ in the cloud layers on which they provide their services: SaaS, PaaS or IaaS.
SaaS is a model of software deployment whereby a cloud provider licenses an application (i.e., software) to users for use as a service on demand. OM4SPACE Activity Service and IBM Tivoli Live Monitoring Service provide SaaS. One possible reason for this is the popularity of SaaS. (Currently, SaaS is the most popular type of cloud computing because of its simplicity, flexibility and scalability.)
PaaS is a model of application development and delivery. In particular, PaaS offers a development platform for users. OM4SPACE Activity Service, Amazon SNS and PESA provide PaaS.
Whereas SaaS allows for the usage of applications in cloud environments and PaaS offers the ability to develop and deliver these applications, IaaS provides users with the infrastructure for developing, running and storing the applications. RESERVOIR and Zimory provide IaaS. OM4SPACE Activity Service could also be used as an event processing component within IaaS.
**IX. FUTURE WORK**
There are advantages and disadvantages with all the overviewed technologies and products. An advantage of one is often a disadvantage of another. Therefore, the most promising approach would be to combine them all. This approach could be based on OM4SPACE Activity Service because it could possibly operate or be utilized on all different layers of a cloud environment and performs the complete event roundtrip by: generating events at an Event Source, sending events to the Event Service, performing CED at the Event Service and generating new complex events, sending events to an Event Consumer and the Rule Execution Service, performing rule processing and rule execution, and performing action invocations on action handlers in case of matching rules.
The Activity Service could benefit from using RESERVOIR. RESERVOIR defines a standard way to monitor a cloud component in order to read parameters out of that component using a manifest, which assures that the Service Providers can deploy their services into the cloud. The Activity Service should also be able to monitor cloud components (both active and passive), but it does not concretely define which parameters are monitored. At first, this may not be seen as a real problem, but when rules for the events are to be generated, it may become a big issue because the rules use the attributes of the events, which are set at a cloud component. Also, for the content enrichment of events at the Event Service, a concrete set of attributes should be defined. Thus, the usage of RESERVOIR could solve the problem of defining rules and getting the parameter dependencies for the rules.
Another improvement could be made in the action-performing part of the Activity Service, which is currently implemented as a simple call to an action handler. This could be improved if the action handler and the Rule Execution Service used Amazon SNS as a transport mechanism. For example, the action handler could subscribe to a topic filled by the Rule Execution Service via Amazon SNS. But this problem cannot be solved by Amazon SNS alone because the product is largely non-standardized and can thus cause vendor lock-in. Another problem with the usage of Amazon SNS is the small message size (8 kilobytes only). Therefore, a better solution would be for the Activity Service itself to implement an Amazon SNS-like transport mechanism. Such independence from a particular transport mechanism is currently implemented in the Activity Service to allow for better integration with existing cloud communication services.
**ACKNOWLEDGMENT**
Irina Astrova’s work was supported by the Estonian Centre of Excellence in Computer Science (EXCS) funded mainly by the European Regional Development Fund (ERDF).
**REFERENCES**
[1] J. Dunkel and R. Bruns. Event-Driven Architecture. Springer, 2010.
[2] A. Koschel, M. Schaaf, S. Gatziu Grivas, and I. Astrova. An Active DBMS Style Activity Service for Cloud Environments. 1st Intl. Conf. Cloud Computing 2010, pages 80–85, IARIA, Portugal, November 2010.
[3] C. Chapman, W. Emmerich, F. G. Marquez, S. Clayman, and A. Galis. Software architecture definition for on-demand cloud provisioning. 19th ACM International Symposium on High Performance Distributed Computing, HPDC’10, pages 61–72, New York, NY, USA, 2010. ACM.
[4] S. Eliot. Reservoir homepage. http://www.reservoir-fp7.eu/ Accessed: November 2011.
[5] Citrix Systems Inc. Xen. http://www.xen.org/ Accessed: November 2011.
[6] VMware Inc. Vmware. http://www.vmware.com/ Accessed: November 2011.
[7] OpenNebula. Opennebula. http://www.opennebula.org/ Accessed: November 2011.
[8] F. Galan, A. Sampaio, L. Rodero-Merino, I. Loy, V. Gil, and L. M. Vaquero. Service specification in cloud environments based on extensions to open standards. 4th Intl. ICST Conference on Communication System software and middleware, COMSWARE’09, pages 19:1–19:12, New York, NY, USA, 2009. ACM.
[9] Amazon Simple Notification Service (Amazon SNS). http://aws.amazon.com/de/sns/ Accessed: November 2011.
[10] Json (javascript object notation). http://www.json.org/ Accessed: November 2011.
[11] P. Goyal and R. Mikkilineni. Policy-based event-driven services-oriented architecture for cloud services operation and management. IEEE Intl. Conference on Cloud Computing, pages 135–138, 2009. IEEE.
[12] IBM Tivoli foundations and IBM Tivoli Live Monitoring Services. http://www-01.ibm.com/software/tivoli/products/monitor/ Accessed: November 2011.
[13] Z. GmbH. Zimory Enterprise Cloud Anwendungsbeispiel. http://www.zimory.de/index.php?id=75 Accessed: November 2011.
Information Technology Planning For Collaborative Product Development Through Fuzzy QFD
Jbid Arsenyan
Industrial Engineering Department
Bahcesehir University
Istanbul, 34100, Turkey
firstname.lastname@example.org
Gülçin Büyüközkan
Industrial Engineering Department
Galatasaray University
Istanbul, 34357, Turkey
email@example.com
Abstract—Collaborative Product Development (CPD) is becoming a more complex process to manage due to rapid technological change. As a consequence of the various system features introduced by research groups and commercial packages, CPD practitioners lose track of the available platforms, protocols, applications, system features, and tools supporting CPD processes. This study aims to provide a mapping between the technological requirements of CPD and the system features of these various infrastructures. Fuzzy Quality Function Deployment (QFD) is employed for the mapping between requirements and features. An industrial expert is consulted to evaluate the derived relationships, and the system features are prioritized accordingly.
Keywords: Collaborative Product Development; Fuzzy QFD; Technology requirements.
I. INTRODUCTION
Due to its technology-centric nature, Collaborative Product Development (CPD) is typically based on technological infrastructures, which makes information technologies (IT) essential conveyors of good CPD performance [1]. However, the management of requirements and the implementation of the tools needed to respond to these requirements constitute a complex process as technological diversity grows rapidly. Current tools become hard to track and thus evaluations are performed with incomplete and biased information, given that assessing all systems is not possible.
Previous studies do not propose a comprehensive review of CPD systems, mainly because these systems, including various applications, tools, and plug-ins, are numerous; they can be easily outdated by new research and are only known by a limited community. On the other hand, various systems are proposed in the literature and by commercial ventures in order to facilitate the collaboration, integration, co-design, and co-development processes of CPD teams.
In this highly uncertain environment, with various different requirements and numerous technological solutions, a systematic methodology is essential to plan the technological infrastructure needed to start and maintain the CPD process. Determining requirements and accordingly prioritizing the technological response compose an important phase in IT planning. Some projects may require only communication tools, while others depend on highly sophisticated web-based engineering applications. A comprehensive and detailed planning methodology utilizing Quality Function Deployment (QFD) is introduced to help CPD practitioners in their development and collaboration efforts.
QFD is a well-established methodology for transforming customer needs into engineering characteristics, and therefore its House of Quality (HoQ) diagram appears to be a suitable tool for mapping the needs of CPD onto existing tools and technologies. Additionally, the mapping is performed in a fuzzy environment in order to translate the linguistic evaluations of the expert into quantifiable performance measures. The aim of this study is to introduce a comprehensible methodology for IT planning, which can be employed by CPD practitioners before launching CPD projects.
The study is organized as follows: the next section introduces the technology planning literature, which covers studies in a general context. Then fuzzy QFD is described and the methodological background is established. The fourth section presents a CPD technology overview, which includes commonly used standards and environments, technology requirements, and system features in CPD infrastructure. Then IT planning with fuzzy QFD is presented with the evaluation of an industrial expert. The study concludes with a few remarks.
II. TECHNOLOGY PLANNING BACKGROUND
Use of proper technology is the most preferred factor in maintaining competitive advantage [2]. Systematic planning of technological infrastructure is therefore important in improving CPD performance. Efficiency and effectiveness of CPD are enhanced by appropriate implementation of tools and technologies enabling CPD [3], which can be attained through accurate mapping of requirements into the system features.
A technology planning framework is proposed by Porter et al. [4], which includes technology forecasting as well as environmental analysis and aims to design organizational actions. The value-adding chain concept requires the implementation of technology within all aspects of the business. Martin [5] also starts with technology forecasting and applies scenario analysis to define technology allocations according to short-term and long-term needs. Rip and Camp [6] propose a four-step methodology, which starts with market research, then determines product features and technology options for these features, and finally finishes with future considerations of technology resources.
Pretorius and Wet [7] define a framework based on the hierarchy of the enterprise, business processes and functions. Technological assessment can be mapped on the relationship between technology and processes in this three-dimensional framework. Kumar and Midha [8] utilize the QFD approach to compare a company's requirements in CPD with different functionalities of Product Data Management (PDM) systems; the resulting technical specifications are then compared to a specific PDM system.
Büyüközkan et al. [3] present a comprehensive review of tools, techniques and technologies enabling agile manufacturing in concurrent Product Development (PD). Rodriguez and Al-Ashaab [9] identify CPD-supporting system characteristics and classify the corresponding technological requirements. They also perform a survey in the injection mould industry and propose a knowledge-based CPD system architecture responding to industrial requirements.
Koc and Mutu [2] present a technology planning methodology, from the selection of competitive priorities to the design of activities, by integrating different system design perspectives through AD. Rueda and Kocaoglu [10] state that market and technology performance uncertainty makes technological investment highly risky, and they focus on the diffusion of emerging technologies. They combine bibliometric analysis, the Delphi method, utility curves, and scenarios to define a composite indicator for the diffusion. Shengbin et al. [11] focus on the technology roadmap concept and present a visual guide to map markets, products, and technologies to achieve technology selection. The three-phased design process includes trend discussion, industrial and academic investigation, and expert feedback on technological demand, and it provides a tool for making strategic-level technology selection decisions.
Luh et al. [12] combine the Design Structure Matrix with Fuzzy Sets Theory into FDSM to present a dynamic planning method for PD, increasing PD efficiency and decreasing development time. Ko [13] also employs FDSM to present a methodology enhancing PD management by organizing design activities and measuring dependency strength. Palacio et al. [14] present a tool to facilitate collaboration in distributed Software Development (SD) teams, which aims to increase collaboration awareness by focusing on individuals and their activities.
Previous studies do not address a generic approach, which investigates and classifies the CPD requirements, as well as the tools and techniques provided by researchers and commercial packages. This study aims to introduce a planning framework within the fuzzy HoQ in order to capture these aspects and map their relationships.
III. FUZZY QFD OVERVIEW
HoQ, the planning tool within QFD methodology, can be described as a “conceptual map that provides the means of inter-functional planning and communications” [15]. It translates customer needs into customer attributes (CAs) in order to meet them through engineering characteristics (ECs).
As a first step in constructing a HoQ, CAs are collected from customers (Domain 1). Then engineering teams try to answer the question “how to achieve this attribute”. ECs that affect CAs are listed accordingly (Domain 2). CAs are prioritized in order to have a trade-off basis in the case of conflicting objectives (Domain 3). As depicted in Fig. 1, the right-hand side of the HoQ offers a benchmarking tool, where customer perceptions of other brands, as well as of the focal firm's brand, in response to the CAs are depicted (Domain 3a).
Then the relationships between CAs and ECs are represented by symbols in accordance with the strength of the relationship (strong positive, medium negative, etc.). This step of the methodology serves to identify how an EC can affect a specific CA (Domain 4).
The ECs' effects on each other are represented in the roof matrix of the HoQ (Domain 5). Interdependent characteristics are thus displayed and the total outcome of an engineering change is visualized. ECs are also marked regarding the direction of change in that specific characteristic (Domain 5a). Finally, target values and the degree of technical difficulty are set for the ECs in order to present the amount of work and its complexity (Domain 6).
The majority of QFD applications stop at the planning stage, i.e., the HoQ; nevertheless, many benefits can be achieved through the first matrix alone [16]. However, the conventional HoQ matrix is not sufficient for describing the relationships between CAs and ECs, and in some cases the application is performed in a fuzzy environment. Fuzzy QFD is employed in these cases in order to translate the vagueness of the relationships and the subjectivity of the evaluator into quantifiable data.
Literature proposes many examples of fuzzy QFD applications. Şen and Baraçlı [16] investigate enterprise software selection requirements with fuzzy QFD. Linguistic variables are employed to prioritize non-functional criteria
in order to provide a decision making framework to determine the order of criteria to be satisfied during software selection decisions of a company. In their two concurrent studies [17] and [18], Vinodh and Chinthia investigate the enabling effect of fuzzy QFD to leanness and agility in a manufacturing organization. Fuzzy QFD is employed to prioritize the lean competitive bases, lean attributes, lean enablers in one case and the agile decision domains, agile attributes and agile enablers in the other by employing linguistic terms for both relationship matrix and correlations.
Lee and Lin [19] employ fuzzy QFD in PD. They incorporate fuzzy Delphi, fuzzy Interpretive Structural Modelling and the fuzzy Analytic Network Process into the QFD framework. Linguistic variables are employed both for the relationships between CAs and ECs and for the correlations between CAs to investigate PD priorities in CAs, ECs, part characteristics, key process operations, and production requirements. Liu [20] employs fuzzy QFD to investigate priorities in product design and selection by (1) computing the relative importance of CAs, (2) computing the final importance of CAs and (3) computing the final importance of ECs through linguistic variables. The methodology is also two-phased, with the second phase adopting a multi-criteria decision-making approach. Jia and Bai [21] apply fuzzy QFD in manufacturing strategy development. A fuzzy integrated HoQ helps to capture the highly imprecise and vague nature of strategy decisions.
In this study, QFD is employed in a fuzzy environment, considering that IT planning of CPD projects, in terms of requirements and features, depends on the subjective judgments of CPD managers. We aim to translate the subjective and linguistic judgments of evaluators into quantifiable relationships by integrating fuzzy set theory into the HoQ. In the proposed methodology, CA weightings, CA-EC relationships, and EC correlations are defined in linguistic terms and then translated into triangular fuzzy numbers (TFNs) of the form \((l,m,u)\). After defining the CAs and ECs for the study, the industrial expert is consulted for his judgments. The collected linguistic judgments are fuzzified.
The fuzzy computation process in this study is adapted from Vinodh and Chintha [18]. The relationship matrix and the weights of the CAs are employed to compute the relative importance of the ECs as follows:
\[
RI_j = \sum_{i=1}^{n} W_i \otimes R_{ij}, \quad j = 1, \ldots, m. \tag{1}
\]
Then the correlation matrix is considered. The final score of the \(j^{th}\) EC is computed by the following equation:
\[
\text{score}_j = RI_j \oplus \sum_{i \neq j} T_{ji} \otimes RI_i, \quad j = 1, \ldots, m. \tag{2}
\]
The final score is defuzzified in order to obtain a final crisp score:
\[
S_j = (l + 2m + u)/4. \tag{3}
\]
The ECs are ranked in decreasing order of their crisp scores. A higher EC score implies a higher priority and thus a greater importance to be attributed to that feature.
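To make these computations concrete, the following is a minimal Python sketch of equations (1)-(3), assuming the common endpoint/mode approximation for TFN products. The scale values follow Table I, but the two-CA/two-EC judgments are hypothetical and not taken from the case study.

```python
# Minimal sketch of equations (1)-(3); the TFN product uses the common
# endpoint min/max approximation. The judgments below are hypothetical.
from itertools import product


def tfn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))


def tfn_mul(a, b):
    """Approximate TFN product: support endpoints from the min/max of
    corner products (handles negative correlation TFNs), mode as the
    product of the modes."""
    corners = [x * y for x, y in product((a[0], a[2]), (b[0], b[2]))]
    return (min(corners), a[1] * b[1], max(corners))


def defuzzify(t):
    l, m, u = t                 # equation (3): S = (l + 2m + u) / 4
    return (l + 2 * m + u) / 4


W = [(6, 7, 8), (8, 9, 10)]                 # CA weights: H, VH (Table I)
R = [[(7, 10, 10), (0, 0, 3)],              # CA1-EC relations: strong, weak
     [(3, 5, 7), (7, 10, 10)]]              # CA2-EC relations: moderate, strong
T = {(0, 1): (0, 3, 5), (1, 0): (0, 3, 5)}  # EC correlations: positive

n_ec = len(R[0])
RI = []
for j in range(n_ec):                       # equation (1)
    ri = (0, 0, 0)
    for i, w in enumerate(W):
        ri = tfn_add(ri, tfn_mul(w, R[i][j]))
    RI.append(ri)

scores = []
for j in range(n_ec):                       # equation (2)
    s = RI[j]
    for i in range(n_ec):
        if i != j and (j, i) in T:
            s = tfn_add(s, tfn_mul(T[(j, i)], RI[i]))
    scores.append(defuzzify(s))             # equation (3)

total = sum(scores)
for j in sorted(range(n_ec), key=lambda k: -scores[k]):
    print(f"EC{j+1}: crisp score {scores[j]:.1f}, "
          f"normalized priority {scores[j] / total:.3f}")
```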
IV. CLASSIFICATION OF IT FOR CPD
Technological change, especially in the PD and collaborative technology domains, is increasingly rapid and hard to track. However, the services offered by the various systems do not transform at the same pace as the complexity level increases.
CPD systems are generally built on various infrastructures. Commercial software and academic projects based on these infrastructures are too numerous to cite and easily outdated, and are therefore outside the scope of this research. Nevertheless, some systems and commercial packages, summarized in [9] and [22], can serve as a reference on the services offered by researchers and industry.
A. Requirements overview
CPD literature and industrial experts express similar opinions when it comes to technological requirements in CPD projects, although some differences may be observed. Li and Su [23] state that a CPD environment should comprise scalability, openness, heterogeneity, resource access and inter-operation, legacy code reusability, and artificial intelligence as features. According to Rodriguez and Al-Ashaab [9], common access to design information, collaborative visualization of the component, and collaborative design of the component are the requirements to be supported by collaborative technologies. Palacio et al. [14] classify SD requirements in four groups: scale, uncertainty, interdependence, and communication. These requirements form a starting point for both collaboration and development processes. Requirements of CSD, which can be viewed as a CPD sub-domain, include interaction, knowledge, awareness, coordination, communication, and control [14].
These aspects are categorized into nine groups under the Requirements domain, each requirement followed by its label.
Communication (CA1) emerges as a principal requirement in IT planning to assure awareness [22]. Project Management (CA2) and Knowledge Management (CA3) are two essential requirements, as stated in [1,3,9,24], which clearly suggest that these two requirements should be considered within any type of project, regardless of its collaborative aspect. Another important requirement while planning the technological infrastructure of CPD is the product model (CA4) itself: the technological infrastructure should comprise a system that enables the representation, visualization, and modification of the product model. The Data Integration & Analysis (CA5) requirement can be described as a mechanism to integrate data available on different sites from different collaborating teams and to analyze this data in the most efficient manner [25]. Accordingly, the Interoperability (CA6) requirement emerges as a natural result of collaboration, in order to ensure that diverse systems can work together.
Security (CA7) and privacy issues arise as CPD projects become a part of the business routine. This requirement entails data protection as well as system back-up, as mentioned in [1]. With risk defined by ISO 31000 as the effect of uncertainty on objectives, Risk Management (CA8) is a requirement to control uncertainties that may result in project failure. Lastly, CPD infrastructure requires Technical Support (CA9), given that a collaborative infrastructure consisting of technology products may often necessitate maintenance and repair services.
The next section discusses the features presented by the various tools available in the technology arena. These features will be employed to respond to the aforementioned requirements.
B. System features overview
The nine requirement groups described in the previous section are met by various tools presented in commercial applications and academic research. These tools are gathered into ten groups, labelled as features of CPD systems. Each feature is followed by its label.
Palacio et al. [14] state that a technological infrastructure meeting the specified requirements should include features such as a communication service; mechanisms to share and filter relevant information; mechanisms to spot individual project progress; interaction mechanisms for team members; status updates and task progress; a search tool based on profile, status, and activity; and synchronous and asynchronous communication. PD-oriented studies are also reviewed to support the development process during technology planning. Sky and Buchal [26] categorize tools to support PD in six groups: information gathering, drawing and design, analysis and evaluation, general documentation, planning and scheduling, and synchronous workspace sharing. Büyüközkan et al. [3] classify concurrent PD tools as networking and management tools, modelling and analysis tools, predictive tools, and intelligent tools.
Studies clearly emphasize the importance of communication tools. It is essential to assure coordination with ICT [1], and therefore communication tools are considered primary features in a CPD system. The literature shows that synchronous and asynchronous communication tools are nearly always included in any collaborative system. Synchronous communication tools (EC1) assure real-time communication, while spatially and temporally distributed communication is realized by asynchronous communication tools (EC2), which include e-mail, faxing, discussion boards, etc.
System integration mechanisms (EC3) are also widely studied in the literature. Some propose web-based interfaces to integrate various design models, while others propose the unification of modelling schemes [27]. The project management tool (EC4) is indispensable in a CPD project, serving to control and coordinate the virtual team and their tasks [9]. Product visualization (EC5) is another feature of CPD systems. Collaborative visualization and collaborative design of the product allow teams to view, design, modify, mark up, and measure the 3D virtual geometric model.
Document management tools (EC6) aim to store electronic documents and images, which enables engineering teams to create knowledge out of the information shared throughout the CPD project. Content management tools (EC7) serve to manage the workflow in collaborative environments.
Described as tools to keep track of the history of a dataset [25], data tracking & analysis tools (EC8) enable the collaborating teams to make sense of the data they are handling. Data tracking is important as it provides a detailed history of the data and the origin it was generated from. Archiving tools (EC9) are also an important feature where large volumes of data are shared by distributed teams, as storing, retrieving, and accessing the data are assured by archiving. It is important to be able to make use of the information created during the collaboration process. Decision support tools (EC10) become necessary at this stage, where a system is required to analyze all the data and present an understandable report to assist decision makers in their decision process.
Overall, ten system features are identified in response to the nine requirements of CPD projects.
V. IT PLANNING USING FUZZY QFD
Defining the requirements and the system features provides a better understanding of the current situation of CPD infrastructure. However, a planning methodology is required in order to map the aforementioned requirements onto the features. The HoQ diagram, the most recognized form of QFD, emerges as an appropriate planning tool. The translation of customer requirements into technical specifications becomes the mapping of CPD IT requirements onto CPD system features. Consequently, CAs are mapped onto ECs in order to define how the system features respond to the CPD requirements.
TABLE I. LINGUISTIC SCALES FOR CA IMPORTANCE (TOP), CA-EC RELATIONSHIPS (MIDDLE), AND EC CORRELATIONS (BOTTOM)

| Linguistic variable | Abbreviation | TFN |
|---------------------|--------------|-----------|
| Very low | VL | (0, 1, 2) |
| Low | L | (2, 3, 4) |
| Medium | M | (4, 5, 6) |
| High | H | (6, 7, 8) |
| Very high | VH | (8, 9, 10) |

| Linguistic variable | Abbreviation | TFN |
|---------------------|--------------|-------------|
| Strong | Θ | (7, 10, 10) |
| Moderate | O | (3, 5, 7) |
| Weak | ▲ | (0, 0, 3) |

| Linguistic variable | Abbreviation | TFN |
|---------------------|--------------|--------------|
| Strong positive | ⊕ | (3, 5, 7) |
| Positive | + | (0, 3, 5) |
| Negative | − | (-5, -3, 0) |
| Strong negative | ⊖ | (-7, -5, -3) |
Our expert, an e-Business specialist, Knowledge Management Group leader, and CRM coordinator, is consulted for his industrial insight on the importance of the requirements, the requirement-system feature relationships, and the system feature correlations. He is asked to evaluate Domains 3, 4, and 5 according to the scales presented in Table I.
The HoQ evaluation is displayed in Fig. 2. The expert evaluation, contrary to expectations, covers all pairwise relationships in Domain 4 and all pairwise correlations in Domain 5.

Fig. 2 lists the importance of the requirements. Then the mapping phase shows how these requirements are satisfied through the system features. Lastly, correlations between the system features are defined in order to observe their effect on each other.
These expert judgments are translated into TFNs according to the scales in Table I. Priorities of the system features are computed through equations (1) and (2). Final crisp priorities, displayed in Table II, are computed through equation (3). As a result, a priority vector is obtained for the implementation of the system features.
TABLE II. NORMALIZED PRIORITIES OF SYSTEM FEATURES

| System Feature | Priority (Normalized) |
|---------------------------------|-------------------------|
| EC9 Archiving tools | 0.122 |
| EC7 Content management tools | 0.119 |
| EC5 Product visualization | 0.106 |
| EC6 Document management tools | 0.105 |
| EC8 Data tracking & analysis | 0.098 |
| EC4 Project management tool | 0.098 |
| EC10 Decision support tools | 0.094 |
| EC3 System integration mechanisms | 0.090 |
| EC2 Asynchronous communication tools | 0.088 |
| EC1 Synchronous communication tools | 0.080 |
The final ranking of normalized priorities clearly suggests the importance of archiving tools. This outcome can be interpreted in terms of the importance of co-learning in CPD projects [1]: archiving tools assure that the information created during collaboration efforts is shared and preserved. Content management tools hold the second level of priority. This feature is also strongly connected with co-learning, which is a result of CPD. Product visualization tools rank as the third most important system feature in a CPD project. This tool enables various engineering teams from various sites to conduct the development process and therefore emerges as another high-priority feature.
It is interesting to observe that the communication tools (synchronous and asynchronous) are the two system features with the lowest priorities. However, when combined, they would emerge as the highest-priority system feature. This outcome can be linked to the fact that communication tools do not require high technology or high specifications: even the most basic communication tools can achieve the communication required in CPD projects.
The outcome can be interpreted as an investment route for IT implementation at the beginning of a CPD project. This HoQ outcome aims to provide an understanding of implementation priorities from our expert’s perspective.
**CONCLUDING REMARKS**
An essential part of CPD performance is IT, given that both the PD itself and the collaboration of various teams on different sites require a comprehensive technological infrastructure to support the communication as well as the integration of the firms. However, the rapid evolution of IT complicates the forecasting and planning of the technological infrastructure.
The contribution of this study is two-fold. First, this paper proposes a set of technological requirements for CPD and a generic set of system features that includes tools and applications to respond to these requirements. Second, a HoQ framework is employed to map the requirements onto the system features. The importance of the requirements, the relationships between requirements and features, and the correlations between system features included in the HoQ are evaluated by an industrial expert, and the fuzzy QFD methodology is employed to interpret these evaluations.
Results show that the system features associated with collaborative learning have the highest priorities when planning the technological infrastructure. However, it is apparent that all system features converge to approximately the same importance level. This can be interpreted as the need to cover all aspects of the technological infrastructure within a CPD process. The outcome provides an implementation route for system features when considering IT infrastructure for CPD projects.
Further research includes extending the current work through evaluations by different industrial experts, in order to observe differences in the priority outcomes according to the industrial profile of the assessor. We also anticipate further developing the HoQ application in order to present a comprehensive planning methodology considering additional inputs.
ACKNOWLEDGMENT
The authors wish to thank TUBITAK for the financial support of this research, realized in the scope of TUBITAK Project No. 109M147. The authors also wish to thank Lemi Tuncer for his industrial insight during the development and evaluation processes.
REFERENCES
[1] J. Arsenyan and G. Büyüközkan, "Modelling Collaborative Product Development Using Axiomatic Design," in CD proceedings of the 15th International Conference on Concurrent Enterprising (ICE 2009), Leiden, The Netherlands, 22-24 June 2009.
[2] T. Koc and Y. Mutu, "A Technology Planning Methodology Based on Axiomatic Design Approach," in PICMET 2006 Proceedings, Istanbul, Turkey, 2006, pp. 1450-1456.
[3] G. Büyüközkan, T. Dereli, and A. Baykasoglu, "A survey on the methods and tools of concurrent new product development and agile manufacturing," Journal of Intelligent Manufacturing, vol. 15, pp. 731-751, 2004.
[4] A. L. Porter, A. T. Roper, T. W. Mason, F. A. Rossini, and J. Banks, Forecasting and Management of Technology. New York: Wiley, 1991.
[5] M.J Martin, Managing innovation and entrepreneurship in technology-based firms. Canada: John Wiley & Sons, Inc., 1994.
[6] A. Rip and R. Kemp, "Technological Change," in Human Choice and Climate Change. Columbus, OH: Batele Press, 1998, pp. 327-399.
[7] M. W. Pretorius and G. de Wet, "A model for the assessment of new technology for the manufacturing enterprise," Technovation, vol. 20, no. 1, pp. 3-10, 2000.
[8] R. Kumar and P.S. Midha, "A QFD based methodology for evaluating a company's PDM requirements for collaborative product development," Industrial Management & Data Systems, vol. 101, no. 3, pp. 126-132, 2001.
[9] K. Rodriguez and A. Al-Ashaab, "Knowledge web-based system architecture for collaborative product development," Computers in Industry, vol. 56, pp. 125-140, 2005.
[10] G. Rueda and D.F. Kocaoglu, "Diffusion of emerging technologies: An innovative mixing approach," in PICMET 2008 Proceedings, Cape Town, South Africa, 2008, pp. 672-697.
[11] H. Shengbin, Y. Bo, and W. Weiwei, "Research on application of technology roadmap in technology selection decision," in Control and Decision Conference, 2008. CCDC 2008, Yantai, Shandong, 2008, pp. 2271 - 2275.
[12] D.-B. Luh, Y.-T. Ko, and C.-H. Ma, "A Dynamic Planning Approach for New Product Development," Concurrent Engineering, vol. 17, no. 1, pp. 43-59, 2009.
[13] Y.-T. Ko, "A dynamic planning method for new product development management," Journal of the Chinese Institute of Industrial Engineers, vol. 27, no. 2, pp. 103-120, 2010.
[14] R.R. Palacio, A. Vizcaino, A.L. Moran, and V.M. Gonzalez, "Tool to facilitate appropriate interaction in global software development," IET Software, vol. 5, no. 2, pp. 157-171, 2011.
[15] J.R. Hauser and D. Clausing, "The House of Quality," Harvard Business Review, vol. 66, no. 3, p. 63–73, 1988.
[16] C.G. Şen and H. Baraçlı, "Fuzzy quality function deployment based methodology for acquiring enterprise software selection requirements," Expert Systems with Applications, vol. 37, p. 3415–3426, 2010.
[17] S. Vinodh and S.K. Chintha, "Application of fuzzy QFD for enabling leanness in a manufacturing organisation," International Journal of Production Research, vol. 49, no. 6, pp. 1627-1644, 2011.
[18] S. Vinodh and S.K. Chintha, "Application of fuzzy QFD for enabling agility in a manufacturing organization: A case study," The TQM Journal, vol. 23, no. 3, pp. 343-357, 2011.
[19] A. Lee and C.-Y. Lin, "An integrated fuzzy QFD framework for new product development," Flexible Services and Manufacturing Journal, vol. 23, pp. 26-47, 2011.
[20] H.-T. Liu, "Product design and selection using fuzzy QFD and fuzzy MCDM approaches," Applied Mathematical Modelling, vol. 35, p. 482–496, 2011.
[21] G.Z. Jia and M. Bai, "An approach for manufacturing strategy development based on fuzzy-QFD," Computers & Industrial Engineering, vol. 60, p. 445–454, 2011.
[22] W. D. Li and Z. M. Qiu, "State-of-the-art technologies and methodologies for collaborative product development systems," International Journal of Production Research, vol. 44, no. 13, pp. 2525-2559, 2006.
[23] J. Li and D. Su, "Support modules and system structure of web-enabled collaborative environment for design and manufacture," International Journal of Production Research, vol. 46, no. 9, pp. 2397-2412, 2008.
[24] W. Shen, Q. Hao, and W. Li, "Computer supported collaborative design: Retrospective and perspective," Computers in Industry, vol. 59, p. 855–862, 2008.
[25] E.S. Lee, D.W. McDonald, N. Anderson, and P. Tarczy-Hornoch, "Incorporating collaboratory concepts into informatics in support of translational interdisciplinary biomedical research," International Journal of Medical Informatics, vol. 78, no. 1, pp. 10-21, 2009.
[26] R.W.E. Sky and R.O. Buchal, "Modeling and Implementing Concurrent Engineering in a Virtual Collaborative Environment," Concurrent Engineering, vol. 7, no. 4, pp. 279-289, 1999.
[27] G. Büyüközkan and J. Arsenyan, "Collaborative Product Development: A Literature Overview," Production Planning & Control, in press, 2010.
Indoor IEEE 802.11g Radio Coverage Study
Sandra Sendra, Laura Ferrando, Jaime Lloret, Alejandro Cánovas
Instituto de Investigación para la Gestión Integrada de zonas Costeras - Universidad Politécnica de Valencia, Spain
firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com
Abstract—Even though wireless coverage inside buildings is widely studied, many deployments are not optimal. An accurate design not only allows more radio coverage inside the building, but could also allow cost savings if we are able to reduce the number of access points used to implement the solution. In this paper, we will show a research study about the optimum location of access points inside a building of the Polytechnic University of Valencia (UPV) in order to provide better wireless Internet access to the students. The paper provides a comparative study of the different Service Set Identifiers (SSIDs) currently available at the university. We will also compare them analytically. Finally, we will obtain the mathematical expressions that allow us to model their behavior, and we will see how the signals propagate following a very peculiar pattern.
Keywords—radio coverage; indoor study; WLAN; IEEE 802.11g.
I. INTRODUCTION
Today, wireless networks are widely used in companies and universities as a support to the wired network. These allow the user easy access to all services provided by the company’s network and usually allow Internet access.
When attempting to deploy indoor wireless networks, many problems arise, such as losses due to walls, refraction from objects arranged randomly on the site, and losses due to the use of different types of construction materials (brick, metal, glass, etc.) [1, 2]. In addition to these issues, building interiors are almost never uniform, making it very difficult to foresee in advance exactly what the radio coverage of the wireless network will be [3].
It is necessary to have a correct and optimal design in order to offer multiple additional services such as indoor positioning, object location, object tracking, etc. [4]. Moreover, wireless coverage systems are being studied in other research fields such as Wireless Sensor Networks (WSNs) [5].
In this paper, we analyze the behavior of the wireless signals from access points (APs) located in the Centre of Resources for Research and Learning (CRAI) of the Polytechnic High School of Gandia, of the Polytechnic University of Valencia (UPV). The obtained measurements will allow us to know the received signal strength evolution as a function of the distance to the APs.
The rest of this paper is structured as follows. Section 2 discusses some related work on radio coverage. Section 3 presents the scenario and the tools used to perform our measurements. Section 4 shows the results of the measurements as coverage maps. Section 5 makes a comparative study of the three analyzed signals on each floor. The analytical study and the equations which express this behavior are presented in Section 6. Finally, Section 7 draws conclusions and outlines future work.
II. RELATED WORKS
There have been many studies of wireless coverage, both empirical [6] and analytical [7]. Specifically, in [7], the authors performed an analytical study of AP location and channel assignment. Treating these two issues separately can lead to suboptimal designs, so the authors propose an integrated model that addresses both aspects simultaneously in order to find a balance that optimizes both objectives.
M. Kamenetsky et al. [8] examine methods for obtaining close-to-optimal placement of access points and evaluate their performance in a typical office or campus environment. System performance is evaluated by an objective function which aims to maximize the coverage area and signal quality. The optimization algorithms operate over a discrete search space, which significantly reduces the complexity inherent to the problem. Numerical results show that random search algorithms can lead to good solutions. However, the convergence speed depends largely on the simulation parameters, so a good parameter selection is important.
Kaemarungsi and Krishnamurthy [4] studied the features of IEEE 802.11-based WLANs and analyzed the data in order to understand the underlying features of location fingerprints. They pointed out that the user's presence should be taken into account when collecting the location fingerprint for user-related location-based services.
J. Lloret et al. [10, 11] presented an empirical radio coverage model for indoor wireless LAN design. This model was tested on a large number of buildings over an extensive area, with over 400 wireless APs, and obtained good results. The objective of the model is to facilitate the design of a wireless local area network (WLAN) using simple calculations, because the use of statistical methods takes too much time and is difficult to implement in most situations. The proposed model is based on a derivation of the free-space propagation field equation, and takes into account the structure of the building and its materials.
Sendra et al. [12] presented a comparison of the IEEE 802.11a/b/g/n variants in indoor environments in order to know which is the best technology. This comparison is made in terms of received signal strength indicator (RSSI), coverage area and measurements of interferences between channels.
III. SCENARIO DESCRIPTION AND USED TOOLS
The CRAI was built in 2007. It belongs to the Polytechnic High School of Gandia. It has 3 floors, where different services for the students are offered. It contains the library, some computer labs, and open access classrooms. Figure 1 shows the location of this space; it is the H building.
We now describe the scenario where the measurements of the wireless networks were taken, as well as the hardware and software used to perform our research.
A. The building
The ground floor (see Fig. 2) contains the information desk, staff offices, the library and a large study room with a consultation area and several classrooms for group study. There is a multipurpose room where events and exhibitions are sometimes held.
On the first floor (see Fig. 3) we can find several computer labs, classrooms to perform Final Degree Projects, group study rooms and individual study rooms.
The second floor (see Fig. 4) has the magazine and journal library, the video library, some computer labs and some professor offices.
B. Description of UPV Wireless Network
The Polytechnic High School of Gandia is a campus of the UPV and shares the following 4 networks with the main campus: EDUROAM, UPVNET2G, UPVNET and UPV-INFO. Each one of these allows university users to access the Internet and/or the university resources.
- **UPVNET**: Wireless network with direct connection to all the resources of the UPV. It requires a wireless card supporting Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2).
- **UPVNET2G**: Direct network connection to all resources of the UPV and the Internet. It requires a wireless card with WPA/WPA2.
- **EDUROAM**: This wireless network is widely deployed in universities and research centers in Europe. It provides internet access to all of their members. The users need the user name and password of their home institution. It requires a wireless card with WPA/WPA2. EDUROAM only provides Internet access.
- **UPV-INFO**: This wireless network is designed to provide information about how the wireless network should be configured. It uses private IP addressing and it does not allow Internet access. A second connection is needed to access Internet and UPV resources. This second connection is a Virtual Private Networking (VPN). It should only be used by very old computers that do not support WPA encryption.
In this paper, we will analyze three of these networks: UPVNET, UPVNET2G, and EDUROAM.
C. Used software and hardware
In order to carry out this work, several measurements have been done along the three floors of the CRAI. We have used different network devices to perform the measurements:
- **Linksys WUSB600N** [13]: a USB wireless device that was used to gather the measurements. It is able to capture signals from the IEEE 802.11a/b/g/n variants. Its transmission power is 16 dBm for all variants and the receiver sensitivity is about -91 dBm on both internal antennas. It consumes less than 480 mA when transmitting and 300 mA in reception mode.
- **Laptop:** used to take the coverage measurements. It has a dual-core processor with 2 GHz per core and 2 GB of RAM, and runs Windows Vista.
- **Cisco Aironet 1130AG (AIR-AP1131AG-E-K9) [14]:** this AP is used on all floors of the building. Its data rate can reach up to 54 Mbps. It can work at 2.4 GHz or 5 GHz, with a maximum indoor range between 100 m and 122 m (as a function of the IEEE 802.11a or IEEE 802.11g variant). The maximum range for outdoor environments is between 198 m and 274 m. It can be powered by PoE (Power over Ethernet).
In order to capture the received signal at each point of the building, we used the following program:
- **InSSIDer [15]:** a free software tool that detects and monitors wireless networks and their signal strength in a graphical way. This program lists all detected wireless networks and provides their details, such as Service Set Identifier (SSID), Media Access Control (MAC) address, channel, RSSI, network type, security, and speed, and also allows monitoring of the signal quality.
### IV. COVERAGE RESULTS
We have measured the walking area, where students and university staff have access. Bathrooms, exterior stairways, storage rooms, etc., are excluded. In order to perform this work, a grid of 4 meters x 4 meters has been drawn on each floor. This allows us to make measurements of the different networks in the same places. The laptop was located at a height of 100 cm above the ground.
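As an illustration of how such a grid of readings can be processed, the following Python sketch (ours, not the authors' tooling; the RSSI values are synthetic) renders a measurement grid as a coverage map using the -50/-70/-90 dBm thresholds discussed below.

```python
# Sketch only: synthetic RSSI readings, one per 4 m x 4 m grid cell,
# rendered as a filled-contour coverage map.
import matplotlib.pyplot as plt
import numpy as np

rssi = np.array([            # dBm; rows run along the building depth
    [-45, -50, -58, -66, -72],
    [-48, -52, -60, -70, -78],
    [-55, -60, -68, -76, -85],
])
x = np.arange(rssi.shape[1]) * 4   # meters along the corridor
y = np.arange(rssi.shape[0]) * 4   # meters across the floor

plt.contourf(x, y, rssi, levels=[-90, -70, -50, -30], cmap="RdYlGn")
plt.colorbar(label="RSSI (dBm)")
plt.xlabel("x (m)")
plt.ylabel("y (m)")
plt.title("Coverage map from grid measurements (synthetic data)")
plt.show()
```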
#### A. Ground floor
This subsection shows the signal coverage measured on the ground floor. There are 5 APs that cover the entire floor. There are four places with the highest coverage level (values higher than -50 dBm). We highlight 2 rooms: Room A, the multipurpose room, and Room B, the computer room (see Figs. 5, 6, and 7). The AP located outside the wall of the computer room provides coverage levels below -70 dBm inside the classroom in all three cases.
Fig. 5 shows the coverage area and levels of the UPVNET wireless network on the ground floor. Room A presents a signal strength of -90 dBm due to the attenuation generated when the signal crosses several walls.
Fig. 6 shows the coverage area of the UPVNET2G wireless network on the ground floor. Three places with signal strengths higher than -50 dBm can be seen. These places correspond to the locations of the APs. The multipurpose room has very low coverage on the left side because the signal is greatly attenuated as it crosses several walls.
Fig. 7 shows the signal strength of the EDUROAM wireless network on the ground floor. Again, there are three places with signal strength in excess of -50 dBm, which correspond to the locations of the APs. In this case, more than half of Room B has signal strength levels below -70 dBm.
#### B. First floor
This subsection shows the signal strength measured on the first floor. In this case, there are 4 APs that cover the entire floor. There are 4 places with the highest signal strength (higher than -50 dBm).
Fig. 8 shows the signal strength of the UPVNET wireless network on the first floor. The rooms on the left side have low radio coverage because the AP is not located in the correct place. The offices on the right side also have very poor signal strength because they are very close to the stairs and suffer rather significant signal attenuation.
Fig. 9 shows the UPVNET2G wireless network signal strength on the first floor. We can see that the classroom on the left-hand side is not well covered because of the AP position; it is located on the right-hand side of the wall. The offices at the bottom right also have very poor coverage, because they are very close to the stairs, which generate significant signal attenuation.
Fig. 10 shows the EDUROAM signal strength on the first floor. In this case, we can see the same effect as in the other cases but, moreover, there are tables in the study area (center of the picture) with a low signal strength (lower than -90 dBm).
#### C. Second floor
This subsection shows the signal strength measured on the second floor. The floor is covered by 4 APs. They offer a good coverage across most of the surface.
Fig. 11 shows the signal strength of the UPVNET wireless network on the second floor. On this floor, there is good radio coverage across the whole surface.
Fig. 12 shows the signal strength of UPVNET2G on the second floor. This signal is correctly broadcasted through the floor and its signal strength levels are sufficient to cover the working places.
Fig. 13 shows the signal strength of EDUROAM wireless network on the second floor. The signal on this floor is very good.
After having analyzed the radio coverage images, it is easy to see that the behavior of the signal strengths on each floor is quite similar, with small variations. The received signal strength is very low in bathrooms and toilets. This is because of the water pipes and copper tubing in the walls, which attenuate the signal. We have also found low signal strength levels in the stairwells. The stairs usually have a metal framework and a foundation which prevent the spreading of the signal.
### V. COMPARATIVE STUDY
In this section, we compare the signals on the same floor. Fig. 14 shows the three signals on the ground floor. UPVNET2G provides better signal strength levels than UPVNET and EDUROAM. The signal strengths on the first floor are shown in Fig. 15. UPVNET2G is the network that presents the highest signal strengths. UPVNET and EDUROAM have similar behavior, although there are some points where the EDUROAM signal is better.
Fig. 16 shows the behavior of the signal strengths on the second floor. UPVNET2G and EDUROAM have identical behavior from 3 meters to around 10 meters, but from 0 to 3 meters and from 10 to 12 meters, the EDUROAM signal strength is better. The lowest signal strength is presented by UPVNET at all times. Considering all the graphs, it is easy to conclude that the wireless network providing the best signal strength level is UPVNET2G.
VI. ANALYTICAL STUDY
After analyzing the above figures, we can estimate the behavior of the wireless signals in indoor environments. Therefore, this section shows how the signal strength varies depending on the distance to the AP.
The analytical study is performed for three networks (UPVNET, UPVNET2G and EDUROAM). In order to draw each one of these graphs, we have estimated the average value of the three signals provided by each wireless network.
Fig. 17 shows the average value of the signal strength depending on the distance to the AP on the ground floor. Expression 1 shows the equation for the trend line (black line in Fig. 17) of our measurements. As we can see, it is a polynomial expression of fifth degree, with a correlation coefficient of $R^2=1$. However, we can appreciate a slight difference between the trend line and the measurements at positions close to 3-4 meters, and further than 17 meters from the APs.
$$Y = 0.0001x^5 + 0.0066x^4 - 0.1078x^3 - 0.6889x^2 - 2.3012x - 54.75 \quad (1)$$
where $Y$ represents the average value of the received signal strength in dBm and $X$ is the distance in meters to the AP.
Fig. 18 shows the average signal strength provided by the APs located on the first floor, as a function of the distance to the APs. At positions further than 8 meters from the APs, the two curves differ slightly, while the rest of the graph is practically identical. Equation 2 shows the expression for the trend line (black line in Fig. 18) of our measurements. The behavior of the wireless signals as a function of distance is described by a cubic polynomial with a correlation coefficient of $R^2=1$.
$$Y = -0.0117x^3 + 0.0665x^2 - 3.9909x - 46.833 \quad (2)$$
where $Y$ is the signal level in dBm and $X$ is the distance in meters to the AP.
Fig. 19 provides the behavior of the signal level on the second floor. Equation 3 shows the trend line (black line in Fig. 19) of our measurements. In this case, equation 3 is a 3rd degree polynomial with a correlation coefficient of $R^2=1$. Both graphs show a nearly perfect match, as the correlation coefficient indicates.
$$Y = 0.0021x^3 + 0.0292x^2 - 2.2229x - 45 \quad (3)$$
where $Y$ is the average signal value in dBm and $X$ is the distance in meters to the AP.
VII. CONCLUSION
When the deployment of a wireless network inside a building is needed to offer complete coverage for all users, we should pay particular attention to the correct placement of the APs. We have analyzed the behavior of the signal strengths of the APs located in the CRAI. The measurements obtained have enabled us to represent the signal evolution depending on the distance to the AP.
From these measurements, the coverage maps of each floor have been drawn. The processed pictures have allowed us to determine the places where the signal is not good (less than -70 dBm), so that APs can be relocated and more added, if needed. With all of this, we have performed an analytical and comparative study of the three networks. After this study, we can see that EDUROAM and UPVNET have very similar coverage, while UPVNET2G is a little better. This phenomenon is strange because the same APs provide the coverage for the 3 wireless networks. Moreover, we have detected that the worst designed floor was the ground floor (in terms of AP distribution). We propose the relocation of some of the existing APs and the addition of new APs to cover the shadow areas where the signal strength is below the levels appropriate for Internet access.
We have mathematically characterized the behavior of the signal strength on each floor. In all cases, the behavior can be expressed by a polynomial of degree equal to or greater than 3, depending on the floor. The APs of the CRAI building give acceptable radio coverage levels up to 16 meters from the AP's position.
On the other hand, we have estimated the average value for all floors, depending on the distance to the APs, and we see that it can be modeled as a fifth degree polynomial with a correlation coefficient of 1, as shown in equation 4. The signal strength Y is given in dBm and X is the distance to the AP, in meters.
\[ Y = 4 \times 10^{-5}x^5 - 0.0024x^4 + 0.0455x^3 - 0.2065x^2 - 1.966x - 50.792 \quad (4) \]
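The fitted trend lines are straightforward to evaluate; the following Python sketch (ours, not part of the original study) reproduces equations (1)-(4) and prints the estimated mean RSSI at a few sample distances.

```python
# Trend-line polynomials (1)-(4); coefficients copied from the equations
# above, ordered from the highest-degree term down to the constant.
POLYS = {
    "ground floor, eq. (1)": [0.0001, 0.0066, -0.1078, -0.6889, -2.3012, -54.75],
    "first floor, eq. (2)": [-0.0117, 0.0665, -3.9909, -46.833],
    "second floor, eq. (3)": [0.0021, 0.0292, -2.2229, -45.0],
    "all floors, eq. (4)": [4e-5, -0.0024, 0.0455, -0.2065, -1.966, -50.792],
}


def rssi_dbm(coeffs, distance_m):
    """Horner evaluation of a polynomial in the distance to the AP."""
    y = 0.0
    for c in coeffs:
        y = y * distance_m + c
    return y


for name, coeffs in POLYS.items():
    samples = {d: round(rssi_dbm(coeffs, d), 1) for d in (1, 4, 8, 16)}
    print(f"{name}: {samples}")
```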
Finally, we have observed a trend in the signal behavior whereby the strength decreases in a staggered manner (as can be clearly seen in Figs. 14 and 17). On the other floors, this behavior is less visible. However, according to new tests that we are carrying out in other buildings, it seems that this pattern is repeated.
We are now working on a design that places more than one AP in the same location in order to provide higher bandwidth for the students. Moreover, we are proposing the use of APs in standby mode to provide fault tolerance. Finally, the APs should be updated to the IEEE 802.11n standard in order to achieve higher speeds and greater distances.
REFERENCES
[1] J. Lloret, J. J. López, and G. Ramos, “Wireless LAN Deployment in Large Extension Areas: The Case of a University Campus”, In proceedings of Communication Systems and Networks 2003, Benalmádena, Málaga (España), September 8-10, 2003.
[2] N. Pérez, C. Pabón, J. R. Uzcátegui, and E. Malaver, “Nuevo modelo de propagación para redes WLAN operando en 2.4 Ghz, en ambientes interiores”. TELEMATIQUE 2010; Vol.9, Issue: 3, pp.1-22.
[3] B. S. Dinesh, “Indoor Propagation Modeling at 2.4 GHz for IEEE 802.11 Networks”. M.S. thesis, University of North Texas, December 2005.
[4] K. Kaemarungsi and P. Krishnamurthy, “Properties of Indoor Received Signal Strength for WLAN Location Fingerprinting”. In proceedings of The First Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services 2004 (MOBIQUITOUS 2004) . Boston, Massachusetts, USA , August 22-26, 2004.
[5] R. Mulligan and H. M. Ammari, “Coverage in Wireless Sensor Networks: A Survey”, Journal of Network Protocols and Algorithms, Vol. 2, No. 2, 2010.
[6] A. R. Sandeep, Y. Shreyas, S. Shivam, A. Rejat, and G. Sudashivappa, “Wireless Network Visualization and Indoor Empirical propagation Model for a Campus Wi-Fi Network ”, Journal of World Academy of Science, Engineering and Technology, Vol.42, Pp. 730-734, 2008.
[7] A. Eisenblätter, H.-F. Geerdes, and I. Siomina, “Integrated Access Point Placement and Channel Assignment for Wireless LANs in an Indoor Office Environment”. In proceedings of the IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM 2007), Helsinki, Finland, June 18-21, 2007.
[8] M. Kamenetsky and M. Unbehaun, “Coverage Planning for Outdoor Wireless LAN Systems”, International Zurich Seminar on Broadband Communications 2002, Zurich (Switzerland), February 19-21, 2002.
[9] IEEE Std 802.11 (2007) IEEE Standard for Information technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Specific requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. Pp.1-1184. New York, USA.
[10] J. Lloret, J. J. López, C. Turró, and S. Flores, “A Fast Design Model for Indoor Radio Coverage in the 2.4 GHz Wireless LAN”, in proceedings of the 1st Int. Symposium on Wireless Communication Systems 2004 (ISWCS’04), Port Louis (Mauritius Island), September 20-22, 2004.
[11] J. Lloret and J. J. López, “Despliegue de Redes WLAN de Gran Extensión, el Caso de la Universidad Politécnica de Valencia”, XVIII Simposium Nacional de la Unión Científica Internacional de Radio, A Coruña (Spain), September 10-12, 2003.
[12] S. Sendra, P. Fernandez, C. Turro, and J. Lloret, “IEEE 802.11a/b/g/n Indoor Coverage and Performance Comparison”, In proceedings of the 6th International Conference on Wireless and Mobile Communications (ICWMC 2010), September 20-25, 2010, Valencia, Spain.
[13] Data sheet WUSB600N, available in: http://www.linksysbycisco.com/EU/es/products/WUSB600N <retrieved: Nov., 2011>
[14] Specifications of cisco Aironet 1130 AG, Access Point. Available in: http://www.cisco.com/en/US/prod/collateral/wireless/pS5678/pS6087/product_data_sheet0900aecd801b9058.html <retrieved: Nov., 2011>
[15] Web page of inSSIDer available in: http://www.metageek.net/products/inssider <retrieved: Nov., 2011>
Security Issues of WiMAX Networks with High Altitude Platforms
Ilija Basicevic, Miroslav Popovic
Faculty of Technical Sciences
University of Novi Sad
Novi Sad, Serbia
firstname.lastname@example.org, email@example.com
Abstract—In this paper, we discuss the possibility of securing High Altitude Platform (HAP) networks with intrusion detection systems (IDS). We assume an 802.16 network in point-to-multipoint mode. An analysis of possible threats and attack sources is given. Based on that analysis and the specific properties of HAP networks, an IDS concept is proposed. The main idea of the concept is that a network-based IDS system is collocated with the base station (BS) software. The BS is on board the HAP. Extensions to the concept, in order to provide a prevention feature, are outlined. In that case, the correlation module is a publish/subscribe server for the dissemination of events that result from alert correlation. Its subscribers are policy enforcement points in the HAP network.
Keywords-high altitude platforms; IEEE 802.16; intrusion detection system; network security
I. INTRODUCTION
In recent years, there has been strong interest in the research of High Altitude Platform (HAP) networks. While they were first envisioned as a means for rapid provisioning of connectivity in case of disasters (because of the short time needed to launch a HAP vehicle and establish connectivity), other scenarios have soon been proposed as well. For example, in rural areas with scarce or nonexistent ground infrastructure, HAPs can be used to provide broadband connectivity, see Fig. 1. Another possible application is in mobile sites (e.g., trains). Applications in military communications are also considered. Researchers envision high-rate communications (up to 120 Mb/s) delivered directly to a user in line of sight of a HAP within a coverage area up to 60 km wide [6]. There are three expected scenarios regarding the position of the HAP in the end-to-end path [6]:
- Isolated from any core networks, providing connectivity for private networks
- Between core networks as point-to-point trunk connections
- In the access network, providing users with access to core networks
The significance of first-responder communications (e.g., the Enhanced 911 service in the US) during catastrophic events is paramount. Such services can be provided by dispatching a HAP vehicle with telecommunications equipment to the affected area.
HAP is defined as a solar-powered unmanned airship or airplane, capable of long endurance on-station (several months or more) [7]. The HAP payload can be a complete base station. Besides up- and down-links to the user terminals, and backhaul links, links to satellites can be established as well. In some scenarios, where networks of HAPs are applied, there are also inter-HAP links.
The coverage region is determined by line-of-sight propagation and the minimum elevation angle at the ground terminal. The advantages of HAP communications are [7]:
1. Large area coverage (compared with terrestrial systems)
2. Flexibility to respond to traffic demands - flexible and responsive frequency reuse patterns and cell sizes, unconstrained by the physical location of base-stations.
3. Low cost - cheaper to launch than a geostationary satellite or a constellation of Low Earth Orbit (LEO) satellites, cheaper to deploy than a terrestrial network.
4. Incremental deployment - service may be provided initially with a single HAP and expanded gradually - in contrast to LEO satellites.
5. Rapid deployment - it is possible to design, implement and a deploy HAP service relatively quickly, especially when compared to satellites.
6. Platform and payload upgrading - can be relatively easily and safely brought down for payload upgrading.
7. Environmentally friendly
The backhaul link is realized using a cellular scheme too, because a single link cannot provide the full backhaul capacity. Thus, there will be a number of distributed backhaul ground stations, though this number can be smaller than the number of user cells served, because the higher-order modulation schemes that would be used in backhaul links provide greater capacity [7].
HAP-based services have been allocated frequencies by the ITU at 47/48 GHz, and also at 28 GHz in ITU Region 3 (Asia).
Most of the scenarios predict the use of HAPs for 802.16 networks, although the Universal Mobile Telecommunications System (UMTS) is present in application scenarios as well, albeit to a much smaller extent. We assume that the 802.16 network operates in point-to-multipoint (PMP) mode. With regard to the physical characteristics of the network, the HAP is usually positioned at an altitude of approximately 17-22 kilometers. It covers up to 256 cells.
Use of HAP platforms for different applications has been studied in the scope of several projects (HAPCOS – EU COST action 297, HELINET and CAPANINA EU Framework Programme projects), but to the best of our knowledge this is the first analysis of the possibilities for protection of HAP WiMAX networks with IDS systems.
Section 2 briefly presents security mechanisms that are used in 802.16 networks. Section 3 describes architecture of a network based intrusion detection system for 802.16 networks. It includes analysis of possible threats and possible improvements in order to realize prevention feature (besides detection). Section 4 contains concluding remarks.
II. SECURITY MECHANISMS IN 802.16 NETWORK TECHNOLOGY
Compared to IEEE 802.11, a serious effort has been undertaken in designing the security mechanisms in IEEE 802.16. The following description is based on [1].
The IEEE 802.16 protocol stack contains a Media Access Control (MAC) layer, which is divided into three sublayers (convergence sublayer, common part sublayer and privacy sublayer). The service-specific convergence sublayer has two types: one that interfaces ATM as the upper layer, and the other for TCP/IP. The common part sublayer is the core part of the IEEE 802.16 MAC. It manages connections and bandwidth, among other functions. There are three types of connections: primary, basic and secondary. Primary connections are used for authentication. Basic connections are used for time-critical MAC control messages. Secondary connections are used for standards-based management messages (e.g., SNMP [5]). The MAC is connection oriented, and all data communications take place in the context of a connection. Connections are added, modified and deleted dynamically. The privacy sublayer is responsible for the security functions:
- Encryption,
- Decryption,
- Authentication,
- Secure key exchange.
This sublayer contains two protocols: Encapsulation and the Privacy and Key Management Protocol (PKM).
Security protocols use Security Associations (SAs). There are three types of SAs: primary, static and dynamic. Primary SAs are established during the initialization process. Static and dynamic SAs can be shared between different subscriber stations (SS). Shared information may include the traffic encryption key (TEK) and the initialization vector (IV).
The BS is responsible for maintaining keying information for all SAs. SA keying material has a limited lifetime. When the BS provides an SS with keying material, it includes information on the remaining lifetime. The keying system is a two-tier one. In the first tier, the BS uses public-key cryptography to send an authorization key (AK) to the SS. In the second tier, the TEK exchange is protected using keys derived from the AK.
The PKM protocol is used for the synchronization of keying information between BS and SS. PKM has two finite state machines: Authorization and TEK exchange. PKM authorization is realized as an exchange of three messages: the SS provides the BS with its certificates (the certificate of the manufacturer and of the station itself), the BS authenticates the SS, and the BS provides the SS with the AK and with the identification of the SAs it is authorized to access. The key encryption key (KEK) and the message authentication keys are derived from the AK. Security components use X.509 Version 3 certificates. The PKM TEK exchange is an exchange of two or three messages, in which the BS sends TEK parameters for each requested SA. PKM is a client/server protocol where the SS is the client. PKM uses RSA with SHA-1. IEEE 802.16 encryption uses DES in CBC mode over the payload. The Generic MAC Header (GMH) and CRC fields are not encrypted.
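As a purely conceptual illustration of this two-tier keying idea, the following Python sketch substitutes modern primitives for the ones in the standard (802.16 itself uses RSA/PKCS#1, 3-DES key wrapping and SHA-1-based derivation, whereas this sketch uses RSA-OAEP, a SHA-256-based KEK and Fernet from the Python cryptography package); it is not an implementation of PKM.

```python
# Conceptual two-tier keying sketch -- NOT the 802.16 cipher suite.
import base64
import hashlib
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Tier 1: the BS wraps the authorization key (AK) with the SS public key.
ss_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ak = os.urandom(20)                                 # BS-generated AK
ak_wrapped = ss_key.public_key().encrypt(ak, oaep)  # sent to the SS
ak_at_ss = ss_key.decrypt(ak_wrapped, oaep)         # SS recovers the AK

# Tier 2: both sides derive a KEK from the AK; the BS wraps the TEK
# with it (stand-in for the PKM Key Reply payload).
kek = base64.urlsafe_b64encode(hashlib.sha256(b"KEK" + ak_at_ss).digest())
tek = os.urandom(16)                                # traffic encryption key
tek_wrapped = Fernet(kek).encrypt(tek)
assert Fernet(kek).decrypt(tek_wrapped) == tek
```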
A new subscriber station enters the network in five steps:
1. SS scans for a BS downlink signal and uses it to establish channel parameters
2. Primary management connection established
3. SS authorized using PKM
4. SS sends a register request and BS responds with second management connection Id
5. Transport connections are created
The first phase of authorization is security capabilities negotiation, during which the SS informs the BS which cryptographic suites it supports and the BS tells the SS which of those to use in the subsequent communication (this information is contained in the descriptor of the primary SA).
The BS generates AKs and TEKs using random or pseudo-random generators. IVs are generated in such a manner as to be unpredictable.
III. NIDS FOR IEEE 802.16 - HAP CASE
Network-based IDS are an important class of contemporary IDS systems. In this class of systems, the IDS processes the stream of packets that are transmitted over the network. In a typical wired environment, the IDS is usually placed at the network perimeter. In the HAP network, which is an 802.16 network in point-to-multipoint (PMP) mode, a logical place for the NIDS is the base station. We propose an architecture with a NIDS sensor in each cell, a NIDS sensor that monitors the link to the gateway, and a coordination and correlation module which has the following functions:
- It correlates alerts from the NIDS sensors in 802.16 cells
- It sends the results of the correlation phase to the network operations center
- It downloads signature updates from a public repository or vendor web site
- It uploads signature updates to the NIDS sensors
Snort is a popular IDS system, significantly present both in everyday use and in research. It is an IDS system for wired networks, but there are important common concepts that IDS systems for wired and wireless networks share. The Snort 3.0 architecture [8] promotes the separation of the Snort Security Platform (SnortSP) from the Engines module, which contains the analytics modules. We propose that the Snort 3.0 architecture can be used as the IDS architecture for HAP systems. In the HAP case, the Dispatcher module not only connects to the local Data Source, but also receives alerts from the coordination module.
SNMP [5] is a de-facto standard for network management in the Internet environment, thus we propose that the HAP IDS system use the same protocol. In the case of HAP networks, it is usually assumed that the network operations center is placed on the ground, and that a wireless link is used for the network management of the HAP network, for software updates and maintenance, and in some cases also for the maneuvering of the HAP vehicle. The network operations center contains the Security Officer console, which allows for visual inspection of the state of the HAP IDS, the list of most recent alerts, and similar features.
A. Threats to the HAP network
As a primary attack venue, we see subscriber stations. We divide the attacks into three classes:
- Attacks at the physical layer,
- Attacks at the MAC layer,
- Attacks at higher layers.
Subscriber stations in the HAP cell can mount physical layer attacks. Jamming and packet scrambling, which belong to this class, are often mentioned in the literature. The jamming attack is mounted using information from the UL-MAP message received from the BS and, if it is a targeted attack, the attacker has to map the CID from the UL-MAP message to the station address. This attack can be realized with short transmissions and low radiated power, which protects the attacker [11].
At the MAC layer, subscriber stations are capable of mounting Denial of Service (DOS) attacks. DOS attacks at the MAC layer are realized as flooding of signalization requests (authentication, capabilities negotiation, key management frames, etc.). The primary lever of those attacks is the triggering of resource-intensive cryptographic operations.
Some of the messages in IEEE 802.16 are not authenticated (Traffic Indication Message, Neighbor Advertisement Message, Fast Power Control, Multicast Assignment Request, Downlink Burst Profile Change Request, Power Control Mode Change Request) [12], which leaves space for attacks.
At higher layers, a distributed variant of DOS (DDOS) attacks is possible. DOS attacks at the application layer, which disturb normal application operation instead of depleting network resources as in classical DOS, have recently become a more and more serious threat. At the application layer, targeted attacks on users are also possible. Typical examples are different types of malware hidden in email attachments. Protective measures include application-level filters at network servers. Those are outside the scope of this paper.
One often cited type of attack, which is possible in 802.16 although it is more difficult to realize than in 802.11, is the rogue base station attack. This type of attack belongs to the class of man-in-the-middle attacks: the rogue base station impersonates a legitimate one. A short description of the attack is given in [4]. Other attacks in this class are more probable in a mesh network rather than in a network in PMP mode. The proposed IDS system at this moment does not include a detection feature for this type of attack.
Besides subscriber stations, the source of attacks at higher layers of the protocol stack can be in external networks, mounted over the link to the gateway. Those attacks are targeted at stations in the HAP network. As this is the last hop in the communication path between the attacker and its target, DOS attacks are already amplified and easily detectable, but the possibility for reaction is limited.
Finally, although SNMPv3 includes authentication, it has to be noted that the link to the operations center presents another attack venue. The privileges that are given to management personnel are wide: software updates, installation of software modules, restart, power on/off, maneuvering in the case of HAP airplanes, etc. Since the operations that are realized over the management interface have a great security impact, the damage that an attacker who successfully impersonates the network operations center could cause is critical.
B. Remarks on the construction of HAP IDS
The first phase of detection in a NIDS is packet "sniffing". While relatively simple to realize in a wired network, in wireless networks the NIDS system has to scan traffic on a set of frequencies. Each of the frequencies is scanned during specific intervals of time. Typically, the same time interval is not devoted to scanning all frequencies in the set, and a heuristic algorithm (sometimes based on fuzzy logic) is usually applied to determine how long to scan each of the frequencies. The integration of the IDS sensor software with the BS protocol stack software would provide a simple method of monitoring the communication between the subscriber stations and the base station. The concept of integration can be similar to the use of filter hooks [9] and filtering platform callout drivers [10] in the Microsoft Windows OS in packet filtering applications for wired networks. Fig. 2 presents the structure of the HAP IDS/IPS at one BS, including the information flows.
The average traffic load on a HAP BS can be estimated in the following way. A traffic stream from one mobile user to the HAP BS can be modeled as 4IPP [13] (the traffic model for IEEE 802.16.3). The number of terrestrial users is 240-256 per cell in published simulations [14]. Thus, the total traffic on a HAP BS coming from terrestrial users in one cell can, in the average case, be modeled as 256 4IPP streams. The HAP IDS should be able to inspect such a stream without losing packets. The 4IPP average rate is 3 pkts/unit-of-time. The bandwidth of the SS-HAP link is 1 Mbps in simulations [15] and the packet size is 1500 bytes, so the packet rate is $10^6/(1500 \times 8) \approx 83$ packets per second. The parameters of the basic 4IPP model (see Table 1; the 4IPP average rate is calculated as the sum of the IPP stream rates and equals 3 pkts/unit-of-time) should therefore be scaled by $83/3 = 27.8$ units-of-time per second. The resulting parameters of the IPP streams for the model of communication in the HAP network are given in Table 2.
**TABLE 1. PARAMETERS OF IPP STREAMS IN BASIC 4IPP TRAFFIC MODEL**
| Source # | $\lambda_i$ IPP in ON state (pkts/unit-of-time) | Averaged over both ON and OFF states (pkts/unit-of-time) |
|----------|-----------------------------------------------|--------------------------------------------------------|
| IPP#1 | 2.679 | 1.1480 |
| IPP#2 | 1.698 | 0.7278 |
| IPP#3 | 1.388 | 0.5949 |
| IPP#4 | 1.234 | 0.5289 |
**TABLE 2. PARAMETERS OF IPP STREAMS IN HAP WiMAX TRAFFIC MODEL**
| Source # | Averaged over both ON and OFF states (pkts/sec) – for 1 SS | Averaged over both ON and OFF states (pkts/sec) – for 256 SS |
|----------|------------------------------------------------------------|-------------------------------------------------------------|
| IPP#1 | 31.91 | 8168.96 |
| IPP#2 | 20.23 | 5178.88 |
| IPP#3 | 16.54 | 4234.24 |
| IPP#4 | 14.7 | 3763.2 |
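As a sanity check on these numbers, the following snippet reproduces Table 2 from Table 1 using the scaling factor derived above; per-SS rates are rounded to two decimals before the per-cell multiplication, matching the rounding used in the table.

```python
# Reproduce Table 2: scale the Table 1 averaged IPP rates from
# pkts/unit-of-time to pkts/sec (factor 83/3 ≈ 27.8), then multiply
# by 256 subscriber stations per cell.

BASIC_4IPP_AVG = {  # pkts/unit-of-time, averaged over ON and OFF (Table 1)
    "IPP#1": 1.1480,
    "IPP#2": 0.7278,
    "IPP#3": 0.5949,
    "IPP#4": 0.5289,
}

SCALE = 27.8   # units-of-time per second (83 pkts/s link rate / 3 pkts/unit)
NUM_SS = 256   # subscriber stations per cell

for name, rate in BASIC_4IPP_AVG.items():
    per_ss = round(rate * SCALE, 2)        # pkts/sec for one SS
    per_cell = round(per_ss * NUM_SS, 2)   # pkts/sec for 256 SS
    print(f"{name}: {per_ss} pkts/s per SS, {per_cell} pkts/s per cell")

# Output matches Table 2, e.g., IPP#1: 31.91 pkts/s per SS, 8168.96 per cell.
```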
Since it aggregates events from several cells, the correlation module can in some cases detect low-volume DDOS attacks (at higher layers of the protocol stack) that would otherwise (without the aggregation and correlation of alerts coming from different cells) pass unnoticed.
In the case of a multi-HAP network, the operation of the HAP IDS systems belonging to the individual HAP networks can be coordinated in a centralized manner (from the network operations center on the ground), or they can cooperate in a distributed manner. The realization of such a cooperative system is outside the scope of this paper.
Attacker localization is an important feature in wireless security. The application of techniques such as triangulation for that purpose is outside the scope of this paper.
**C. Possible improvements**
Besides the aforementioned cooperation in case of multi-HAP networks, there are two directions in which the proposed concept can be improved and/or extended.
The first one is that the 802.16 network can be used in mesh mode. There are already some proposals for IDS systems for wireless mesh networks: OpenLIDS [2], WATCHERS, TIARA, CONFIDANT, MoblIDS, RESANE, SCAN [3]. The change from PMP to mesh mode would require a substantial rework of the concept.
The other direction is that, having provided the detection functionality, the next step is the reaction feature (intrusion prevention). Such an architecture is based on the use of a Policy Enforcement Point (PEP) engine at the base station, often implemented as a firewall. In that case, the functionality of the correlation module would be extended with the following function: dispatching new alerts that result from the correlation phase back to the NIDS sensors. In order to achieve efficient use of communication and processing resources, the correlation module should be able to filter the alerts that it sends to sensors. Strict limitations on the weight of the payload that can be placed on board the HAP imply that processing resources must be used efficiently. For that reason, we propose that the coordination and correlation module be a lightweight topic-based publish/subscribe system. There is a publish/subscribe association between this module and the PEP engines: the correlation module is the publisher and the PEP engines are subscribers. We remark that in this design the PEP engines are collocated with the sensors.

The proposed system is a good platform for realizing the reaction feature, because the communication stream from the correlation module to the PEP engines, which carries the results of the correlation phase, in the average case contains enough information for decisions on reaction.
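To illustrate the proposed publish/subscribe association, the following is a minimal in-memory sketch in which the correlation module publishes correlated alerts by topic and the PEP engines in individual cells subscribe to them; the class, topic, and field names are our illustrative assumptions.

```python
# Minimal sketch of the lightweight topic-based publish/subscribe
# association between the correlation module (publisher) and the PEP
# engines (subscribers). A real deployment would run over the HAP's
# internal links rather than in-process callbacks.

from collections import defaultdict

class CorrelationModule:
    """Publishes correlated alerts to PEP engines subscribed per topic."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> [callback]

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, alert):
        # Topic-based filtering: only PEPs subscribed to this alert class
        # receive it, conserving on-board processing resources.
        for deliver in self.subscribers[topic]:
            deliver(alert)

class PEPEngine:
    """Policy Enforcement Point collocated with a NIDS sensor in one cell."""

    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.blocked = set()

    def on_alert(self, alert):
        # React: block the malicious subscriber station in this cell.
        self.blocked.add(alert["ss"])
        print(f"cell {self.cell_id}: blocking SS {alert['ss']}")

correlator = CorrelationModule()
for cell in ("cell-1", "cell-2", "cell-3"):  # e.g., overlapping cells
    pep = PEPEngine(cell)
    correlator.subscribe("ddos", pep.on_alert)

# A correlated DDOS verdict is pushed to all subscribed cells, so the
# malicious SS cannot simply hand over to a neighboring cell.
correlator.publish("ddos", {"ss": "SS-42", "type": "signaling_flood"})
```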
Implementing the IDS/IPS as described allows for a swift reaction in the case of handovers of malicious subscriber stations. Once recognized as malicious, such a station can be disconnected and prevented from moving to a neighboring cell. This is especially of interest in overlapping areas, where a mobile station can choose one of up to three cells (in the average case) for its communication.
IV. CONCLUSION AND FUTURE WORK
This paper presents an approach to securing HAP networks by using intrusion detection systems. The protected network is an 802.16 network in point-to-multipoint mode. The proposed system is a distributed network-based IDS with a NIDS sensor in each cell, collocated with the base station software on board the HAP vehicle. The IDS sensors monitor the communication links between the base station and the subscriber stations, and the backhaul link to the ground station is monitored as well.
The system allows for the detection of distributed DOS attacks in the HAP network. The paper also describes the modifications required to include the reaction feature (intrusion prevention) in a straightforward manner. The correlation module in the extended system is a publish/subscribe server that publishes the results of the correlation phase to the policy enforcement point engines in the HAP cells.
The system could be further developed to include support for cooperation of IDS/IPS systems in multi-HAP networks.
ACKNOWLEDGMENT
This paper is a continuation of the research conducted within the HAPCOS project (COST Action 297). This work was partially supported by the Ministry of Education and Science of the Republic of Serbia under projects No. 32031 and 44009, year 2011.
REFERENCES
[1] IEEE Std 802.16™-2009, IEEE Standard for Local and metropolitan area networks, Part 16: Air Interface for Broadband Wireless Access Systems, IEEE-SA Standards Board, The Institute of Electrical and Electronics Engineers, Inc.
[2] F. Hugelshofer, P. Smith, D. Hutchison, and N.J.P. Race, OpenLIDS: A Lightweight Intrusion Detection System for Wireless Mesh Networks, Mobicom 09, Beijing, China, 2009, pp. 309-320
[3] T.M. Chen, G.S. Kuo, Z.P. Li, and G.M. Zhu, Intrusion Detection in Wireless Mesh Networks, chapter in Security in Wireless Mesh Networks, Auerbach Publications, 2008
[4] M. Barbeau, J. Hall, and E. Kranakis, Detecting Impersonation Attacks in Future Wireless and Mobile Networks, Workshop on Secure Mobile Ad-hoc Networks and Sensors, MADNES 2005
[5] U. Blumenthal and B. Wijnen, User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3), RFC 3414, 2002, The Internet Society
[6] D. Grace, M. Mohoric, M.H.Capstick, M. Bobbio Pallavicini, and M. Fitch, "Integrating Users into the Wider Broadband Network via High Altitude Platforms", IEEE Wireless Communications, Vol. 12, No. 5, pp. 98-105, October 2005
[7] T.C. Tozer and D. Grace, "High Altitude Platforms for Wireless Communications", IEE Electronics and Communications Engineering Jnl, Vol. 13, No. 3, June 2001, pp. 127-137
[8] Snort 3.0 Architecture Series Part 1: Overview, http://securitysauce.blogspot.com/2007/11/snort-30-architecture-series-part-1.html, retrieved: November, 2011.
[9] Filter-Hook Drivers, http://msdn.microsoft.com/en-us/library/windows/hardware/ff540489%28v=vs.85%29.aspx, retrieved: November, 2011.
[10] Introduction to Windows Filtering Platform Callout Drivers, http://msdn.microsoft.com/en-us/library/ff569544%28VS.85%29.aspx, retrieved: November, 2011.
[11] A. Arkoudi-Vafea, Security of IEEE 802.16, Master Thesis, Department of Computer and Systems Science, Royal Institute of Technology, Stockholm, Sweden, 2006
[12] A. Deiminger, S. Kiyomoto, J. Kurihara, and T. Tanaka, “Security Vulnerabilities and Solutions in Mobile WiMAX”, IJCSNS International Journal of Computer Science and Network Security, VOL.7 No.11, November 2007.
[13] C. R. Baugh, 4IPP Traffic Model for IEEE 802.16.3, IEEE 802.16.3c-00/51.
[14] Floriano De Rango, Mauro Tropea, and Salvatore Marano, Integrated Services on High Altitude Platform: Receiver Driven Smart Selection of HAP-Geo Satellite Wireless Access Segment and Performance Evaluation, International Journal of Wireless Information Networks, Vol. 13, No. 1, January 2006, pp. 77-94, DOI: 10.1007/s10776-005-0020-z
[15] C. E. Palazzi, C. Roseti, M. Luglio, M. Gerla, M. Y. Sanadidi, and J. Stepanek, Enhancing Transport Layer Capability in HAPS-Satellite Integrated Architecture, Wireless Personal Communications, Vol 32 Issue 3-4, February 2005
Alteration Method of Schedule Information on Public Cloud for Preserving Privacy
Tatsuya Miyagami\textsuperscript{1}, Atsushi Kanai\textsuperscript{1}, Noriaki Saito\textsuperscript{2}, Shigeaki Tanimoto\textsuperscript{3}, Hiroyuki Sato\textsuperscript{4}
\textsuperscript{1} Graduate School of Engineering Hosei University, Tokyo, Japan
firstname.lastname@example.org, email@example.com
\textsuperscript{2} NTT Information Platform Laboratories, Tokyo, Japan
firstname.lastname@example.org
\textsuperscript{3} Chiba Institute of Technology, Chiba, Japan
email@example.com
\textsuperscript{4} The University of Tokyo, Tokyo, Japan
firstname.lastname@example.org
Abstract—We are currently experiencing an explosion of cloud technologies. However, a cloud service administrator may be an untrustworthy third party, so companies dealing with confidential information cannot use a public cloud. In this paper, we propose a method for preventing the leakage of private information on a cloud schedule service. With this method, even a cloud administrator or a hacker who steals a cloud service login key cannot read the true schedule, because the schedule date is altered and the schedule content is encrypted. Consequently, a public cloud schedule service can be used safely. We implemented the proposed method as an alteration program running against a Google server and show, by actual evaluation, that its performance is practical.
Keywords—cloud computing; Internet security; privacy; Google Calendar; schedule service; date alteration.
I. INTRODUCTION
We are currently experiencing an explosion of cloud technologies. However, by considering “cloud” as a social infrastructure, we must also consider security [1]. For example, a cloud system is not managed by a user; therefore, cloud users cannot be certain that their critical information is actually safe [2].
Since schedule services are useful in a variety of fields, many people use them to manage their personal scheduling information. However, companies are unable to use scheduling services because the information may include private or confidential data [3]. If there is a malicious administrator in the cloud, he might steal a user's private or confidential information. If a malicious administrator exploits a company's confidential information, such as an important meeting schedule or customer information, the company's finances and reputation may be seriously damaged [4].
To solve this problem, it is necessary to protect private or confidential information from third-party tapping. For documents, preserving privacy or confidentiality by encryption is easy, and the contents of a schedule can likewise be encrypted on the schedule server. However, the schedule dates cannot be encrypted on the schedule server, because the encrypted data is a binary value that cannot be saved in the schedule server as a schedule date. Therefore, encryption of schedule dates cannot be used with a calendar service without changing the calendar interface.
In this paper, we propose a method for altering dates. The original schedule dates are completely changed to different dates. By using both alteration and encryption, a schedule can be protected from a third party. This is a good solution for managing both security and convenience of existing cloud schedule services at the same time. Note that we are not concerned here with encryption of schedule contents, which is rather simple, but with the alteration of schedule dates.
The organization of this paper is as follows. Section II describes related work. Section III introduces our alteration method for schedule services. Section IV describes the saving and reading algorithms created with our method. Section V discusses the evaluation of our alteration method. Section VI summarizes this paper.
II. RELATED WORK
Techniques of preserving privacy have been discussed for On-Line Analytical Processing (OLAP). Furthermore, Database-As-a-Service (DAS), which provides data management services for cloud computing, has become familiar.
With these technologies, server administrators may be untrustworthy third parties; therefore, privacy-preserving technologies have been necessary. In OLAP, perturbation techniques have been investigated for privacy-preserving data mining [5] [6] [7]. Using this technique, original values are perturbed and stored in a database; however, results of statistical queries remain correct. Consequently, privacy as original values is preserved. In DAS, cryptography has been commonly used to perform queries on encrypted data stored on a database [1] [8] [9].
The above techniques need to be applied to the basic functions of database systems, and it is necessary to replace or develop a new server system to use them. On the other hand, we assume a current schedule server in the cloud. In this case, data types not handled by the schedule server cannot be used; dates need to be stored as dates in the schedule server through its APIs. Therefore, dates cannot be encrypted, because the binary value resulting from encryption cannot be stored in the date field of schedule databases. Furthermore, unlike perturbation, an altered date must be decodable back to the original date. For this reason, we developed a date alteration method.
III. CONCEPT OF ALTERATION METHOD OF SCHEDULE DATES
In this section, we describe the concept of our method for altering schedule dates.
A. Overview of alteration method
The process of altering schedule dates is shown in Figure 1. First, the original schedule date is prepared on the client side. Next, the original schedule is converted to a different date by using a password, which is inputted and stored on the client side. Finally, the altered date is transferred to the cloud server.
B. Trust model
A trust model is shown in Figure 2. We assume that a public cloud is not trusted; consequently, the password of a public cloud service used for account authentication is also not trusted. For example, a security-aware cloud [10][11] has been proposed with this kind of trust model. For this reason, another password (Key), which is different from the original password, and alteration of the original schedule date have to be prepared. This Key must be kept on the client side, and the alteration and encryption modules must run on the client PC, because the PC is assumed to be trusted.
C. Alteration
A schematic of saving a schedule is shown in Figure 3. First, the original schedule data is divided into the elements "true-year (TY)", "true-month (TM)", "true-day (TD)", "true-time", and "schedule-content" (the scheduled event). When the original date is altered, a Key, which is prepared by the client, is used. The TY, TM, and TD are converted into "altered-year (AY)", "altered-month (AM)", and "altered-day (AD)" by using an alteration algorithm and the Key. Only the TY, TM, and TD are altered; the true-time is not. Note that when the true date is altered, the information necessary to confirm the true date is created. A detailed explanation of this information is given in Section IV.
A schematic of reading a schedule is shown in Figure 4. It is assumed that the TY, TM, and TD are known by the client, while the true-time and schedule-content are unknown. The TY, TM, and TD are altered again using the same Key. The client accesses the altered date, the encrypted schedule contents, and the true-time. The schedule is decrypted with the Key, and the schedule contents and the information to confirm the true date are separated. Finally, the true date is output using this confirmation information.
D. Date alteration pattern
We divided our date alteration method into three patterns, and examined the efficiency of saving and reading a schedule. These three patterns are shown in Figure 5.
In Pattern 1, the day is converted to another within the same month, the month is converted to another within the same year, and the year is converted to another in the entire range of years contained in the system. In Pattern 2, the day is converted to another within the same month and the month is converted to another in the entire range of months contained in the system. In Pattern 3, the day is converted into another day within the entire range contained in the system.
The performance and degree of vulnerability differ depending on each pattern. This is discussed in more detail in Section V.
IV. SAVING AND READING ALGORITHMS
In this section, we describe the saving and reading algorithms of dates.
These algorithms were created using our alteration method. The relationship between the true date and the altered date is shown in Figure 6. The user's password is input and converted into a series of numbers, which are defined as the Key. The length of the Key should be 16 bits when using the exclusive-OR function. When using a block cipher, the length of the Key depends on the block cipher algorithm. Both algorithms are described as follows.
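As an illustration, the following sketch derives a 16-bit numeric Key from the user's password for the exclusive-OR variant. The use of SHA-256 for the conversion is our assumption; the paper does not specify how the password is converted into numbers.

```python
# Minimal sketch: derive a 16-bit Key from a password. The choice of
# SHA-256 and truncation to the first two bytes is our assumption.

import hashlib

def derive_key(password: str) -> int:
    digest = hashlib.sha256(password.encode("utf-8")).digest()
    return int.from_bytes(digest[:2], "big")  # truncate to 16 bits

key = derive_key("my secret passphrase")
assert 0 <= key < 2**16
```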
A. Saving algorithm
The saving algorithm of our alteration method is described as follows. The service's range means the entire period of the calendar in the specific service. Note that "A", "B", "C", "D", and "E" are defined as the "year of the starting date", "year of the terminal date", "year of the true date", "month of the true date", and "day of the true date", respectively.
(1) The difference between the true date and starting date of a particular service’s range is calculated.
a) Pattern 1
$$\text{dif\_year} = C - A \quad (1)$$
b) Pattern 2
$$\text{dif\_month2} = (C - A) \times 12 + (D - 1) \quad (2)$$
c) Pattern 3
The number of days from the starting date to the schedule date is calculated using the function $F_1(x)$, which returns a number of days.
$$\text{dif\_day3} = F_1(A, 1, 1, C, D, E) \quad (3)$$
(2) The temporal values are calculated using the exclusive-OR function or a block cipher.
a) Pattern 1
$$EY = \text{dif\_year} \oplus Key \quad (4)$$
$$EM = (D - 1) \oplus Key \quad (5)$$
$$ED = (E - 1) \oplus Key \quad (6)$$
The following equations are used if a block cipher is used.
$$EY = E_{Key}(\text{dif\_year}) \quad (7)$$
$$EM = E_{Key}(D - 1) \quad (8)$$
$$ED = E_{Key}(E - 1) \quad (9)$$
b) Pattern 2
$$EM2 = \text{dif\_month2} \oplus Key \quad (10)$$
$$ED2 = (E - 1) \oplus Key \quad (11)$$
The following equations are used if a block cipher is used.
$$EM2 = E_{Key}(\text{dif\_month2}) \quad (12)$$
$$ED2 = E_{Key}(E - 1) \quad (13)$$
c) Pattern 3
$$ED3 = \text{dif\_day3} \oplus Key \quad (14)$$
The following equations are used if a block cipher is used.
$$ED3 = E_{Key}(\text{dif\_day3}) \quad (15)$$
(3) The values calculated in A-(2) are brought within the service's range using the modulo function, which gives the remainder of a division. The remainder obtained with the mod function is the altered date, while the quotient obtained with the division function (div) is the information used to confirm the true date, mentioned in Section III-C. This information is called the Element of Read Data (ERD). ND denotes the number of days in one month.
a) Pattern 1
The altered date is obtained as the remainder of the respective divisions, and the corresponding quotients form the ERD. The altered date is determined to be AY1, AM1, and AD1.
$$AY1 = \text{mod}(EY, B - A) + A \quad (16)$$
$$AM1 = \text{mod}(EM, 12) \quad (17)$$
$$AD1 = \text{mod}(ED, ND) \quad (18)$$
$$ERD_{year} = \text{div}(EY, B - A) \quad (19)$$
$$ERD_{month} = \text{div}(EM, 12) \quad (20)$$
$$ERD_{day} = \text{div}(ED, ND) \quad (21)$$
b) Pattern 2
The year and month are combined and calculated as the number of months. SRM is the number of months in a service’s range.
$$SRM = (B - A) \times 12 + 12 \quad (22)$$
$$AMN = \text{mod}(EM2, SRM) \quad (23)$$
$$AD2 = \text{mod}(ED2, ND) \quad (24)$$
$$ERD_{month\_number} = \text{div}(EM2, SRM) \quad (25)$$
$$ERD_{day2} = \text{div}(ED2, ND) \quad (26)$$
The altered date is determined to be AY2, AM2, and AD2.
$$AY2 = \text{div}(AMN, 12) + A \quad (27)$$
$$AM2 = \text{mod}(AMN, 12) + 1 \quad (28)$$
c) Pattern 3
SRD is the number of days in a service's range, calculated using $F_1(x)$.
$$SRD = F_1(A, 1, 1, B, 12, 31) \quad (29)$$
$$ADN = \text{mod}(ED3, SRD) + 1 \quad (30)$$
$$ERD_{day\_number} = \text{div}(ED3, SRD) \quad (31)$$
Using $F_2(x)$, the altered date is determined as AY3, AM3, and AD3. Here, $F_2(x)$ provides the altered-year, altered-month, and altered-day.
$$(AY3, AM3, AD3) = F_2(A, 1, 1, ADN) \quad (32)$$
(4) The contents of the schedule and the ERD are concatenated.
(5) The above data is saved to the cloud server as the schedule contents. If necessary, the schedule contents are encrypted using the Key.
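The following Python sketch traces the Pattern 1 saving steps with the exclusive-OR variant, following equations (1), (4)-(6), and (16)-(21). Binding ND to the number of days in the altered month is our assumption; the paper defines ND only as the number of days in one month.

```python
# A sketch of the Pattern 1 saving algorithm (exclusive-OR variant).
# AM1 and AD1 are kept 0-based as in the equations and shifted by +1
# when forming a real calendar date.

import calendar

def save_pattern1(key: int, A: int, B: int, C: int, D: int, E: int):
    """A/B: first and last year of the service's range; C-D-E: true date."""
    dif_year = C - A                       # (1)
    EY = dif_year ^ key                    # (4)
    EM = (D - 1) ^ key                     # (5)
    ED = (E - 1) ^ key                     # (6)

    AY1 = EY % (B - A) + A                 # (16)
    AM1 = EM % 12                          # (17)
    ND = calendar.monthrange(AY1, AM1 + 1)[1]  # days in the altered month (assumed)
    AD1 = ED % ND                          # (18)

    erd = (EY // (B - A), EM // 12, ED // ND)  # (19)-(21)
    # The ERD triple is concatenated with the (encrypted) schedule
    # contents, and the altered date below is stored on the cloud server.
    return (AY1, AM1 + 1, AD1 + 1), erd

altered_date, erd = save_pattern1(key=0x5A3C, A=1970, B=2038, C=2011, D=11, E=22)
print(altered_date, erd)
```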
B. Reading algorithm

b) Pattern 2

The number of months from the starting date is computed using the altered year and altered month.

$$AMN = (AY - A) \times 12 + (AM - 1) \quad (39)$$
The original date is calculated from the ERD.

$$TMN = (ERD_{month\_number} \times SRM + AMN) \oplus Key \quad (40)$$
$$TY2 = \text{div}(TMN, 12) + A \quad (41)$$

$$TM2 = \text{mod}(TMN, 12) + 1 \quad (42)$$

$$TD2 = ((ERD_{day2} \times ND + AD2) \oplus Key) + 1 \quad (43)$$
When the block cipher is used, the true date is calculated as follows, where $E^{-1}_{Key}$ denotes decryption with the Key.

$$TMN = E^{-1}_{Key}(ERD_{month\_number} \times SRM + AMN) \quad (44)$$

$$TD2 = E^{-1}_{Key}(ERD_{day2} \times ND + AD2) + 1 \quad (45)$$
Note that after "(44)" is applied, "(41)" and "(42)" are applied to calculate TY2 and TM2.
c) Pattern 3
The difference in days from the starting date to the altered schedule date is calculated using $F_1(x)$ and the altered date.

$$ADN = F_1(A, 1, 1, AY3, AM3, AD3) \quad (46)$$
The true date is calculated from the ERD.

$$(TY3, TM3, TD3) = F_2(A, 1, 1, ((ERD_{day\_number} \times SRD + ADN) \oplus Key)) \quad (47)$$
When the block cipher is used, the original date is calculated as follows.

$$(TY3, TM3, TD3) = F_2(A, 1, 1, E^{-1}_{Key}(ERD_{day\_number} \times SRD + ADN)) \quad (48)$$
(4) The computed TY, TM, and TD are compared with C, D, and E. If the two dates are equal, the entry belongs to the requested date and the calculated schedule becomes available.
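A minimal roundtrip sketch for Pattern 2 with the exclusive-OR variant is given below: saving follows equations (2), (10)-(11), and (22)-(28), and reading follows equations (39)-(43). As in the previous sketch, binding ND to the altered month (recomputable on both sides from the stored altered date) is our assumption.

```python
# Pattern 2 save/read roundtrip (exclusive-OR variant). AD2 is kept
# 0-based, exactly as in the paper's equations.

import calendar

def save_pattern2(key, A, B, C, D, E):
    dif_month2 = (C - A) * 12 + (D - 1)          # (2)
    EM2 = dif_month2 ^ key                       # (10)
    ED2 = (E - 1) ^ key                          # (11)
    SRM = (B - A) * 12 + 12                      # (22)
    AMN = EM2 % SRM                              # (23)
    AY2 = AMN // 12 + A                          # (27)
    AM2 = AMN % 12 + 1                           # (28)
    ND = calendar.monthrange(AY2, AM2)[1]        # days in the altered month (assumed)
    AD2 = ED2 % ND                               # (24)
    erd = (EM2 // SRM, ED2 // ND)                # (25), (26)
    return (AY2, AM2, AD2), erd

def read_pattern2(key, A, B, altered, erd):
    (AY2, AM2, AD2), (erd_mn, erd_d2) = altered, erd
    SRM = (B - A) * 12 + 12                      # (22)
    AMN = (AY2 - A) * 12 + (AM2 - 1)             # (39)
    TMN = (erd_mn * SRM + AMN) ^ key             # (40)
    TY2 = TMN // 12 + A                          # (41)
    TM2 = TMN % 12 + 1                           # (42)
    ND = calendar.monthrange(AY2, AM2)[1]        # same ND as at save time
    TD2 = ((erd_d2 * ND + AD2) ^ key) + 1        # (43)
    return TY2, TM2, TD2

key, A, B = 0x5A3C, 1970, 2038
altered, erd = save_pattern2(key, A, B, 2011, 11, 22)
assert read_pattern2(key, A, B, altered, erd) == (2011, 11, 22)
```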
V. EVALUATION AND DISCUSSION
A. Implementation
We developed alteration modules for "Google Calendar" [12], a type of Software as a Service (SaaS). We did not use an existing calendar application; instead, we used the Google Calendar API [13] to evaluate the algorithm. The environment of this module is shown in Figure 8.
In actual use, the proposed algorithm should be built into a Google Calendar application for user convenience, as shown in Figure 9.
B. Performance Evaluation
We evaluated the performance of the proposed method under the three patterns mentioned in the previous section, using the implemented modules discussed in the previous subsection.
B-1) Reading time for a month in all three cases.
B-2) Reading time for a year in Patterns 1 and 2.
We did not evaluate the performance of saving schedule data because there was no difference in performance among the three patterns; therefore, we only evaluated the performance of reading schedule data. When schedule data is read, it is usually appropriate to read the data for one month or one week; reading data for a whole year is not practical. However, to compare the performance of the algorithms, the schedule data was also read for a year.
For B-1, all schedules within the month specified by the user are read, and the elapsed time of reading the schedules was evaluated for the three alteration patterns. The measured results for B-1 are shown in Figure 10. The elapsed time of Pattern 3 was much longer than those of the other two patterns. Patterns 1 and 2 retrieve the schedule for a month with only one API call. On the other hand, Pattern 3 retrieves the schedule for a month by making one API call per day of the month.
The elapsed time for Pattern 3 did not change much when the number of schedule dates included in one month increased. In other words, in B-1 the elapsed time depended mainly on the number of API calls rather than on the number of schedule dates.
Equation "(49)" is derived from Figure 10. Note that "T" is the elapsed time.

$$T = t_n \times (A - x) + t_e \times x \quad (49)$$

$t_n$: the elapsed time of one API call that returns no date.

$t_e$: the elapsed time of one API call that returns a date.

$x$: the number of API calls that return a date.

$A$: the maximum number of API calls.
Here, $t_n$ and $t_e$ are calculated from the elapsed times measured for one month (maximum number of days: 31). Note that $t_n$ and $t_e$ are assumed to be constants.
For calling the schedule dates of a month, we substitute $A = 31$, $x = 5$, and $T = 9000$ ms into "(49)":

$$t_n \times (31 - 5) + t_e \times 5 = 9000 \quad (50)$$

When $x = 10$,

$$t_n \times (31 - 10) + t_e \times 10 = 9300 \quad (51)$$

From "(50)" and "(51)", $t_n$ and $t_e$ are calculated as

$$t_n = 280 \text{ (ms)} \quad (52)$$

$$t_e = 344 \text{ (ms)} \quad (53)$$

By substituting "(52)" and "(53)" into "(49)", T is represented as

$$T = 280 \times (A - x) + 344 \times x \quad (54)$$

When substituting $x = 20$ to confirm the elapsed time,

$$T = 280 \times (31 - 20) + 344 \times 20 = 9960 \text{ (ms)} \quad (55)$$
The calculated value was close to the measured value.
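The calibration above amounts to solving a 2x2 linear system; the following snippet reproduces (52)-(55) from (50) and (51), mirroring the truncation used in the text.

```python
# Solve the calibration equations of the elapsed-time model (49):
#   26*t_n +  5*t_e = 9000   (50): A=31, x=5
#   21*t_n + 10*t_e = 9300   (51): A=31, x=10

t_n = int((2 * 9000 - 9300) / 31)   # 2*(50) - (51) eliminates t_e; 280.6 truncated to 280
t_e = round((9000 - 26 * t_n) / 5)  # back-substitute into (50): 344
print(t_n, t_e)                     # 280 344, matching (52) and (53)

# Prediction for x = 20, per (54)-(55):
A, x = 31, 20
print(t_n * (A - x) + t_e * x)      # 9960 ms, close to the measured value
```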
For B-2, we compared the elapsed time of reading schedule data for a year. The measured results for B-2 are shown in Figure 11. The elapsed time of Pattern 2 was longer than that of Pattern 1: Pattern 1 obtained the schedule for a year with one API call, whereas Pattern 2 obtained it with twelve API calls (one per month). Only the theoretical value of Pattern 3 is shown in Figure 11, because this pattern could not complete the API calls for a year in the experimental environment.
The theoretical values of the elapsed time were calculated using "(54)".

The elapsed time of Pattern 1 is

$$T = 280 \times (1 - 1) + 344 \times 1 = 344 \text{ (ms)} \quad (56)$$

The elapsed time of Pattern 2 is

$$T = 280 \times (12 - 8) + 344 \times 8 = 3872 \text{ (ms)} \quad (57)$$

The elapsed time of Pattern 3 is

$$T = 280 \times (365 - 20) + 344 \times 20 = 103480 \text{ (ms)} \quad (58)$$
When "(56)", "(57)" and Table II were compared, the actual measurement and theoretical values were close.
According to Tables I and II, the elapsed time was at most 10 seconds. In a cloud environment, the processing time is expected to increase; even so, an elapsed time of 10 seconds is acceptable for practical use. However, since the theoretical value for Pattern 3 is about 103 seconds, that pattern is not practical for reading schedule dates for a year.
The elapsed time is proportional to the number of API calls. Therefore, Pattern 1, which needs the fewest API calls, is superior to the other patterns in terms of performance, and the degree of module performance is ordered Pattern 1 > Pattern 2 > Pattern 3.
To improve the elapsed time, it is necessary to develop an algorithm for reducing the number of API calls.
C. Security of alteration
Possible attacks are described as follows.
First, the original schedule date may be predicted by a third party. As mentioned in Section III, Pattern 3 is the safest pattern because a date is mapped throughout the entire schedule range. In Pattern 2, the day alteration is performed within the same month, so the distribution of altered dates within a month follows the true distribution. Consequently, if a hacker observes newly added schedules, he may infer which altered month corresponds to the current month, because the current month is the month to which the largest number of schedules is added. Under Pattern 3, this inference is not possible, which makes it difficult to guess the true date.
A user prepares a password (Key) beforehand for altering the date. However, if the same Key is used for a long time, the space of altered dates is not large; therefore, a hacker could recover the Key by brute force. To prevent this kind of attack, a user should change the Key periodically. When the Key is changed, it is necessary to recalculate the ERD at the same time.
Second, the original schedule might be guessed from the altered schedule. A schematic of guessing the true schedule from an altered one is shown in Figure 12. For instance, suppose there were many schedules in 2011 and few in 2010, and that 2010 and 2011 are altered to 2030 and 2005, respectively. Assume the server administrator knows the client's schedule frequency in 2010 and 2011. The administrator investigates the altered schedule frequency in 2030 and 2005 and compares the two years. If it turns out that 2005 had many schedules, the administrator may conclude that the altered schedules of 2005 correspond to those of 2011. The same holds not only for the combination of "year and month" but also for "month and day." Therefore, Pattern 3 is safest, and Pattern 2 is safer than Pattern 1.
Figure 12. Guessing true schedule from altered schedule
### TABLE III. AVAILABLE FUNCTIONS AND APIs
| Function of current schedule APIs | Availability of alteration |
|----------------------------------|----------------------------|
| Create an event | ○ |
| Create a new calendar | ○ |
| Repeat an event | ○ |
| Privacy settings for individual events | ○ |
| Edit or view event details | ○ |
| Delete or remove an event | ○ |
| Delete a calendar | ○ |
| Events that last all day | ○ |
| Color Code an Event | ○ |
| Edit your calendar name | ○ |
| Notifications (Daily Agenda) | × |
| Notifications (Event Reminders) | × |
### D. Available APIs
Schedule dates are all altered on the Google Calendar server, so some APIs might not work properly. The existing Google Calendar API functions are listed in Table III. Existing schedule services have many functions; for instance, a notification will be set for the date after alteration, so an alarm would be activated on the wrong date.
### VI. CONCLUSION AND REMARKS
There are various advantages to cloud computing; however, there are still many security problems. We proposed an alteration method to protect private or confidential information from third-party tapping. We also implemented two alteration modules and evaluated the proposed method's performance in three patterns on Google Calendar. We found that the method's performance is lower than usage without alteration, but it is still practical. We plan to develop a better-performing algorithm in the future.
For actual use, the proposed modules should be built into a Google Calendar application for user convenience. This is left for future work.
### REFERENCES
[1] C. Almond, “A Practical Guide to Cloud Computing Security”, Accenture and Microsoft, August 27, 2009
[2] “Public Cloud Computing Security Issues”. [Online] Available: http://www.thebunker.net/managed-hosting/cloud/public-cloud-computing-security-issues/. <retrieved: November, 2011>
[3] D. Yuefa, W. Bo, G. Yaqiang, Z. Quan, and T. Chaojing, “Data Security Model for Cloud Computing”, ISBN 978-952-5726-06-0, Proceeding of the 2009 International Workshop on Information Security and Application (IWISA 2009) Qingdao, China, November 21-22, 2009
[4] BalaBit IT security, “Cloud Security Risks and Solutions”, First Edition, July 1, 2010
[5] R. Agrawal, R. Srikant, and D. Thomas, “Privacy Preserving OLAP” Proc. 25th ACM SIGMOD Int’l Conf. Management of Data, ACM Press, 2005, pp. 251-262
[6] N. Zhang and W. Zhao, “Privacy-Preserving Data Mining Systems”, IEEE Computer, April 2007, pp. 52-58
[7] J. Vaidya and C. Clifton, “Privacy-Preserving Data Mining: Why, How, and When”, IEEE Security & Privacy Building Confidence in a Networked World, November/December, 2004
[8] D. Boneh, G. Crescenzo, R. Ostrovsky, and G. Persiano, “Public Key Encryption with Keyword Search”, Proceedings of EUROCRYPT ’04, vol. 3027
[9] Z. Yang, S. Zhong, and N. Wright, “Privacy-Preserving Queries on Encrypted Data”, Proceedings of the 11th European Symposium On Research In Computer Security (ESorics), LNCS4189, pp. 479-495, 2006.
[10] H. Sato, A. Kanai, and S. Tanimoto, “A Cloud Trust Model in Security Aware Cloud”, Proceedings of 10th International Symposium on Applications and the Internet (SAINT 2010), pp. 121
[11] H. Sato, A. Kanai, and S. Tanimoto, “Building a Security Aware Cloud by Extending Internal Control to Cloud”, Proceedings of 10th International Symposium on Autonomous Decentralized Systems (ISADS 2011), 2011.
[12] Google, “Date API Developer’s Guide”. [Online] Available: http://code.google.com/intl/ja/apis/calendar/data/2.0/developers_guide.html <retrieved: November, 2011>
[13] Google, “Calendar help”. [Online] Available: http://www.google.com/support/calendar/ <retrieved: November, 2011>
Identifying Potentially Useful Email Header Features for Email Spam Filtering
Omar Al-Jarrah*, Ismail Khater‡ and Basheer Al-Duwairi†
*Department of Computer Engineering
†Department of Network Engineering & Security
Jordan University of Science & Technology, Irbid, Jordan 22110
‡Department of Computer Systems Engineering
Birzeit University, Birzeit, West Bank, Palestine
Email: email@example.com, firstname.lastname@example.org, email@example.com
Abstract—Email spam continues to be a major problem in the Internet. With the spread of malware combined with the power of botnets, spammers are now able to launch large-scale spam campaigns that cause major traffic increases and lead to enormous economic loss. In this paper, we identify potentially useful email header features for email spam filtering by analyzing publicly available datasets. Then, we use these features as input to several machine learning-based classifiers and compare their performance in filtering email spam. These classifiers are: C4.5 Decision Tree (DT), Support Vector Machine (SVM), Multilayer Perceptron (MP), Naïve Bayes (NB), Bayesian Network (BN), and Random Forest (RF). Experimental studies based on publicly available datasets show that the RF classifier has the best performance, with an average accuracy, precision, recall, F-measure, and ROC area of 98.5%, 98.4%, 98.5%, 98.5%, and 99%, respectively.
Index Terms—Email Spam, Machine Learning
I. INTRODUCTION
Email spam, defined as unsolicited bulk email, continues to be a major problem in the Internet. Spammers are now able to launch large-scale spam campaigns; malware and botnets have helped them spread spam widely. Email spam causes many problems: it increases traffic and leads to enormous economic loss. Recent studies [1], [2] revealed that spam constitutes more than 89% of email traffic. According to Symantec [3], the global spam rate in March 2011 was 79.3%. The cost of managing spam is huge compared with the cost of sending it, which is negligible. It includes the waste of network resources and storage, the cost of traffic and congestion in the network, and the cost associated with lost employee productivity. It was estimated that an employee spends on average 10 minutes a day sorting through unsolicited messages [4]. Other studies [5], [6], [7] reported that spam costs billions of dollars. Ferris Research Analyzer Information Services estimated the total worldwide financial losses caused by spam in 2009 at $130 billion, $42 billion of it in the U.S. alone [8].
Spammers are increasingly employing sophisticated methods to spread their spam emails. In addition, they employ advanced techniques to evade spam detection. A typical spam campaign involves using thousands of spam agents to send spam to a targeted list of recipients. In such campaigns, standard spam templates are used as the base for all email messages. However, each spam agent substitutes different set of attributes to obtain messages that do not look similar. Moreover, spammers are increasingly adopting image-based spam wherein the body of the spam email is converted to an image, which renders text-based and statistical spam filters useless.
Blocking spam email is considered a priority for network administrators and security researchers. There have been tremendous research efforts in this field that have resulted in many commercial spam filtering products. Header-based email spam filtering is one of the main approaches in this field. In this approach, a machine learning classifier is applied to features extracted from email header information to distinguish ham from spam, and the accuracy of a header-based email spam filter depends greatly on the email header fields used for feature selection. In this paper, we identify potentially useful email header features by analyzing large, publicly available datasets to determine the most distinctive features. We also include most of the mandatory and optional email header fields in order to fill any gap or missing information required for email classification.
This paper presents a performance evaluation of several machine learning-based classifiers and compares their performance in filtering email spam based on email header information. It also proposes including important email header features for this purpose. The rest of this paper is organized as follows: Section II reviews related work. Section III discusses the main email header features considered in our work. Section IV evaluates the performance of different machine learning-based classifiers in header-based email spam filtering. Finally, Section V concludes the paper.
II. RELATED WORK
An email message typically consists of header and body. The header is a necessary component of any email message. The Simple Mail Transfer Protocol (SMTP) [15] defines a set of fields to be contained in the email message header to achieve successful delivery of email messages and to provide important information for the recipient. These fields include:
email history, email date and time, sender of the email, receiver(s) of the email, email ID, email subject, etc. Header-based email spam filtering represents an efficient and lightweight approach to filtering spam messages by inspecting email message header information. Typically, a machine learning classifier is applied to features extracted from email header information to distinguish ham from spam. For example, Sheu [10] categorized emails into four categories based on the title: sexual; finance and job-hunting; marketing and advertising; and total category. He then classified them according to attributes from the email message header. He proposed a new filtering method based on a categorized Decision Tree (DT), namely, applying the Decision Tree technique to each of the categories based on attributes (features) extracted from the email header. The extracted features come from the sender field, the email's title, the sending date, and the email's size. Sheu applied his filter to Chinese emails and obtained accuracy, precision, and recall of 96.5%, 96.67%, and 96.3%, respectively.
Wu [11] proposed rule-based processing that identifies and digitizes the spamming behaviors observed in the headers and syslogs of emails by comparing the most frequent header fields of these emails with their syslog entries at the server. Wu noticed differences between the header fields of sent emails and what is recorded in the syslog, and utilized this spamming behavior as features for describing emails. Rule-based processing and back-propagation neural networks were applied to the extracted features, achieving an accuracy of 99.6% with a ham misclassification rate of 0.63%. Ye et al. [12] proposed a spam discrimination model based on SVM to sort emails according to features of the email headers. The features extracted from the header fields are the return-path, received, message-id, from, to, date, and x-mailer. Using the SVM classifier, they achieved a recall of 96.9%, a precision of 99.28%, and an accuracy of 98.1%.
Wang [13] presented a statistical analysis of the header session messages of junk and normal emails and the possibility of utilizing these messages for spam filtering. A statistical analysis was performed on the contents of 10,024 junk emails collected from a spam archive database. The results demonstrated that up to 92.5% of junk emails are filtered out when utilizing the mail user agent, message-id, and sender and receiver addresses as features.
Recently, Hu et al. [9] proposed an intelligent hybrid spam-filtering framework to detect spam by analyzing only email headers. This framework is suitable for extremely large email servers because of its scalability and efficiency. Their filter can be deployed alone or in conjunction with other filters. The features extracted from the email header are the originator field, destination field, x-mailer field, sender server IP address, and email subject. Five popular classifiers were applied to the extracted features: Random Forest (RF), C4.5 Decision Tree (DT), Naïve Bayes (NB), Bayesian Network (BN), and Support Vector Machine (SVM). The best performance was obtained by the RF classifier, with accuracy, precision, recall, and F-measure of 96.7%, 92.99%, 92.99%, and 93.3%, respectively. These results were obtained when applying the classifiers to a dataset of 33,209 emails and another dataset of 21,725 emails. The work presented in this paper focuses mainly on potentially useful header features for email spam filtering. These features were selected by analyzing publicly available datasets (described in Subsection IV-B). Table I provides a summary of the main email header features considered by different spam filtering techniques as reported in the literature. It also shows the main features that we consider in our work.
III. FEATURE SELECTION
Feature selection is the most important step of the header-based email spam filtering technique. In this step, we study the information available in the email message header and carefully select some of it as features for classification. It is important to mention that the selection of email header features is based on analyzing large, publicly available datasets (described in Subsection IV-B) to determine the most distinctive features. It is also important to point out that we include most of the mandatory and optional email header fields in order to fill any gap or missing information required for email classification. Figure 1 shows the process of building a feature vector for an email. This process starts with preprocessing of email messages to convert them into the standard format described in RFC 2822. After that, we extract the header of the email, select the required features, and build the feature vector, which summarizes all the needed information from an email. This feature vector is then used to build the feature space for all emails needed for the classification phase.
The following subsections describe the fields of the email message header that we consider in our work, which turn out to be of great value for classifying email messages.
A. Received Field
Each email can contain more than one "Received" field. This field is typically used for email tracking by reading it from bottom to top: the bottom entry represents the first mail server involved in transporting the message, and the top entry the most recent one, where each Received line represents a handoff between machines. Hence, a new Received field is added to the top of the stack by each host that receives the email and transports it, recording from which host the message came, to which host it will be delivered, and the time and date of passing. The following are the features that we extract from this field; a parsing sketch for the first two features follows the list.
### Table I
**Email Header Features Considered by Different Machine Learning Spam Filtering Techniques**
| Sheu, 2009 [10] | Ye et. al., 2008 [12] | Wu, 2009 [11] | Hu et. al., 2010 [9] | Wang & Chen, 2007 [13] | Our Approach |
|-----------------|-----------------------|---------------|----------------------|------------------------|--------------|
| Length of sender field, Sender field, Title (more than one category), Time, Size of email | Received field (domain add., IP add., relay servers, date, time), From field, To field, Date field, Message-ID, X-Mailer | Comparing header fields with syslog | Originator fields, Destination fields, X-Mailer field, Sender IP, Email subject | Sender address validity, Receiver address (To, CC, BCC), Mail User Agent, Message-ID | Received field # of hops, Span Time, Domain add. Legality, Date & Time Legality, IP add. Legality, sender add. legality, # of Receivers (To, CC, BCC), Mail User Agent, Message-ID, Email subject Date of reception |
1. **The number of hops.** This feature represents the number of relay servers used to deliver the message from its origin to its final destination. Based on different datasets, it was noticed that most spam messages have a small number of hops. This suggests that spammers exploit a predefined set of relay servers for delivering their spam, so the number of hops is limited, while in the normal case the number of relay servers varies according to the path the message follows to reach its final destination.
2. **Span time.** Span time represents the total time the email takes on its journey from origin to final destination. This is one of the most important features in our work. Most spam emails have a large span time compared to legitimate emails, and for some of them the span time is even negative.
3. **Domain address existence.** This feature expresses whether the domain address of the host that delivers the message exists or not. It may be of little value on its own for discriminating spam from ham, but we keep it as a supporting feature.
4. **Date and time legality.** The purpose of this feature is to discover illegal date and time of email messages. The idea here is to check the date and time of email messages as they travel from one relay server to another. We believe this is an important feature because typically the date and time of legitimate email servers would be adjusted correctly. However, this is not necessarily the case for compromised machines that are used as email relays as we have discovered in the spam dataset.
5. **IP address legality.** This feature checks the legality of the host IP address, because spammers tend to hide or obfuscate IP addresses of their spam messages in order to avoid being blacklisted. We just check the format and the existence of the IP address.
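The following Python sketch shows one way to extract the first two features, the hop count and the span time, from a parsed email; assuming that each Received line carries an RFC 2822 date after its last semicolon is a simplification.

```python
# Minimal sketch: extract hop count and span time from "Received" fields.

from email import message_from_string
from email.utils import parsedate_to_datetime

def received_features(raw_email: str):
    msg = message_from_string(raw_email)
    received = msg.get_all("Received") or []
    hops = len(received)  # number of relay handoffs recorded

    span_seconds = None
    try:
        # The top entry is the most recent hop, the bottom entry the first.
        newest = parsedate_to_datetime(received[0].rsplit(";", 1)[1])
        oldest = parsedate_to_datetime(received[-1].rsplit(";", 1)[1])
        span_seconds = (newest - oldest).total_seconds()  # may be negative
    except (IndexError, ValueError, TypeError):
        pass  # malformed or missing dates are themselves a suspicious signal

    return hops, span_seconds
```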
### C. Number of Receivers
The recipient addresses of an email message are listed in one or more of the "To", "CC", and "BCC" fields. The "To" field contains the addresses of the primary recipients, the carbon copy "CC" field contains the addresses of the secondary recipients, and the blind carbon copy "BCC" field contains the addresses of recipients who are not included in the copies of the email sent to the "To" and "CC" recipients. Many studies (e.g., [9], [13]) showed that spammers prefer the "BCC" field for sending spam to a large number of recipients, because no recipient can obtain the address list collected by the spammer: the SMTP server sends a separate email to each recipient listed in the "BCC" field, and every recipient has no information about the others. In fact, most spam emails have a small number of addresses in the "To" field, which suggests that they were originally sent to many recipients via the "BCC" field so that individual recipients would not be able to identify other recipients of the same email.
### D. Date of Reception
The "Date" field is a mandatory field that represents the date and time at which the email is sent by the sender at the Mail User Agent (MUA). It is to be mentioned that the time recorded in this field is based on the location of the sender's mail server, which may belong to a time zone different from the recipient's. Therefore, we convert all timing information to Coordinated Universal Time (UTC) to have a common base for comparison. Basically, we compare the date of sending the email with the date of reception as recorded at the final hop in the "Received" field. We noticed that most spam emails do not have a valid date of reception, which suggests that this feature could be very helpful in our study.
### E. Mail User Agent (MUA)
This is an optional field in the email header that appears as the "X-Mailer" field and contains the email program used to generate the email, i.e., the name and version of the email client or MUA. Spammers usually tend to leave this field empty or fill it with random text. Based on that, we take this field into consideration by checking whether it is present in or missing from the email message header.
F. Message-ID
This is a globally unique ID for each generated message. The "Message-ID" field is a machine-readable ID that incorporates the name of the machine and the date and time when the email was sent. This field consists of two parts separated by the @ sign; the right-side part specifies the domain name or the machine name. This is of particular interest because we noticed that most spammers tend to hide this part or even fake the domain name to avoid being blacklisted. Therefore, it is necessary to make sure that the domain name in the "Message-ID" field is the same as the domain name in the "From" field; inconsistency in this information would indicate spamming behavior. It is important to mention that some mail user agents append the machine name to the domain name on the right of the @ sign. To overcome this issue, we used partial matching against the domain name in the "From" field, and we observed mismatches in most of the spam emails.
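A minimal sketch of this consistency check follows; the helper name and the partial-matching rule (suffix comparison of the two domains) are our illustrative reading of the procedure.

```python
# Check the domain to the right of '@' in Message-ID against the domain
# of the From address, using partial (suffix) matching.

def domains_consistent(message_id: str, from_addr: str) -> bool:
    try:
        mid_domain = message_id.strip("<> ").rsplit("@", 1)[1].lower()
        from_domain = from_addr.strip("<> ").rsplit("@", 1)[1].lower()
    except IndexError:
        return False  # a missing '@' part is itself a spam indicator

    # Partial match: "mail3.example.com" should match "example.com".
    return mid_domain.endswith(from_domain) or from_domain.endswith(mid_domain)

assert domains_consistent("<abc123@mail3.example.com>", "alice@example.com")
assert not domains_consistent("<xyz@bogus.net>", "alice@example.com")
```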
G. Email Subject
The subject contains a limited number of characters as described in RFC 822 and RFC 2822 [15]. It contains the topic and a summary of the email. Spammers may exploit the subject and use some special characters or words (e.g., “Try it for free!!”, “$ Money Maker $”, “* URGENT ASSISTANT NEEDED *”, etc.) to attract the user to open the email. Therefore, having special characters/phrases in the subject line may strongly indicate that the email is spam.
IV. PERFORMANCE EVALUATION
In this section, we evaluate the performance of several machine learning-based classifiers and compare their performance in filtering email spam based on the email header information described in Section III. In particular, we consider C4.5 Decision Tree (DT), Support Vector Machine (SVM), Multilayer Perceptron (MP), Naïve Bayes (NB), Bayesian Network (BN), and Random Forest (RF). Our experiments evaluate the performance of these classifiers in terms of accuracy, precision, recall, and F-measure, as defined in Subsection IV-A, using publicly available datasets. The email spam datasets were divided into training and test sets according to the cross-validation technique; we used 10-fold cross-validation. The Weka tool [14] was used to apply the machine learning techniques. Weka requires that the features conform to its input format; therefore, the features were arranged in a CSV file in the following format:
feature 1, feature 2, ..., feature n, class label
By default, the class labels are located at the end of each row. In our experiments, we have two class labels used to categorize the emails: a legitimate email is marked as Ham, while a spam email is marked as Spam.
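For illustration, the following snippet writes feature vectors in this CSV layout with the class label last; the feature names follow Section III, and the values are invented for the example.

```python
# Emit feature vectors in the CSV layout described above (label last),
# ready for import into Weka. Feature names and values are illustrative.

import csv

HEADER = ["num_hops", "span_time_s", "domain_exists", "date_time_legal",
          "ip_legal", "num_receivers", "mua_present", "msgid_consistent",
          "subject_suspicious", "class"]

rows = [
    [4, 125.0, 1, 1, 1, 1, 1, 1, 0, "Ham"],
    [2, -3600.0, 0, 0, 1, 1, 0, 0, 1, "Spam"],  # note the negative span time
]

with open("features.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(HEADER)
    writer.writerows(rows)
```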
A. Performance Metrics
We use the following standard performance metrics to evaluate the proposed technique: accuracy, precision, recall, and F-measure, which are defined as follows:

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$

$$Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}$$

$$F\text{-}measure = \frac{2 \times Precision \times Recall}{Precision + Recall}$$
where \(FP, FN, TP, TN\) are defined as follows:
- **False Positive (FP):** The number of misclassified legitimate emails.
- **False Negative (FN):** The number of misclassified spam emails.
- **True Positive (TP):** The number of spam messages that are correctly classified.
- **True Negative (TN):** The number of legitimate emails that are correctly classified.
Precision is the percentage of correct predictions (for spam email), while spam recall is the probability that true positive examples are retrieved (the completeness of the retrieval process); the two metrics are not directly related. The F-measure combines them in one equation that can be interpreted as a weighted average of precision and recall. In addition, we use Receiver Operating Characteristics (ROC) curves, which are commonly used to evaluate machine learning-based systems. These curves are two-dimensional graphs in which the TP rate is plotted on the y-axis and the FP rate on the x-axis, thereby depicting the tradeoff between benefits (TP) and costs (FP) [19]. A common method to compare classifiers is to calculate the Area Under the ROC Curve (AUC).
It is important to mention that our definition of the performance metrics is mainly based on the confusion matrix shown in Figure 2.
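The following snippet computes the four metrics defined above directly from the confusion-matrix counts; the counts in the example are illustrative only.

```python
# Compute accuracy, precision, recall, and F-measure from the
# confusion-matrix counts, with spam as the positive class.

def metrics(tp: int, tn: int, fp: int, fn: int):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# e.g., a hypothetical run over 1000 emails:
print(metrics(tp=780, tn=190, fp=12, fn=18))
```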
B. Datasets
Our experiments are based on the following two publicly available recent datasets.
- CEAS2008 live spam challenge laboratory corpus [16], which contains 32,703 labeled emails: 26,180 spam emails and 6,523 ham emails. This dataset was collected during the CEAS 2008 conference and is considered one of the TREC public spam corpora.
- CSDMC2010 spam corpus [18]. This dataset contains 4327 emails out of which there are 2949 non-spam (ham) emails and 1378 spam emails.
It is important to mention that these datasets were used for training and testing.
C. Experimental Results
1) Results based on CEAS2008 dataset: Figure 3 depicts the performance of the different classifiers in terms of accuracy, precision, recall, F-measure, and the area under the ROC curve. The figure shows the disparity among the classifiers in terms of precision, recall, F-measure, and accuracy. It can be seen that the RF classifier outperforms all the other classifiers, with an average accuracy, precision, recall, F-measure, and ROC area of 98.5%, 98.4%, 98.5%, 98.5%, and 99%, respectively. The ROC curves for all classifiers considered in this study are shown in Figure 4. This figure confirms that the RF classifier has the best performance, as it maintains the best balance between false positive rate and true positive rate. The DT classifier comes after the RF classifier, then the MP and SVM classifiers, while the BN and NB classifiers come last; NB was the worst in this group.
It is important to mention that the results of the other classifiers were as follows. The DT classifier achieved an average precision and recall of 98.4%, which indicates that it succeeds in classifying most emails based on their header information. The SVM classifier achieved good results for this dataset; however, the results were not as good for the smaller dataset described in Subsection IV-C2. The other issue is the trade-off between FP and FN, which is captured by the ROC area. For the MP classifier, the datasets were divided using the cross-validation technique. Given the trained network, spam emails in the test set are recognized by invoking the simulation function, which takes the input feature vector and the trained network as inputs, computes the outputs according to the weights of the neurons, and selects the output with the maximum weight. This classifier achieved an average precision and recall of 97.8%.
2) Results based on the CSDMC2010 dataset: To confirm the results obtained using the CEAS2008 dataset, we repeated our experiments using another recent dataset (albeit of smaller size). Figure 5 depicts the performance of the different classifiers on this dataset in terms of accuracy, precision, recall, F-measure and the area under the ROC curve. It can be seen that the RF classifier outperforms all the other classifiers, with an average accuracy, precision, recall, F-measure, and ROC area of 95.8%, 95.8%, 95.8%, 95.8% and 98.1%, respectively. It is to be noted that all classifiers achieved comparable performance this time, indicating that the performance of some classifiers depends on the dataset used for testing and training. Although the MP classifier was very successful, recognizing 99% in both cases, the RF classifier remained at the top of the list in terms of performance. The ROC curves for all classifiers considered in this study are shown in Figure 6. This figure confirms that the RF classifier has the best performance compared to the other classifiers, as it maintains the best balance between false positive rate and true positive rate.
D. Comparison with Previous Work
In this subsection, we compare the performance of the proposed scheme with other header-based email spam filtering techniques ([9], [10], [11], [12], [13]) based on the results reported in the literature for these techniques. Table II shows the best performance reported for each of them and compares it to the results obtained using the proposed work. It can be seen that applying the RF classifier to the email header features described in Section III results in better performance compared to these techniques.
### Table II
**Performance of the Proposed Work Compared to Other Header-Based Email Spam Filters. A: Accuracy, P: Precision, R: Recall, F: F-Measure**
| Spam Filter | Sheu, 2009 [10] | Ye et al., 2008 [12] | Wu, 2009 [11] | Hu et al., 2010 [9] | Wang & Chen, 2007 [13] | Our Approach |
|-------------|-----------------|----------------------|---------------|---------------------|------------------------|--------------|
| Classifier(s) used | DT | SVM | Rule-based & back-propagation NN | RF, DT, NB, BN, SVM | Statistical analysis | DT, SVM, MP, NB, BN, RF |
| Best performance obtained | A=96.5%, P=96.67%, R=96.3% | A=98.1%, P=99.28%, R=96.9% | A=99.6% (ham misclassification = 0.63%) | RF (A=96.7%, P=93.5%, R=92.3%, F=93.3%) | 92.5% of junk emails are filtered out | RF (A=98.5%, P=98.9%, R=99.2%, F=99%) |
---
**Fig. 6.** ROC curves for the six classifiers applied on CSDMC2010 dataset
---
### V. Conclusion
Spammers are increasingly employing sophisticated methods to spread their spam emails, and they employ advanced techniques to evade spam detection. A typical spam campaign involves using thousands of spam agents to send spam to a targeted list of recipients. In such campaigns, standard spam templates are used as the base of all email messages; however, each spam agent substitutes a different set of attributes to obtain messages that do not look similar. In this paper, we evaluated the performance of several machine learning-based classifiers and compared their performance in filtering email spam based on email header information. These classifiers are: C4.5 Decision Tree (DT), Support Vector Machine (SVM), Multilayer Perceptron (MP), Naïve Bayes (NB), Bayesian Network (BN), and Random Forest (RF). We adopted header-based email spam filtering by including additional header information features that were found to be of great importance in improving the performance of this technique. We evaluated the proposed work through experimental studies based on publicly available datasets. Our studies show that the RF classifier outperforms all the other classifiers, with an average accuracy, precision, recall, F-measure, and ROC area of 98.5%, 98.4%, 98.5%, 98.5%, and 99%, respectively.
### References
[1] C. Kreibich, et al., “Spamcraft: An Inside Look At Spam Campaign Orchestration,” Proceedings of the Second USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET '09), Boston, Massachusetts, April 2009.
[2] MessageLabs Intelligence, “MessageLabs Intelligence: 2010 Annual Security Report,” 2010. Retrieved: July, 2011. Available at: http://www.clearnorthitech.com/images/MessageLabsIntelligence_2010_Annual_Report.pdf
[3] Symantec, March 2011 Intelligence Report, Retrieved: July, 2011. Available at: http://www.symantec.com/about/news/release/article.jsp?prid=20110329_01
[4] S. Hinde, “Spam, scams, chains, hoaxes and other junk mail,” Computers & Security, vol. 21, pp. 592 - 606, 2002.
[5] A. R. B. Blog, October, 2010, The Dangers of SPAM. Retrieved: June, 2011. Available: http://www.anthonyricigliano.info/the-dangers-of-spam/
[6] A. C. Solutions, January 7, 2011 Statistics and Facts About Spam. Retrieved: July, 2011. Available: http://www.acsl.ca/2011/01/07/statistics-and-facts-about-spam/
[7] A. Cournane and R. Hunt, “An analysis of the tools used for the generation and prevention of spam,” Computers & Security, vol. 23, pp. 154-166, 2004.
[8] R. Jennings, January 28, 2009, Cost of Spam is Flattening – Our 2009 Predictions. Retrieved: July, 2011. Available at: http://ferris.com/2009/01/28/cost-of-spam-is-flattening-our-2009-predictions/
[9] Y. Hu, et al., “A scalable intelligent non-content-based spam-filtering framework,” Expert Syst. Appl., vol. 37, pp. 8557-8565, 2010.
[10] J-J. Sheu, “An Efficient Two-phase Spam Filtering Method Based On E-mails Categorization,” International Journal of Network Security, vol. 9, pp. 34-43, July 2009.
[11] C.-H. Wu, “Behavior-based spam detection using a hybrid method of rule-based techniques and neural networks,” Expert Systems with Applications, vol. 36, pp. 4321-4330, April, 2009.
[12] M. Ye, et al., “A Spam Discrimination Based on Mail Header Feature and SVM,” In Proc. Wireless Communications, Networking and Mobile Computing, 2008. WiCOM ’08. 4th International Conference on Dalian Oct. 2008.
[13] C.-C. Wang and S.-Y. Chena, “Using header session messages to antispamming,” Computers & Security, vol. 26, pp. 381-390, January 2007.
[14] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, “The WEKA Data Mining Software: An Update,” SIGKDD Explorations, 2009.
[15] P. Resnick, Ed., “Internet Message Format,” Network Working Group, RFC 2822. Retrieved: July, 2011. Available: http://tools.ietf.org/html/rfc2822.html
[16] CEAS 2008 Live Spam Challenge Laboratory corpus. Retrieved: March, 2011. Available at: http://plg1.uwaterloo.ca/cgi-bin/cgiwrap/gvcormac/fooceas.
[17] R. Beverly and K. Sollins, “Exploiting Transport-Level Characteristics of Spam,” presented at the CEAS, Mountain View, CA, August 2008.
[18] CSMINING Group (2010), Spam email datasets, CSDMC2010 SPAM corpus. Retrieved: March, 2011. Available at: http://csmining.org/index.php/spam-email-datasets.html
[19] T. Fawcett, “An introduction to ROC analysis,” Pattern Recognition Letters - Special issue: ROC analysis in pattern recognition, vol. 27, pp. 861-874, June 2006
Fault Tolerant Distributed Embedded Architecture and Verification
Chandrasekaran Subramaniam
Research and Development
Rajalakshmi Engineering College/ AUT
Chennai, India
firstname.lastname@example.org
Prasanna Vetrivel, Srinath Badri
Electrical and Electronics Engineering
Easwari Engineering College/ AUT
Chennai, India
email@example.com, firstname.lastname@example.org
Sriram Badri
Electronics and Communication Engineering
Sri Venkateswara College of Engineering/ AUT
Chennai, India
email@example.com
Abstract— The objective of this work is to propose a distributed embedded architecture model for tolerating faults while performing security functions using multiple field programmable gate arrays (FPGAs). Hardware encryption and decryption modules are used as customized modules within the devices, which act as a cooperative system to tolerate omission and commission faults. The different security functions communicate through a common UART channel, and security operations are synchronized with standard protocols initiated by an embedded micro controller. The decision of locating the working and available modules among a pool of devices is carried out by the microcontroller using an intelligent F-map mechanism, which directs the control instructions. The model is scalable to an increased number of similar devices connected across the common communication channel. The model is verified for all its paths using the Symbolic Model Verifier NuSMV to assert the dynamic behavior of the architecture under different faulty conditions.
Keywords- Distributed architecture; Fault tolerance; Security module; Model verification; Assertion technique.
I. INTRODUCTION
The security architecture of embedded systems depends not only on the functional and performance requirements but also on the cost and spatial requirements suited to the target platforms. For example, the data path should be secured against many privacy attacks in the case of embedded systems used in mobile applications. The data flow based on the dependence graph is solely determined by the components embedded as intellectual properties to perform the expected computations in real time. The architecture suitable for such computations with the security primitive components should be formally verified in order to avoid security errors like communication and synchronization errors. Due to the heterogeneity of security hardware components from different vendors, integrating them may lead to further challenges in the architectural design. The earlier SAFES architecture [1] focuses on reconfigurable hardware that monitors the system behavior to realize intrusion detection. The proposed SANES architecture [2] focuses on security controller and component controller components to monitor abnormal behavior at system run time. Irrespective of the cryptographic algorithm used in the security primitive, the primitive components are to be self-tested, since the data path may vary dynamically in a distributed embedded system. The components may not be available at some point of time when they are needed, and they should be available in a fault free condition within the distributed mesh of devices. The correct selection of the primitive components, either in the source or in the destination FPGA, is to be decided in a power-efficient manner among a pool of similar devices. The security processes are to be completed in the correct sequence, and the operations are to be enabled in the same dataflow form when they are completed [3]. The components are treated as resources within a single device to complete the submitted task, and other similar devices are considered coordinating devices controlled by a centralized controller. A field mapping technique is proposed through which the resource components available in the devices are connected to form a distributed system, considering the power consumption and the propagation delay involved in the on-demand architecture. The distributed security architecture model has to be formally verified for its behavior to meet the reachability and fail free conditions. The standard NuSMV tool [5] supports LTL model checking, where the individual parameters can be inspected to investigate the effect of choices. Existing embedded system architectures are not capable of keeping up with the computational demands of security processing, due to increasing data rates and the complexity of security protocols [7]. High assurance cryptographic applications require a design to be partitioned and physically independent to ensure information cannot leak between secure design partitions. Ensuring this partition separation in the event of independent hardware faults is one of the principal tenets of a high-assurance design [8]. FPGAs are highly promising devices for implementing private-key cryptographic algorithms. Compared to software-based solutions, FPGA-based implementations [9] can achieve superior performance and security. Hence the main focus of this work is to propose a fault tolerant distributed architecture model for security hardware and to check whether the model's behavior meets the specifications.
The organization of the paper is as follows: Section 2 proposes the distributed embedded architecture for the security hardware using a single reconfigurable FPGA device with all the needed security primitive components. Section 3 discusses the sequences of processes needed to meet the performance requirements of the security architecture. Section 4 extends the single system on chip to a multi-FPGA model in which the data packet is routed between four similar devices under different faulty situations that lead to the worst case performance of the model. Section 5 explains the role of the micro controller in managing the field mapping of the devices when the needed security components are faulty. Section 6 discusses the performance of the model in the best case and worst case scenarios. Section 7 illustrates the model checking using NuSMV and its verified output, and explores the scalability of the architecture along with its limitations, such as the issue of packet conflicts along the UART communication path.
II. DISTRIBUTED EMBEDDED SYSTEM ARCHITECTURE
The architecture selection depends on the resource availability and reliability requirements of the security system. A reconfigurable platform like an FPGA, in addition to a high speed micro controller, can organize the different computations needed to control the sequence of operations in the system. The resources are available in the form of intellectual properties (IPs) within the chip and are to be assigned tasks in an efficient manner by the micro controller as the central manager. The best suited architecture is a distributed embedded architecture in which the resources are different sets of workers and the operations are performed in data flow form. The basic security hardware architecture consists of encryption and decryption modules for 64-bit data along with their corresponding activation switches. Data transmission and reception take place through the transmitter and receiver modules placed in the same chip and configured for different baud rates, and the actual transfer takes place through a universal asynchronous receiver transmitter (UART) module. The configuration of the security modules is self-tested by the wired built-in self test (BIST) components connected with them to tolerate functional faults, controlled by a BIST Controller that regulates the data flow. All the components are selected or enabled by control signals from the device selector (DEVICE SEL), as shown in Fig. 1. The device reads the plain text from the keyboard and routes it through the encryption and decryption modules as per the instructions received from the micro controller, which may reside inside or outside the chip assembly and is connected through a bus. The deciphered text is displayed in the activated display unit of the selected chip.
The micro controller is responsible for maintaining the queue of tasks and allocates tasks to computational modules as and when they are fault free. Because different resources need different execution times for processing the tasks, the micro controller has to run an intelligence algorithm to decide the available resource for that instant of time to forward the data. The micro controller gets updated component information at regular sampling intervals and refreshes its resource status (RST) and device status table (DST) in private registers. Control signals are issued by the device select component as and when needed to send or receive. In cases where there are concurrent requests from multiple devices over the limited bandwidth channel, the micro controller forms a priority table to decide the order in which the queue of tasks is to be completed. The micro controller is embedded with the algorithm to assign priorities to the various requests.
Figure 1. Basic security hardware architecture of the FPGA device.
III. SEQUENCE OF PROCESSES
The security components needed to perform cryptographic operations are initialized with requests from the central manager component, i.e., the micro controller. The sequence of processes is determined based on the availability of those components in their fault free condition so that the data flow can be triggered. The enabling and disabling of the components in the pool of resources is handled by the intelligent algorithm called F-Map embedded in the micro controller. For encryption of text coming through an input peripheral (say, a keyboard connected to any one of the FPGA devices), the sequence of processes differs from that of the decryption processes towards the output peripheral (say, a display device connected to another or the same FPGA device). For any FPGA to work, it is mandatory that the transmission and reception switches and the UART connection function properly. The microcontroller maintains two tables: a Device Status Table (DST), which lists the farm of FPGAs that can take part in the security process, and a Resource Status Table (RST), which lists the resource status of each FPGA in the device farm. During runtime, the micro controller forms a Run Time Table in which all working FPGAs obtained from the DST, with all their resources in fault-free condition (resource status obtained from the RST) for the current operation, i.e., encryption or decryption, are enlisted depending on the power dissipation (Power Aware mode) or propagation delay (Performance mode) of the FPGAs. A code sketch of this selection loop is given after the operation list below.
A. Sequence of Operation
- Check for updates in the run-time table_encryption
- Get data from the SOURCE_FPGA.
- Check the status of resources in the SOURCE_FPGA.
- If Fault free, send data to FPGA for encryption.
- Else, select next FPGA based upon the mode of operation in the runtime table to encrypt the data.
- Check updates in run-time table_decryption.
- Check the status of resources in the DESTINATION_FPGA
- If Fault free, send encrypted data to DESTINATION_FPGA for decryption.
- Else, select the next FPGA in the runtime table and instruct it to decrypt the data.
- Transmit data to destination.
- Wait and Go to 1.
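A minimal Python sketch of this control loop is given below. Every name in it (the table entries, the `encrypt`/`decrypt` callables, the resource keys) is our own illustrative invention, since the paper specifies the behavior rather than an implementation.

```python
def fault_free(fpga, operation):
    """Resource status for the requested operation, as read from the RST."""
    return fpga["resources"].get(operation, False)

def select_fpga(runtime_table, operation):
    """Return the first fault-free device; the run-time table is assumed to be
    pre-sorted by the operating mode (power aware or performance)."""
    for fpga in runtime_table:
        if fault_free(fpga, operation):
            return fpga
    return None

def process(data, encryption_table, decryption_table):
    """One pass of the listed sequence: pick a source FPGA to encrypt, then a
    destination FPGA to decrypt and deliver."""
    src = select_fpga(encryption_table, "encrypt")
    if src is None:
        return None                       # no fault-free encryption resource
    cipher = src["encrypt"](data)
    dst = select_fpga(decryption_table, "decrypt")
    if dst is None:
        return None                       # no fault-free decryption resource
    return dst["decrypt"](cipher)         # transmit to destination
```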
IV. MULTI FPGA BASED FAULT TOLERANT DISTRIBUTED EMBEDDED SYSTEM
The distributed embedded architecture on chip proposed in this work can be integrated with similar devices so as to make a distributed fault tolerant model. A micro controller is connected to instruct and manage the resource allocation between the devices based on the fault free conditions of the needed components. The algorithm embedded in the controller accepts the status of all the devices in terms of their resource components and decides the routing that the plain or cipher text has to follow, as shown in Fig 2. The reliability of the distributed system is enhanced by a dynamic component redundancy technique as and when a fault gets detected. Even though the functional component is static within a device, the controller calls the fault free components dynamically. The availability and system reliability are enhanced at the cost of communication overhead between the controller and the status table registers, which are consulted multiple times. Since performance depends on the speed of completion of the submitted task, the reliability of the system is improved while maintaining the expected performance level.
In the distributed architecture, the microcontroller behavior starts from instructing the distributed architecture of FPGAs in its Null state. After a time interval (when an implicit or watchdog timer expires) its state transits from Null to Get_Status state (MT1). The state changes from Get_Status to Monitor_Req when the microcontroller successfully reads the status of all the FPGAs (MT2). The coordinating micro controller continues to remain in
Monitor\_Req state until any request is made (MT3). The state transits from Monitor\_Req to Get\_Info if a request is made by any one of the FPGAs (MT4). On occurrence of any data error during the request, the controller tracks back from Get\_Info to Monitor\_Req (MT5). On successfully acquiring the information from FPGAs, a state transition occurs from Get\_Info to Analyse\_Requester\_Status (MT6). If all the resource primitive components at the source and destination FPGAs are fault-free, a state changes from Analyse\_Requester\_Status to Normal\_Service (MT7). On successful completion of the task state transits from Normal\_Service state to Monitor\_Req state (MT8). If source or destination devices or both have non availability errors in any of their resources, the controller changes state from Analyse\_Requester\_Status to Worst\_Case\_Service (MT9). On successful completion of the event, the state changes from Worst\_Case\_Service state to Monitor\_Req (MT10). The control remains in the Normal\_Service state until the predefined time elapses on occurrence of a communication error (MT11). The microcontroller changes state from Normal\_Service to Get\_Info state if the error continues to persist at the end of the predefined time interval (MT12). While encountering various errors, the control remains in the Worst\_Case\_Service state until the predefined time elapses (MT13). The control changes Worst\_Case\_Service to Get\_Info state if the error exists when the predefined time is exhausted (MT14). The control of operations shift from Worst\_Case\_Service to Normal\_Service when the micro controller finds the updated status of the resource primitive components at the source and destination FPGAs to be fault-free (MT15) as in Fig 3. The fault tolerant technique using available fault free components can be extended to the situation when multiple FPGA devices are interconnected. Assuming similar devices, the security operations are executed in different devices based on the primitive components availability. If not, the correct routing instruction will be issued by the micro controller based on its updated device table and corresponding resource tables. The reliability of the communication path between the devices is a major challenge in the design of the above multi FPGA based distributed embedded security system. The synchronization in the execution of primitive operations is taken care by the embedded algorithm residing in the micro controller.
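The MT1-MT15 transitions read naturally as a finite-state machine. The sketch below encodes them as a Python lookup table; the state names follow the text, but the event labels are our own paraphrases, not identifiers from the paper.

```python
# Transition table for the coordinating micro controller (MT1-MT15).
TRANSITIONS = {
    ("Null", "timer_expired"): "Get_Status",                              # MT1
    ("Get_Status", "all_status_read"): "Monitor_Req",                     # MT2
    ("Monitor_Req", "no_request"): "Monitor_Req",                         # MT3
    ("Monitor_Req", "request"): "Get_Info",                               # MT4
    ("Get_Info", "data_error"): "Monitor_Req",                            # MT5
    ("Get_Info", "info_acquired"): "Analyse_Requester_Status",            # MT6
    ("Analyse_Requester_Status", "all_fault_free"): "Normal_Service",     # MT7
    ("Normal_Service", "task_done"): "Monitor_Req",                       # MT8
    ("Analyse_Requester_Status", "resource_fault"): "Worst_Case_Service", # MT9
    ("Worst_Case_Service", "task_done"): "Monitor_Req",                   # MT10
    ("Normal_Service", "comm_error"): "Normal_Service",                   # MT11
    ("Normal_Service", "error_persists"): "Get_Info",                     # MT12
    ("Worst_Case_Service", "error"): "Worst_Case_Service",                # MT13
    ("Worst_Case_Service", "error_persists"): "Get_Info",                 # MT14
    ("Worst_Case_Service", "resources_recovered"): "Normal_Service",      # MT15
}

def step(state, event):
    """Advance the machine; unknown (state, event) pairs leave it in place."""
    return TRANSITIONS.get((state, event), state)

assert step("Null", "timer_expired") == "Get_Status"
```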

**Figure 4.** F Map for Power Aware Application.
### V. FMAP FOR RESOURCE SELECTION
The Farm map (F-Map) is an intelligent algorithm by which the micro controller identifies the next device that has the needed security primitive components. The mapping brings in the current status of the resources and fills the corresponding cells as ‘1’ in the START FPGA, which indicates that the DEVICE SELECT and the BIST for Encryption components are in failed states, as shown in Fig 5. The algorithm searches for the next device, depending upon whether the FPGAs are operated in power save mode or performance mode, in which all the needed components are fault free, indicated by ‘1’ cells for the encryption process. Similarly, the same algorithm is iterated for the decryption process to get the data packet encrypted and communicated to the destination. The decryption may take place in the destination FPGA device. If multiple ‘1’ cells appear at any time among different devices, then the controller selects the shortest path to minimize the communication delay and to avoid any attack on the device itself that is possible in the distributed FPGA architecture. The Device Select at the destination FPGA is in a failed state, which makes the micro controller iterate to the next FPGA with the required resources in fault-free state. The alternate FPGA for the decryption process is selected depending on whether the distributed FPGA architecture is operated in performance mode or power aware mode. ACTIVE\_DECRYPT\_FPGA1 is chosen when operating in performance aware mode and
ACTIVE_DECRYPT_FPGA2 is chosen while operating in power aware mode. The performance characteristics of the distributed FPGA vary with the operating mode. For quicker response, the distributed architecture may be operated in performance mode, which causes higher power dissipation in the device. While operating in power save mode, the propagation delay is high, but the power consumed is minimal, as shown in Table 1.
Propagation Delay = n * T, where n = R + C - 1, and R and C denote the row and column in which the FPGA is located in the DST.
| Order in DST (R,C) | Power Dissipation | Propagation Delay(ms) | Remarks |
|--------------------|-------------------|-----------------------|---------------|
| 1,1 | 6.41W | T | Performance |
| 2,3 | 4.23W | 4*T | Normal/Safe |
| 1,9 | 3.47W | 9*T | Power Aware |
| 6,12 | 7.9W | 17*T | Poor |
Figure 5. F Map for Power Aware Application.
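As a worked check of the delay model above (function and variable names are ours; T is the unit delay assumed by Table 1):

```python
def propagation_delay(r, c, t):
    """Delay for an FPGA at row r, column c of the DST: n * T, n = R + C - 1."""
    return (r + c - 1) * t

assert propagation_delay(2, 3, 1.0) == 4.0    # matches the (2,3) row: 4*T
assert propagation_delay(6, 12, 1.0) == 17.0  # matches the (6,12) row: 17*T
```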
VI. PERFORMANCE IN BEST CASE AND WORST CASE SCENARIO
In the distributed embedded system, if all the resources in the source and destination FPGAs are fault-free, then the micro controller chooses the normal operating mode, which is the best case scenario. The behavior of the embedded system during its best case performance is shown in Fig. 6 and its transitions are listed in Table 2. The behavior of the distributed embedded system varies when a resource in the source or destination FPGA is in a failed state, depending upon the mode of operation, viz. Performance Mode, Power Aware Mode, or Safe Mode; this worst case behavior is shown in Fig. 7.
The microcontroller iterates various paths from the source to the destination depending upon the mode of operation and selects the FPGAs in source or destination side, with its required resources in fault free condition.
Figure 6. Best Case Scenario.
| Transition ID | Events |
|---------------|------------------------------------------------------------------------|
| NT1 | If main state machine reaches NORMAL_SERVICE state |
| NT2 | Failure of getting source status || Failure of getting destination status |
| NT3 | After getting successful status of source and destination |
| NT4 | If not getting correct data from source |
| NT5 | If not getting correct data from source && reaches maximum tries |
| NT6 | Getting successful data from source |
| NT7 | Not sending correct data to destination |
| NT8 | If not sending correct data to destination |
| NT9 | If sending correct data to destination |
VII. MODEL CHECKING OF ARCHITECTURE USING NuSMV
There are two extreme cases in which the fault tolerant capability can be measured. In one case, all the needed security components are fail free and available in a single chip, so the communication delay is due only to transmit and receive signal propagation, making power consumption very high. The other extreme is when one set of encryption components is available in a nearby device whereas the decryption set of components is at the other end of the array or pool of devices, and vice versa [6]. This condition leads to a huge delay due to multiple enable, address and data signals along with multiple receive, transmit and device select signals. The worst case performance, where the microcontroller searches for the fail free encryption and decryption components, is verified using the symbolic model verifier as shown below. The model checking is done to check the reliability of the model during the worst case performance and to verify the availability of alternate resources to complete the specified task. The LTL and CTL operators are applied as specifications to verify the availability of devices and the needed security resources for the encryption and decryption processes, as given in the NuSMV code below:
NuSMV CODE FOR AVAILABILITY IN THE WORST CASE SCENARIO
```
MODULE main
VAR
state : { null_state, get_update_status, s_device_select,
get_data, d_device_select, send_data };
status: boolean;
source_device_select : boolean;
data_get : boolean;
data_sent : boolean;
time : boolean;
dest_device_select : boolean;
INIT
state=null_state;
ASSIGN
next(state):= case
state = null_state : get_update_status ;
state = get_update_status & !status : null_state;
state = get_update_status & status : s_device_select;
state = s_device_select & !source_device_select : get_update_status;
state = s_device_select & source_device_select : get_data;
state = get_data & !data_get & time : get_data;
state = get_data & !data_get & !time : get_update_status;
state = get_data & data_get : d_device_select;
state = d_device_select & !dest_device_select : get_update_status;
state = d_device_select & dest_device_select : send_data;
state = send_data & !data_sent & time : send_data;
-- The remaining branches and the closing 'esac' are assumed completions;
-- the original listing is truncated at this point.
state = send_data & !data_sent & !time : get_update_status;
state = send_data & data_sent : null_state;
TRUE : state;
esac;
```
The model checking algorithm reports true if the specification holds in every state of the system model. Otherwise, states not satisfying the specification are identified. A transition path from a defined initial state to a state identified as not satisfying the specification is called a counterexample [4]. The CTL properties of the proposed security farm architecture are verified using NuSMV for different specifications, and the results, with instances and a counterexample, are shown below:
**SPECIFICATIONS FOR THE NUSMV CODE USING LTL AND CTL PROPERTIES**
-- specification EF state = null_state is true
-- specification AG (AF (!time -> state = get_update_status)) is true
-- specification AG (EF (!status -> state=null_state)) is true
-- specification AG (AG (AF (time -> (!data_get -> state = get_update_status)))) is false
-- as demonstrated by the following sequence
Trace Description: CTL Counterexample
Trace Type: Counterexample
-> State: 1.1 <-
state = null_state
status = FALSE
source_device_select = FALSE
data_get = FALSE
data_sent = FALSE
time = FALSE
dest_device_select = FALSE
-> State: 1.2 <-
state = get_update_status
status = TRUE
-> State: 1.3 <-
state = s_device_select
status = FALSE
source_device_select = FALSE
time = TRUE
-- Loop starts here
-> State: 1.4 <-
state = get_data
source_device_select = FALSE
-> State: 1.5 <-
**VIII. CONCLUSION AND FUTURE WORK**
A distributed embedded architecture model for security hardware with fault tolerant features is proposed to tolerate non-availability faults and resource component faults. The model is a centralized system in which sequential control is exercised by a micro controller through the F-map algorithm embedded in it, and it can be scaled to a pool of multiple FPGA devices with power or performance awareness. The collective behavior of the architecture model is formally verified using a model checker based on LTL and CTL properties. The limitation of the proposed model is the communication overhead when all devices want to send data concurrently over the limited bandwidth of the UART channel. A further constraint is the assumption that the basic transmission and reception switches must always be fault free, even to report errors to the central controller. The actual implementation of the project with multiple FPGAs and triple micro controllers to enhance reliability is planned as future work.
**REFERENCES**
[1] G. Gogniat, T. Wolf and W. Burleson: *Reconfigurable Hardware for High-Security / High-Performance Embedded Systems: The SAFES Perspective*, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 16, No 2, February 2008, pp. 1-10.
[2] *Reconfigurable Security Architecture for Embedded Systems*, http://vcs.gi.eecs.umass.edu/essg/papers/MOCHASubmit.pdf, pp. 1-7.
[3] S. Hauck and A. Dehon: Morgan Kaufmann Publications, *Reconfigurable Computing*, pp 107-110.
[4] Sebastian Steinhorst: Dissertation, *Formal Verification Methodologies for Nonlinear Analog Circuits*, Frankfurt, 2011, pp. 55-69.
[5] A. Cimatti, E. Clarke, F. Giunchiglia and M. Roveri: *NuSMV: A New Symbolic Model Verifier*, pp. 1-5.
[6] G. K. Palshikar: *An Introduction to Model Checking*, html page, pp. 1-8.
[7] S. Ravi, A. Raghunathan, P. Kocher and S. Hattangady: *Security in Embedded Systems: Design Challenges*, ACM Transactions on Embedded Computing Systems, Vol 3, No. 3, August 2004, pp. 6-10.
[8] P. Quintana: *Fail-Safe FPGA Design Features for High-Reliability Systems*, Paper ID: 900566, IEEE 2009, pp. 3-5.
[9] A. Dandalis, V.K. Prasanna and D.P. Rolim: An Adaptive Cryptography Engine for IPSec Architectures, ACM Transactions on Design Automation of Electronic Systems, Vol. 9, July 2004, pp. 333-353.
Determining Authentication Strength for Smart Card-based Authentication Use Cases
Ramaswamy Chandramouli
Computer Security Division, Information Technology Lab
National Institute of Standards and Technology
Gaithersburg, MD, USA
firstname.lastname@example.org
Abstract - Smart cards are now being extensively deployed for identity verification (smart identity tokens) for controlling access to Information Technology (IT) resources as well as physical resources. Depending upon the sensitivity of the resources and the risk of wrong identification, different authentication use cases are being deployed. Assignment of authentication strength for each of the use cases is often based on: (a) the total number of three common orthogonal authentication factors – What You Know, What You Have and What You are – used in the particular use case and (b) the entropy associated with each factor chosen. The objective of this paper is to analyze the limitation of this approach and present a new methodology for assigning authentication strengths based on the strength of pair wise bindings between the five entities involved in smart card based authentications – the card (token), the token secret, the card holder, the card issuer and the person identifier stored in the card. The use of the methodology for developing an authentication assurance level taxonomy for a real world smart identity token deployment is also illustrated.
Keywords - Identity Verification; Smart Identity Token; Authentication Strength
I. INTRODUCTION
Smart cards are now being extensively deployed for identity verification for controlling access to Information Technology (IT) resources as well as physical resources [1,2,3]. We refer to them as Smart Identity Tokens and use the two terms interchangeably throughout this paper. These types of smart cards generally carry: (a) a Person Identifier (PI), (b) a Secret (TS), usually in the form of a cryptographic key [4], and (c) a Credential linking the Secret and the Identifier (CR). Along with these data, a PIN (a combination of numbers) is often used for: (a) activating the card (token) and (b) restricting access to certain data objects and operations. In some instances, presentation of live biometric data (such as a fingerprint) is used to enable the above functions instead of a PIN. In any enterprise deploying smart cards, there may be different types of resources that have to be protected by restricting access to only those whose identity is verified through a smart card based authentication mechanism. Depending upon the sensitivity of the resource and the risk associated with wrong identification of the entity requesting access to those resources, authentication mechanisms using different combinations of the three data types enumerated above (PI, TS or CR), with or without activation data, may be used. A set of authentication mechanisms used by an enterprise for controlling access to different types of resources (or, stated differently, different applications of the smart identity token) are called Authentication Use Cases.
In general (irrespective of whether a smart identity token is used or not), the choice of an Authentication Use Case in the context of an access control to a resource is often made based on authentication strength or assurance level associated with the token artifact used in the Authentication Use Case. These artifacts are: (a) an identifier specific to a domain and (b) a credential that is a combination of an identifier and a secret – examples for the latter being: (a) a PIN (b) a one-time password and (c) a cryptographic key. The usage of a token by a claimant during an authentication event results in a value called Authenticator that is generated by the token and is transmitted from the token to the authentication module or the verifier. The basis for designating an authentication strength associated with a token is a fundamental unit called “Authentication Factor”. There are three main authentication factors [5]:
- What the Entity Knows (e.g., Password, PIN, etc)
- What the Entity Has (e.g., possession of a token that generates one-time passwords)
- What the Entity Is (e.g., inherent physiological characteristic such as a Fingerprint)
A token that uses one of the above three factors is called a single factor token (e.g., a password that belongs to “What the Entity Knows” factor). A token that uses a combination of two or more of the above factors is called a multi-factor token. A smart card that contains an embedded private cryptographic key (thus using What the Entity Has authentication factor) that can be used to generate an authenticator when it is activated by a PIN (using the What the Entity Knows authentication factor) is deemed a multi-factor token. An authentication use case may use one or more tokens and hence may involve the use of one or more authentication factors. In general, the authentication strength associated with an authentication use case is determined based on the combination of the following metrics:
• The number of authentication factors used in the authentication use case
• The Entropy associated with each of the authenticator factor used
In this paper, we argue that the logic for assigning authentication strength based on the number of authentication factors in an authentication use case is valid only under certain limiting conditions and that these conditions do not hold in the case of authentication use cases using smart cards as identity tokens. This is the rationale for proposing a new methodology for assigning authentication strengths for various authentication use cases involving smart identity tokens.
The description of the conditions under which the number of authentication factors can be used as a reliable metric for authentication strength and an illustration of how those conditions do not hold in the case of smart cards are given in Section II. Section III discusses the basis vector that is applicable for smart card based identity verification approaches. The development of our methodology for determining authentication strengths for various smart card-based authentication use cases based on the basis vector referred to above is the topic of Section IV. The application of this methodology for assigning authentication strengths for building a taxonomy of authentication assurance levels for the set of authentication use cases specified for a major government smart card-based identity verification deployment is done in Section V. Section VI presents the benefits of our methodology and provides the conclusions.
II. LIMITATIONS OF AUTHENTICATION FACTOR-BASED APPROACH FOR DETERMINING AUTHENTICATION STRENGTHS
In order that the number of authentication factors is a valid metric for determining the authentication strength of an authentication use case, it must satisfy the following properties:
• AF-AS-P1: The authentication factors must be mutually independent. If there is any mutual dependency between any two authentication factors, then assuming the additive property is not valid for computing the metric indicating the authentication strength. This is not an issue as the three authentication factors – What You Know, What you Have and What you Are do not have any pair wise mutual dependency.
• AF-AS-P2: All authenticators used in the authentication use case must flow directly from the claimant to the verifier in the resulting authentication message protocol. This property must hold since any authentication decision by the verifier is based entirely on the outcome of the process of verifying one or more authenticators received from the claimant. Hence any authentication decision based on a lesser number of authenticators is certainly of lower authentication strength than an authentication decision using a higher number of authenticators.
We illustrate through an example that the second property is not satisfied in many smart card based authentication use cases deployed in real-world implementations [3,8]. For example, in an authentication use case called Challenge-Response, the smart card responds to a random challenge string sent by the authentication system by encrypting the string with its private key and sending the encrypted string back. Some cards are programmed to require the card holder to provide a PIN to perform this private key operation. This authentication use case is classified as two-factor authentication (since it involves demonstrating the presence of a secret cryptographic key (one factor) and the PIN (second factor)), although the only authenticator that flows to the authentication system (verifier) is the encrypted challenge. Thus we see that, in order to truly assess the authentication strength associated with smart card based authentication use cases, we need a basis vector other than just the number of authentication factors. To identify and derive such a basis vector, we need to look at the various basic entities that participate in authentication protocols using smart cards and the nature of the pair-wise bindings that exist among them. The logic for the development of these pair-wise bindings is described in the next section.
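For illustration only, the challenge-response exchange just described can be sketched with the Python `cryptography` package. A freshly generated RSA key stands in for the card's embedded private key, and a signature operation stands in for the card's private-key encryption of the challenge; none of this code comes from the paper or from any card specification.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Card side: the token secret (freshly generated here; on a real card the
# private key is embedded and never leaves the chip)
card_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Verifier side: issue a random challenge string
challenge = os.urandom(32)

# Card side: after PIN activation (not modeled here), produce the
# authenticator over the challenge with the private key
authenticator = card_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# Verifier side: check the authenticator against the public key taken from
# the card's certificate; verify() raises InvalidSignature on failure.
# Note that the PIN itself never flows to the verifier - only this one
# authenticator does, which is the point made above.
card_key.public_key().verify(authenticator, challenge,
                             padding.PKCS1v15(), hashes.SHA256())
print("challenge-response succeeded; card authenticated")
```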
III. DEVELOPMENT OF BASIS VECTOR FOR SMART CARD-BASED AUTHENTICATION USE CASES
Before we start using the pair-wise bindings as components of a basis vector for determining authentication strengths, we need to make a comprehensive list of the basic entities involved in them. These basic entities, building on the smart card contents we saw in the last section, are: the physical token (smart card), the card holder, the token secret, the card issuer and the person identifier. Note that we do not term the credential a basic entity, since the credential is a derived artifact providing the binding of two basic entities, the Person Identifier and the Token Secret. Before listing the pair-wise bindings, we observe that any authentication use case is itself built from primitive authentication usage modes, each of which uses one or more of three categories of smart card data: Person Identifier, Token Secret and Credential. Hence every pair-wise binding should trace its link to a primitive authentication usage mode and the associated smart card data used in that mode. This link is provided through the data in Table I. Table I, in addition to providing the bindings, also provides the strength associated with each binding based on the nature of the primitive authentication usage mode and the associated data used in it. Out of the six possible valid bindings, the person identifier participates in three, being associated with the card issuer (through a digital signature), the token secret (through a digital certificate) and the card holder (through a biometric object).
### TABLE I. SMART IDENTITY CARD – PRIMITIVE AUTHENTICATION USAGE MODES & BINDINGS
| Smart Card Data | Primitive Authentication Usage Mode | Pair-wise Bindings with associated strength |
|---------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------|
| Embedded Cryptographic Key (private key of an asymmetric Key Pair) – Token Secret | PUM-1: Verifying Presence of embedded token secret (tested by sending an input data from the Verifier and receiving an associated Authenticator) | Token–Token Secret Binding (Strong) |
| Embedded Cryptographic Key (private key of an asymmetric Key Pair) – Token Secret that requires an activation data to demonstrate its presence | PUM-2: Same as previous + card holder providing a PIN for generating the authenticator | 1. Token – Token Secret Binding (Strong)<br>2. Card Holder – Token Binding (Strong or Weak depending upon entropy of activation data) |
| Person Identifier | PUM-3: Person Identifier’s origin and integrity checked using its associated digital signature | Person Identifier–Card Issuer Binding (Strong) |
| Credential (A Public Key Certificate) linking the token secret to the Person Identifier | PUM-4: Trust in the certificate established through Certificate Validation | Token Secret – Person Identifier Binding (Strong) |
| Credential (A digitally signed Biometric Object) linking a Card Holder Trait (biometric) to the Person Identifier | PUM-5: The digital signature associated with biometric data object is verified. Live biometric sample sent to the card for matching with the stored biometric data | Card Holder – Person Identifier Binding (Strong or Weak depending upon how live sample is collected) |
### IV. METHODOLOGY FOR ASSIGNING AUTHENTICATION STRENGTHS FOR AUTHENTICATION USE CASES
In the previous section, we identified the primitive authentication usage modes and the bindings (along with their associated strength) enabled by those modes. An authentication use case that is used in a smart identity token deployment will be a combination of one or more of the primitive authentication usage modes. Now our final goal is the determination of authentication strength for a given authentication use case. In order to compute this value, we need to know the security properties satisfied and the weakness in each of the primitive authentication usage modes that constitute that authentication use case. The derivation of these security properties satisfied and weaknesses from the bindings (and their associated strengths) provided by each of the five primitive authentication usage modes (taking into consideration the state of smart card technology) is shown in Table II.
Now, based on the observation that the primitive authentication modes are independent of each other (except for PUM-2 which is a superset of PUM-1), the security properties associated with the set of primitive authentication usage modes constituting an authentication use case can all be added up to obtain the total set of security properties satisfied in an authentication use case.
Let us consider the following Authentication use case which we shall call as BIO-A:
1. The Authentication Module (Verifier) reads the signed biometric object on the card.
2. The digital signature of the biometric object is verified.
3. The Authentication station is attended by a guard under whose watch the claimant provides his/her fingerprint through a scanner present in the station.
4. The Live sample of the biometric is compared with the stored biometric data on the card.
5. When the match is successful, the person identifier extracted from the signed biometric object is compared with identifier stored in the identifier object on the card. The digital signature associated with identifier object is verified.
6. If the verification is successful, the identifier is sent to the Physical Access Control Server which in turn sends a signal to open the door leading to the facility controlled by the authentication station.
From the description of the above steps in our example authentication use case BIO-A, we find that steps 1-4 map to our primitive authentication usage mode PUM-5, and step 5 maps to our usage mode PUM-3. Hence, adding the properties associated with these primitive authentication usage modes, we find that the authentication use case BIO-A satisfies the following total set of properties:
1. Card Holder is authenticated (Strong – since the live sample is collected under a supervised condition ensuring freshness and hence no replay using duplicated fingerprints possible)
2. Validity of the Identifier is established
The security property set associated with an authentication use case can be used as a metric for establishing a partial order among the various authentication use cases specified for a smart card based identity verification deployment scenario. This partial order can then be used to construct an authentication assurance level taxonomy for that deployment instance.
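A minimal sketch of this methodology in Python, under the simplifying assumption that the Strong/Weak strength annotations are ignored: each primitive usage mode contributes a property set, a use case takes the union over its modes (the additive step), and set containment yields the partial order. The property strings paraphrase Table II and the mode lists follow Table III.

```python
# Property sets contributed by each primitive authentication usage mode.
PUM_PROPERTIES = {
    "PUM-1": {"card authenticated"},
    "PUM-2": {"card authenticated", "card holder authenticated"},
    "PUM-3": {"identifier validity established"},
    "PUM-4": {"token secret linked to identifier"},
    "PUM-5": {"card holder authenticated"},
}

def use_case_properties(modes):
    """Union of the property sets; valid because the modes are independent."""
    props = set()
    for mode in modes:
        props |= PUM_PROPERTIES[mode]
    return props

bio_a = use_case_properties(["PUM-5", "PUM-3"])
pki_auth = use_case_properties(["PUM-4", "PUM-2", "PUM-3"])

# Partial order by set containment: A is at least as strong as B when B's
# property set is contained in A's.
assert bio_a <= pki_auth   # PKI-AUTH dominates BIO-A in this simplified view
```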
### TABLE II. PRIMITIVE AUTHENTICATION USAGE MODES – BINDINGS AND SECURITY PROPERTIES

| Primitive Authentication Usage Mode | Bindings Established with associated strength | Security Properties Satisfied (WEAKNESS in CAPS) |
|------------------------------------|---------------------------------------------|-------------------------------------------------|
| PUM-1: Verifying Presence of embedded token secret (tested by sending an input data from the Verifier and receiving an associated Authenticator) | Token – Token Secret Binding (Strong) | 1. Card is Authenticated<br>2. STOLEN CARD<br>3. CARD HOLDER IS NOT AUTHENTICATED<br>4. NO LINK FROM TOKEN SECRET TO PERSON IDENTIFIER |
| PUM-2: Same as previous + sending an activation data to the token | (a) Token – Token Secret Binding (Strong)<br>(b) Card Holder – Token Binding (Strong or Weak depending upon activation data) | 1. Card is Authenticated<br>2. Card Holder is Authenticated (Strength based on Activation Data)<br>3. NO LINK FROM TOKEN SECRET TO PERSON IDENTIFIER |
| PUM-3: Person Identifier’s origin and integrity checked using its associated digital signature | Person Identifier – Card Issuer Binding (Strong) | 1. Validity of the Identifier is established<br>2. STOLEN CARD<br>3. CLONED CARD<br>4. CARD IS NOT AUTHENTICATED<br>5. CARD HOLDER IS NOT AUTHENTICATED<br>6. NO LINK FROM TOKEN SECRET TO PERSON IDENTIFIER |
| PUM-4: Trust is established on a credential (a public key certificate) linking embedded token secret and the person identifier through certificate validation | Token Secret – Person Identifier Binding (Strong) | 1. Link from Token Secret to Person Identifier established<br>2. STOLEN CARD<br>3. CLONED CARD<br>4. CARD IS NOT AUTHENTICATED<br>5. CARD HOLDER IS NOT AUTHENTICATED |
| PUM-5: Trust is established on a credential (signed biometric object containing the identifier in addition to biometric data) by verifying the digital signature. Live biometric sample sent to the card for matching with the stored biometric data | Card Holder – Person Identifier Binding (Strong or Weak depending upon how live sample is collected) | 1. Card Holder is Authenticated (Strength based on how live biometric sample is collected)<br>2. CLONED CARD<br>3. CARD IS NOT AUTHENTICATED |
V. ILLUSTRATION OF METHODOLOGY FOR A REAL WORLD SMART IDENTITY TOKEN DEPLOYMENT
In this section, we illustrate the application of our methodology for the assignment of authentication strengths for authentication use cases used in a real world smart identity token deployment scenario. The first step of our methodology is to express each authentication use case specified for the deployment in terms of our primitive authentication usage modes. This automatically provides us with the total set of security properties associated with that authentication use case. We then use property set containment to derive a partial order among the authentication use cases and finally derive an authentication assurance level taxonomy for the entire smart identity token deployment. The real world smart identity token deployment we have chosen for our illustration is the implementation of the Personal Identity Verification (PIV) smart card for controlling physical access to federal facilities and logical access to U.S. government IT systems [7,8]. For the sake of space and brevity, we do not describe each of the authentication use cases in the PIV deployment scenario, nor do we illustrate the process by which our primitive authentication usage modes are composed to obtain a PIV authentication use case. These liberties have been taken since our final goal is simply to illustrate the use of our methodology for developing an authentication assurance level taxonomy. Table III below provides a compilation of all the PIV authentication use cases [8], the list of primitive authentication usage modes that comprise each use case, and the total set of security properties satisfied by each authentication use case specified in a PIV deployment instance.
### TABLE III. PIV AUTHENTICATION USE CASES – PRIMITIVE USAGE MODES AND PROPERTIES SATISFIED

| PIV Authentication Use Case | Set of Primitive Authentication Usage Modes involved | Properties Satisfied |
|-----------------------------|---------------------------------------------------|----------------------|
| Authentication using PIV CHUID (CHUID) | PUM-3: Identifier’s origin and integrity checked using its associated digital signature | 1. Validity of the Identifier is established |
| Unattended Authentication using PIV Biometric (BIO) | PUM-5: Trust is established on a credential (signed biometric object containing the identifier in addition to biometric data) by verifying the digital signature. Live biometric sample sent to the card for matching with the stored biometric data<br>PUM-3: Identifier’s origin and integrity checked using its associated digital signature | 1. Card Holder is authenticated (Weak)<br>2. Validity of the Identifier is established |
| Attended Authentication using PIV Biometric (BIO-A) | PUM-5: Trust is established on a credential (signed biometric object containing the identifier in addition to biometric data) by verifying the digital signature. Live biometric sample sent to the card for matching with the stored biometric data<br>PUM-3: Identifier’s origin and integrity checked using its associated digital signature | 1. Card Holder is authenticated (Strong)<br>2. Validity of the Identifier is established |
| Authentication using PIV Asymmetric Cryptography (PKI-AUTH) | PUM-4: Trust is established on a credential (a public key certificate) linking embedded token secret and the person identifier through certificate validation<br>PUM-2: Verifying presence of embedded token secret (tested by sending an input data from the Verifier and receiving an associated Authenticator) (derived from PUM-1) + sending an activation data of robust strength to the token<br>PUM-3: Identifier’s origin and integrity checked using its associated digital signature | 1. Link from Token Secret to Identifier established<br>2. Card Holder is authenticated (Strong)<br>3. Card is Authenticated<br>4. Validity of the Identifier is established |
| Authentication using Card Authentication Certificate Credential (PKI-CAK) | PUM-4: Trust is established on a credential (a public key certificate) linking embedded token secret and the person identifier through certificate validation<br>PUM-1: Verifying presence of embedded token secret (tested by sending an input data from the Verifier and receiving an associated Authenticator)<br>PUM-3: Identifier’s origin and integrity checked using its associated digital signature | 1. Link from Token Secret to Identifier established<br>2. Card is Authenticated<br>3. Validity of the Identifier is established |
Based on the property containment relationship between the various PIV authentication use cases, we derive a partial order and use that partial order to develop a complete authentication assurance level taxonomy. The taxonomy thus derived is shown in Figure 1 below:
VI. CONCLUSIONS AND BENEFITS
The observation that not all authenticators flow between the smart identity token and the authentication module (verifier) has driven the need for a new basis vector, other than just the number of authentication factors, and an associated methodology for assigning authentication strengths for various authentication use cases involving smart cards. In this paper, we developed such a methodology, which uses pair-wise bindings between the five entities involved in smart identity token authentication use cases, i.e., the token (card), the token secret, the card holder, the card issuer and the person identifier, as the basis for deriving a set of properties satisfied by each primitive authentication usage mode. The primitive authentication usage modes are in turn identified based on the types of data a smart identity token usually holds. Next, we illustrated the process of expressing an authentication use case as a combination of primitive authentication usage modes and, using the additive properties associated with each usage mode, derived the total set of properties satisfied by an authentication use case. Finally, the property set associated with an authentication use case is used to derive a partial order among the use cases. This partial order was then used to derive an entire authentication assurance level taxonomy for a smart identity token deployment scenario. The advantages of this approach are: (a) it takes into account all entities participating in the authentication protocol (the five referred to earlier) and the pair-wise bindings between them, and (b) it considers technology-specific weaknesses (e.g., a token can be stolen or cloned) that may affect the security properties satisfied in each primitive authentication usage mode and, by extension, in an authentication use case.
|
Steroid-Induced Sex Determination at Incubation Temperatures Producing Mixed Sex Ratios in a Turtle with TSD
THANE WIBBELS\(^1\) AND DAVID CREWS
Institute of Reproductive Biology, Department of Zoology, University of Texas at Austin, Austin, Texas 78712
Accepted June 7, 1995
Previous studies have shown that exogenous steroid hormones can affect sex determination in reptiles with temperature-dependent sex determination (TSD). These studies have also suggested that the sensitivity of TSD to exogenous steroids may vary with incubation temperature. The majority of these studies, however, have utilized incubation temperatures producing all males or all females in the control groups, rather than temperatures which produced mixed sex ratios in control groups. The goals of the current study were to examine the effects of steroids on sex determination in a turtle (*Trachemys scripta*) at temperatures which produced mixed sex ratios in the control groups. Collectively, the results of single-treatment experiments indicate that at incubation temperatures producing mixed sex ratios in control groups, (1) estradiol-17β, tamoxifen, norethindrone, and testosterone all showed a similar “type” of effect (i.e., feminizing) as in previous studies utilizing male-producing temperatures, (2) sex determination has significantly increased sensitivity to estradiol-17β in comparison to its effect at temperatures producing all males, and (3) sex determination is sensitive to the masculinizing effects of dihydrotestosterone (DHT) (in previous studies utilizing female-producing temperatures, DHT did not affect sex determination). Last, a set of double-treatment experiments was performed in which eggs received both estradiol-17β and DHT treatments. No significant increases in the production of males were detected. Significant increases in the production of females were detected, but only in the groups receiving the highest dosage of estradiol-17β (1.0 μg). This contrasts with the results of the single-treatment experiments in which lower dosages of estradiol-17β were effective (0.1 and 0.01 μg), thus suggesting that DHT in some way decreases the effectiveness of estradiol-17β. Further, a number of hatchlings in the double-treatment experiments developed intersex gonads (i.e., the gonads had well-developed medullary and cortical regions), suggesting that cortical and medullary development of the gonads are not mutually exclusive.
A variety of past studies have shown that estrogen, estrogen-related compounds, and testosterone can induce female sex determination at male-producing temperatures in reptiles with temperature-dependent sex determination (TSD) (Pieau, 1974; Raynaud and Pieau, 1985; Gutzke and Bull, 1986; Bull *et al.*, 1988; Crews *et al.*, 1989, 1991; Dorizzi *et al.*, 1991; Lance and Bogart, 1991, 1992; Wibbels and Crews, 1992, 1994). It has been hypothesized that steroid-induced sex determination is an estrogen-specific event and the feminizing effect of testosterone may be due to its aromatization to estrogen (Crews *et al.*, 1989; Dorizzi *et al.*, 1991; Desvages and Pieau, 1991, 1992a,b; Wibbels and Crews, 1992; Wibbels *et al.*, 1994). This hypothesis is supported by recent studies showing that aromatase inhibitors can induce male sex determination in a turtle with TSD (Wibbels and Crews, 1994; Crews and Bergeron, 1994) and have disrupted ovarian development in alligators (Lance and Bogart, 1992). Several studies have also indicated that putative estrogen antagonists such as tamoxifen and norethindrone can act as estrogen agonists in this sex determination system (Lance and Bogart, 1991; Wibbels and Crews, 1992). Further, one study suggests that the sensitivity of sex determination to estradiol-17β varies with incubation temperature (Wibbels *et al.*, 1991b), with
\(^1\) To whom correspondence and reprint requests should be addressed at current address: Department of Biology, University of Alabama at Birmingham, Birmingham, AL 35294-1170. Fax: (205)975-6097. E-mail: Internet, firstname.lastname@example.org.
the greatest sensitivity occurring at temperatures which produced mixed sex ratios in control groups. This may also be the case with the putative masculinizing effect of the nonaromatizable androgen dihydrotestosterone (DHT), which has been reported to induce male sex determination when utilizing an incubation regimen in which eggs were shifted from male-producing to female-producing temperatures late in the temperature-sensitive period, resulting in a 1:1 sex ratio in control groups (Wibbels et al., 1992). However, DHT does not appear to affect sex determination at incubation temperatures which produce all females (Crews et al., 1989; Wibbels and Crews, 1992). Thus, the effects of steroids on sex determination can significantly vary depending upon the incubation temperature utilized. Considering these latter findings and the fact that the great majority of past studies have utilized temperatures producing either all males or all females in control groups, there is a need for studies that examine the effects of steroids at temperatures producing mixed sex ratios.
The current study addresses several of the hypotheses reviewed above by examining (1) if sex determination is sensitive to DHT at constant incubation temperatures which produce mixed sex ratios in control groups, (2) if the feminizing effects of estradiol-17β increase at incubation temperatures which produce mixed sex ratios in control groups, and (3) if several reputed estrogen antagonists and the androgen testosterone show similar effects at incubation temperatures which produce mixed sex ratios in control groups in comparison to their previously described effects at male-producing temperatures (i.e., they exhibit feminizing effects at male-producing temperatures). Last, the current study also includes a double-treatment experiment which examines if masculinizing effects of DHT and feminizing effects of estradiol-17β (respectively) can be simultaneously stimulated at a mixed sex ratio incubation temperature.
MATERIALS AND METHODS
Freshly laid eggs from the red-eared slider, *Trachemys scripta*, were obtained commercially (Robert Kliebert, Hammond, LA). After transport to our laboratory, they were placed in containers with moistened vermiculite (vermiculite:water, 1:2), which were placed in incubators set at 28.6, 29.0, or 29.2°. Temperature set points were based on thermometers that had been calibrated against an NIST-traceable thermometer. Temperatures were recorded at least twice daily. Based on the twice-daily readings, typical temperature variations around a given set point were approximately ±0.15° (standard deviation).
Previous studies indicated that continuous incubation at 31° produces all female hatchlings, whereas 26° produces all male hatchlings and incubation temperatures near 29.0° produce mixed sex ratios (Bull et al., 1982; Crews et al., 1991; Wibbels et al., 1991a; Wibbels, Bull, and Crews, unpublished data). As such, the three temperatures used in the current experiment (28.6, 29.0, and 29.2°) were chosen in an effort to obtain mixed sex ratios in the control groups.
Embryonic development was monitored by candling eggs and by dissecting two to four eggs approximately twice a week to verify specific developmental stages, based on criteria described by Yntema (1968). When embryos reached stage 17–18, a time period within the temperature-sensitive window in this species (Wibbels et al., 1991a), the eggs were randomized into control and experimental groups. A series of single-treatment and double-treatment experiments was performed. In single-treatment experiments, eggs from experimental groups received a single treatment of a specific ligand (i.e., hormone, agonist, or antagonist) suspended in 5 μl 95% ethanol. Specifically, the effects of estradiol-17β (dosages = 0.001, 0.01, 0.1, and 1.0 μg) and dihydrotestosterone (i.e., 5α-androstan-17β-ol-3-one, or DHT; 1.0, 10.0, 100.0, and 200.0 μg) were examined at all three temperatures. Furthermore, three additional ligands were examined at 29.2° (the temperature which produced an approximate 1:1 sex ratio): tamoxifen at dosages of 10.0 and 100.0 μg, norethindrone at dosages of 1.0, 10.0, and 100.0 μg, and testosterone at dosages of 10.0, 100.0, and 200.0 μg. All ligands were obtained from Sigma (St. Louis, MO). The dosages chosen for each ligand were based on previous studies with turtles (Gutzke and Bull, 1986; Bull et al., 1988; Crews et al., 1989, 1991; Wibbels and Crews, 1992). Eggs in control groups received a single treatment consisting of 5 μl 95% ethanol. A series of double-treatment experiments was carried out at a single incubation temperature (29.0°) in which eggs received various dosages of both estradiol-17β and DHT (Table 1). Previous studies suggest that a female-producing temperature can negate any possible masculinizing effect of DHT (Crews et al., 1989; Wibbels and Crews, 1992; Wibbels et al., 1992); therefore, a temperature producing a male bias in control groups (29.0°) was utilized. Each of the two treatments was delivered in 5 μl ethanol as described above, and control eggs received two 5-μl treatments of ethanol.
All treatments were applied topically to the vascularized portion of the upper shell (Crews et al., 1991). Groups of approximately 30 eggs were used for each treatment group. After receiving treatments, all eggs were placed back into their respective incubators until they hatched. Turtles were euthanized approximately 2–4 weeks after hatching. Gonadal sex was assessed by examination of the reproductive
tracts under a dissection microscope. In the great majority of cases the gonads of hatchling *T. scripta* were well differentiated and appeared distinctly testicular or ovarian when viewed under a dissection microscope (Crews et al., 1991; Wibbels et al., 1991a). Ovaries are long and flat whereas testes are, relative to the ovaries, shorter, round, and have visible seminiferous tubules (see Crews et al., 1991). In cases where the gonads did not appear distinctly male or female, gonads were examined histologically. Histological analysis included fixation in Bouin’s solution and paraffin embedding, followed by 8.0-μm sectioning and hematoxylin/PAS staining (Humason, 1972).
The sex ratios from the dosage groups within each treatment were initially pooled at each temperature and compared to the respective control group using Fisher exact tests. If significance was detected, Fisher exact tests were then used to compare the sex ratio of each dosage of a particular treatment to the respective control group. Additionally, Fisher exact tests were used to compare dosages within a treatment group to one another to determine if certain dosages were more effective than others. Logistic regression of sex ratios from the single-treatment experiments was used to compare the effects of given hormones at the different temperatures (Chatterjee and Price, 1977; Gabriel, 1978; Sokal and Rohlf, 1981; see detailed example by Crews et al., 1995). Specifically, the effect of each hormone (estradiol-17β or DHT) within each temperature group was determined using polynomial logistic regression. Regression coefficients from the different temperatures were then compared to one another by computing lower and upper comparison limits (Gabriel, 1978). That is, coefficients were significantly different if their comparison limits did not overlap. These analyses can reveal a synergism between temperature and hormone treatment if the $b_1$ regression coefficients at different temperatures are significantly different from one another.
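The comparison-limit step can be sketched in a few lines; this is our own illustration rather than the study's code: the counts are invented, the polynomial is truncated at degree 2 (the study fit up to degree 4), and the multiplier `m` is a stand-in for the value Gabriel's (1978) procedure prescribes:

```python
import numpy as np
import statsmodels.api as sm

# Invented counts for illustration: female hatchlings per dosage group
# of estradiol-17beta at one incubation temperature (not study data).
dose = np.array([0.0, 0.001, 0.01, 0.1, 1.0])      # micrograms
n_female = np.array([3, 5, 12, 20, 28])
n_total = np.array([30, 30, 30, 30, 30])

# Polynomial logistic regression:
#   logit P(female) = b0 + b1*dose + b2*dose**2
X = sm.add_constant(np.column_stack([dose, dose ** 2]))
fit = sm.GLM(np.column_stack([n_female, n_total - n_female]), X,
             family=sm.families.Binomial()).fit()

# Comparison limits b +/- m*se; refitting at each temperature and
# checking whether the intervals overlap mirrors the paper's test.
m = 1.96  # placeholder; Gabriel (1978) gives the proper multiplier
limits = list(zip(fit.params - m * fit.bse, fit.params + m * fit.bse))
for coef, (lo, hi) in zip(fit.params, limits):
    print(f"b = {coef:10.3f}   [l = {lo:10.3f}, u = {hi:10.3f}]")
```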
**RESULTS**
The results of the single-treatment experiments are shown in Figs. 1 and 2. The pooled sex ratios from the estradiol-17β dosages at each temperature were significantly different from their respective control groups ($P < 0.001$). A comparison of each dosage group to the control groups revealed that at the 28.6° incubation temperature each of the three highest dosages of estradiol-17β (0.01, 0.1, and 1.0 μg) resulted in the production of significantly more females than in the control groups (Fisher exact tests, $P < 0.01$), whereas the lowest dosage (0.001 μg) did not significantly induce female sex determination ($P > 0.05$). At 28.6° a distinct dose response was evident, with each dosage of estradiol-17β producing significantly more females than the next lower dosage ($P < 0.05$). At the 29.0 and 29.2° incubation temperatures, each of the four dosages of estradiol-17β resulted in the production of significantly more females than in the control groups (Fisher exact tests, $P < 0.01$). Comparisons of sex ratios from different dosages from those two temperatures did not reveal any clear dose-response patterns. Logistic regression analysis revealed a synergistic effect of temperature and estradiol-17β, with estradiol-17β exerting a significantly greater effect on sex determination at 29.0 and 29.2° in comparison to the results at 28.6° (see Table 2). Additionally, pooled sex ratios from the different dosage groups as well as the individual sex ratios from each dosage group of tamoxifen, norethindrone, and testosterone at the 29.2° incubation temperature (Fig. 2) produced significantly more females than in the control groups (Fisher exact tests, $P < 0.01$). At the 28.6 and 29.0° incubation temperatures, no significant differences were detected between the sex ratios of the control groups and the pooled DHT-treated groups from each temperature (Fisher exact tests, $P > 0.05$), whereas at 29.2° the pooled DHT-treated groups produced significantly more males than the control groups ($P = 0.001$). A comparison of the sex ratio of each dosage group to the controls revealed that each of the three highest dosages of DHT resulted in the production of significantly more males than in the control groups (Fisher exact tests, $P < 0.01$). Comparison of individual dosages of DHT at 29.2° indicated that each of the three highest dosages (10, 100, and 200 μg) produced significantly more males than the lowest DHT dosage (1.0 μg). The sex ratios of the three highest dosages of DHT at 29.2° were not significantly different from one another ($P > 0.05$). Logistic regression revealed no synergistic effect of DHT and temperature ($P > 0.05$).
**TABLE 1**
Summary of Double-Treatment Experiments in Which Eggs Received Estradiol-17β and Dihydrotestosterone (DHT) Treatments
| Group | DHT (μg) | Estradiol-17β (μg) | n | ♂ | ♀ | Intersex (♂/♀) | % ♂ | FET |
|-------|----------|----------------|-----|----|----|------|-----|-----|
| Control | — | — | 26 | 21 | 5 | 0 | 80.7| — |
| 1 | 1.0 | 0.01 | 25 | 21 | 4 | 0 | 84.0| ns |
| 2 | 1.0 | 0.10 | 29 | 21 | 8 | 0 | 72.4| ns |
| 3 | 1.0 | 1.00 | 28 | 10 | 17 | 1 | 35.7| ** |
| 4 | 10.0 | 0.01 | 22 | 22 | 0 | 0 | 100.0| ns |
| 5 | 10.0 | 0.10 | 26 | 23 | 3 | 0 | 88.5| ns |
| 6 | 10.0 | 1.00 | 26 | 13 | 13 | 0 | 50.0| * |
| 7 | 100.0 | 0.01 | 29 | 28 | 1 | 0 | 96.5| ns |
| 8 | 100.0 | 0.10 | 27 | 19 | 6 | 2 | 70.4| ns |
| 9 | 100.0 | 1.00 | 28 | 6 | 15 | 7 | 20.7| *** |
*Note.* Incubation temperature was maintained at 29.0°. ♂/♀, intersex gonads; FET, Fisher exact test comparing sex ratio of each group to control; ns, nonsignificant; *$P < 0.05$; **$P < 0.01$; ***$P < 0.001$.
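The FET column can be spot-checked from the counts with a short script; this is our own reconstruction, and since the note does not state how intersex animals entered the published tests, the sketch simply compares the male and female counts of each group against the control:

```python
# Our reconstruction (assumption: intersex animals excluded from the
# 2x2 tables): Fisher exact test of each double-treatment group's sex
# ratio against the control group, using counts from Table 1.
from scipy.stats import fisher_exact

control = (21, 5)           # control group: (males, females)
groups = {                  # group number: (males, females)
    3: (10, 17),            # 1.0 ug DHT + 1.00 ug estradiol-17b
    6: (13, 13),            # 10.0 ug DHT + 1.00 ug estradiol-17b
    9: (6, 15),             # 100.0 ug DHT + 1.00 ug estradiol-17b
}

for g, (males, females) in groups.items():
    _, p = fisher_exact([list(control), [males, females]])
    print(f"group {g}: P = {p:.4f}")
```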
### Table 2
**Separate Regression Results (in Columns) for the Effects of Increasing Estradiol-17β Concentrations on Hatchling Sex Ratios at Three Different Incubation Temperatures**
| Regression coefficients | 28.6° | 29.0° | 29.2° |
|-------------------------|-------|-------|-------|
| $B_0$ | $-3.38 \pm 0.79^a$ | $-1.30 \pm 0.46^{a,b}$ | $-0.14 \pm 0.38^b$ |
| | $(LR\chi^2 = 18.21, P < 0.001)$ | $(LR\chi^2 = 7.96, P = 0.005)$ | $(LR\chi^2 = 0.14, P = 0.71)$ |
| | $[l = -4.73, u = -2.03]$ | $[l = -2.09, u = -0.51]$ | $[l = -0.79, u = 0.51]$ |
| $B_1$ | $298 \pm 104^a$ | $2584 \pm 711^b$ | $2887 \pm 929^b$ |
| | $(LR\chi^2 = 8.23, P = 0.004)$ | $(LR\chi^2 = 13.22, P < 0.001)$ | $(LR\chi^2 = 9.65, P = 0.002)$ |
| | $[l = 120, u = 475]$ | $[l = 1367, u = 3801]$ | $[l = 1297, u = 4477]$ |
| $B_2$ | $-2767 \pm 1068^a$ | $-237,631 \pm 76,363^b$ | $-304,414 \pm 101,462^b$ |
| | $(LR\chi^2 = 6.72, P = 0.009)$ | $(LR\chi^2 = 9.68, P = 0.002)$ | $(LR\chi^2 = 9.0, P = 0.003)$ |
| | $[l = -4594, u = -940]$ | $[l = -368,288, u = -106,974]$ | $[l = -478,015, u = -130,813]$ |
| $B_3$ | $2476 \pm 965^a$ | $2,330,304 \pm 762,527^b$ | $3,042,939 \pm 1,031,432^b$ |
| | $(LR\chi^2 = 6.59, P = 0.01)$ | $(LR\chi^2 = 9.34, P = 0.002)$ | $(LR\chi^2 = 8.7, P = 0.003)$ |
| | $[l = 825, u = 4127]$ | $[l = 1,025,620, u = 3,634,988]$ | $[l = 1,278,159, u = 4,807,719]$ |
| $B_4$ | ns | $-2,095,250 \pm 686,874^b$ | $-2,741,399 \pm 932,462^b$ |
| | | $(LR\chi^2 = 9.31, P = 0.002)$ | $(LR\chi^2 = 8.64, P = 0.003)$ |
| | | $[l = -3,270,491, u = -920,009]$ | $[l = -4,336,841, u = -1,145,957]$ |
*Note.* Regression coefficients ± 1 standard error with $\chi^2$ and probability values in parentheses; there is 1 df for each regression coefficient. Regression coefficients with different superscripted letters are significantly different at a level of $\alpha = 0.05$ for different incubation temperatures. The coefficients are significantly different if their comparison limits in brackets do not overlap. $B_0$ represents the effect of temperature. Synergism of temperature and estradiol-17β is present if $B_1$ is significantly greater at the intermediate temperatures (29.0 and 29.2°) in comparison to the baseline temperature (28.6°). ns, not significant.
In the double-treatment experiments (Table 1), only the highest dosage of estradiol-17β (1.0 μg), when combined with each of the three dosages of DHT, resulted in the production of significantly more females than in the control groups (Fisher exact tests; see Table 1). The two lower dosages of estradiol-17β (0.01 and 0.10 μg) did not significantly increase the production of females when combined with any of the DHT dosages (see Table 1). A number of intersex gonads resulted from the double-treatment experiments, including 7 of 28 in the group receiving 1.0 μg of estradiol-17β and 100 μg of DHT. A longitudinal section of an intersex gonad from that treatment group is shown in Fig. 3. The gonad has both a distinct cortex and distinct seminiferous tubules in the medullary region. Normal male hatchlings lack a cortex, and normal female hatchlings lack developing seminiferous tubules in the medullary region of the gonad (see Wibbels *et al.*, 1991a, for detailed photographs of normal hatchlings).
**DISCUSSION**
The results from the current study support and extend several of the hypotheses concerning the effects of steroids on TSD. First, comparisons of the sex ratios produced at all three temperatures indicate that the sensitivity of sex determination to estradiol-17β is significantly greater at the temperatures producing mixed sex ratios in control groups in comparison to the temperature which produced all males in the control group. In fact, the effects of estradiol-17β and temperature were shown to exert a synergism on sex determination at the temperatures producing mixed sex ratios in the control groups. These findings are consistent with the results of a past study comparing the effects of estradiol-17β at a cooler male-producing temperature (26°) to a higher temperature which produced primarily males in the control group (28.2°, sex ratio = 1 female:30 males) (Wibbels *et al.*, 1991b). Additionally, sex determination
and testosterone treatments are consistent with the hypothesis that steroid-induced female sex determination is an estrogen-specific event and that the effects of testosterone may be mediated via aromatization to estrogen (Crews et al., 1989; Dorizzi et al., 1991; Wibbels and Crews, 1992, 1994; Wibbels et al., 1994; Crews and Bergeron, 1994; Crews et al., 1995). Tamoxifen and norethindrone induced female sex determination in the current study at the temperature producing a near 1:1 sex ratio (29.2°). These data are similar to those of past studies using male-producing temperatures (Crews et al., 1989; Lance and Bogart, 1991; Wibbels and Crews, 1992), thus indicating that estrogen-related compounds (including some compounds reputed to be estrogen antagonists) consistently induce female sex determination at temperatures producing mixed sex ratios in control groups as well as at temperatures which produce all males in control groups. Additionally, the aromatizable androgen testosterone induced female sex determination, whereas the nonaromatizable androgen DHT induced male sex determination.
The results of the DHT treatments support the hypothesis that DHT can have a masculinizing effect on sex determination at temperatures which produce mixed sex ratios. DHT significantly increased the production of males in the current study at 29.2°. In previous studies utilizing female-producing incubation temperatures, DHT did not affect sex determination (Crews et al., 1989; Wibbels and Crews, 1992). The masculinizing effect of DHT on sex determination has been reported in a previous study which utilized a temperature shift regimen that resulted in an approximate 1:1 sex ratio in control groups (Wibbels et al., 1992); however, the current study is the first to show this masculinizing effect at a constant incubation temperature. Interestingly, the masculinizing ability of DHT does not appear to be as robust as the feminizing ability of estrogen, since temperature regimes resulting in mixed sex ratios coupled with relatively high dosages of DHT must be used. Further, regardless of the DHT dosage, it is difficult to predictably obtain masculinization of all embryos in a treatment group.
The results of the double-treatment experiments lend support to the hypothesis that estradiol-17β and DHT have opposite effects on sex determination. Although no significant production of males was detected, it is of particular interest that the lower dosages of estradiol-17β (0.01 and 0.1 μg) given in combination with various dosages of DHT did not significantly induce female sex determination. This contrasts with the results of the single-treatment estradiol-17β experiments at 29.0°, in which the lower dosages (0.01 and 0.1 μg) were effective, and suggests that DHT decreases the effectiveness of estradiol-17β. However, caution should be used in comparing these two experiments, since they were conducted independently (although the control groups were similar, 21 males:5 females versus 22 males:6 females). Last, it is noteworthy that the group receiving the highest dosages of both DHT and estradiol-17β had a relatively large number of intersex gonads (7 of 28) with distinct cortical as well as medullary development. Intersex gonads have been reported previously in embryonic reptiles treated with estradiol-17β (Pieau, 1974; Raynaud and Pieau, 1985; Dorizzi et al., 1991) or with tamoxifen and estradiol-17β (Dorizzi et al., 1991), and such findings suggest that cortical and medullary development are not mutually exclusive.
ACKNOWLEDGMENTS
We acknowledge the technical assistance of A. Alexander, H. Smith, J. Wilcox, J. Huang, and J. Bergeron. We also thank T. Rhen for assistance with the computer analysis of the data and Charles Wilson for help with photomicrography. This research was supported by NIH NRSA Fellowship HD-07319 to T.W., NIH NRSA Training Grant HD-07264 to the Institute of Reproductive Biology, and by NSF BSR-9205207 and NIMH Research Scientist Award 00135 to D.C.
REFERENCES
Bull, J. J., Gutzke, W. H. N., and Crews, D. (1988). Sex reversal in three reptilian orders. *Gen. Comp. Endocrinol.* 70, 425–428.
Bull, J. J., Vogt, R. C., and McCoy, C. J. (1982). Sex determining temperatures in turtles: A geographic comparison. *Evolution* 36, 326–332.
Chatterjee, S., and Price, B. (1977). “Regression Analysis by Example” Wiley, New York.
Crews, D., and Bergeron, J. M. (1994). Role of reductase and aromatase in sex determination in the red-eared slider (*Trachemys scripta*), a turtle with temperature-dependent sex determination. *J. Endocrinol.* 143, 279–289.
Crews, D., Bergeron, J. M., Bull, J. J., Flores, D., Tousignant, A., Skipper, J. K., and Wibbels, T. (1994). Temperature-dependent sex determination in reptiles: Proximate mechanisms, ultimate outcomes, and practical application. *Dev. Genet.* 15, 297–312.
Crews, D., Bull, J. J., and Wibbels, T. (1991). Estrogen and sex reversal in turtles: A dose-dependent phenomenon. *Gen. Comp. Endocrinol.* 81, 357–364.
Crews, D., Cantu, A., Bergeron, J. M., and Rhen, T. (1995). Effects of testosterone and androstenedione on temperature-dependent sex determination in the red-eared slider turtle (*Trachemys scripta*). *Gen. Comp. Endocrinol.*, in press.
Crews, D., Wibbels, T., and Gutzke, W. H. N. (1989). Action of sex steroid hormones on temperature-induced sex determination in the snapping turtle (*Chelydra serpentina*). *Gen. Comp. Endocrinol.* 76, 159–166.
Desvages, G., and Pieau, C. (1991). Steroid metabolism in gonads of turtle embryos as a function of the incubation temperature of the eggs. *J. Steroid Biochem. Mol. Biol.* 39, 203–213.
Desvages, G., and Pieau, C. (1992a). Aromatase activity in gonads of turtle embryos as a function of the incubation temperature of the egg. *J. Steroid Biochem. Mol. Biol.* 41, 851–853.
Desvages, G., and Pieau, C. (1992b). Time required for temperature-induced changes in gonadal aromatase activity and related gonadal structures in turtle embryos. *Differentiation* 52, 13–18.
Dorizzi, M., Mignot, T.-M, Guichard, A., Desvages, G., and Pieau, C. (1991). Involvement of estrogens in sexual differentiation of gonads as a function of temperature in turtles. *Differentiation* 47, 9–17.
Gabriel, K. R. (1978). A simple method of multiple comparisons of means. *J. Am. Stat. Assoc.* 73, 724–729.
Gutzke, W. H. N., and Bull, J. J. (1986). Steroid hormones reverse sex in turtles. *Gen. Comp. Endocrinol.* 64, 368–372.
Humason, G. L. (1972). “Animal Tissue Techniques.” Freeman, San Francisco.
Lance, V. A., and Bogart, M. H. (1991). Tamoxifen ‘sex reverses’ alligator embryos at male-producing temperature, but is an antiestrogen in female hatchlings. *Experientia* 47, 263–266.
Lance, V. A., and Bogart, M. H. (1992). Disruption of ovarian development in alligator embryos treated with an aromatase inhibitor. *Gen. Comp. Endocrinol.* 86, 59–71.
Pieau, C. (1974). Différenciation du sexe en fonction de la température chez les embryons d'*Emys orbicularis* L. (Chélonien); effets des hormones sexuelles. *Ann. Embryol. Morphog.* 7, 365–394.
Raynaud, A., and Pieau, C. (1985). Embryonic development of the genital system. In "Biology of the Reptilia" (C. Gans and F. Billet, Eds.), Vol. 15, pp. 149–300. Wiley, New York.
Sokal, R. R., and Rohlf, F. J. (1981). "Biometry." Freeman, San Francisco.
Wibbels, T., Bull, J. J., and Crews, D. (1991a). Chronology and morphology of temperature-dependent sex determination. *J. Exp. Zool.* 260, 371–381.
Wibbels, T., Bull, J. J., and Crews, D. (1991b). Synergism of temperature and estradiol: A common pathway in sex determination? *J. Exp. Zool.* 260, 130–134.
Wibbels, T., Bull, J. J., and Crews, D. (1992). Steroid hormone-induced male sex determination in an amniotic vertebrate. *J. Exp. Zool.* 262, 454–457.
Wibbels, T., Bull, J. J., and Crews, D. (1994). Temperature-dependent sex determination: A mechanistic approach. *J. Exp. Zool.* 270, 71–78.
Wibbels, T., and Crews, D. (1992). Specificity of steroid hormone-induced sex determination in a turtle. *J. Endocrinol.* 133, 121–129.
Wibbels, T., and Crews, D. (1994). Aromatase inhibitor induces male sex determination in a female unisexual lizard and in a turtle with temperature-dependent sex determination. *J. Endocrinol.* 141, 295–299.
Yntema, C.L. (1968). A series of stages in the embryonic development of *Chelydra serpentina*. *J. Morphol.* 125, 219–252.
|
Translation Tracking System: A tool for managing translation archives
Lynne Bowker* and Peter Bennison†
*School of Translation and Interpretation, University of Ottawa,
70 Laurier Avenue East, Room 401, Ottawa, Ontario, K1N 6N5, Canada
email@example.com
†Sepro Telecom,
Dublin, Ireland
firstname.lastname@example.org
Abstract
The Translation Tracking System (TTS) is a database management tool intended to help translation researchers, translator trainers and translators to collect and organize archives of translated material. Relevant corpora can then be extracted from the archive in order to be further processed and analyzed using other natural language processing tools. This paper briefly describes the design and development of TTS, and it then goes on to explore how this tool has been successfully applied in an academic environment to help translator trainers identify areas of difficulty that have been encountered by their students. Some other applications of TTS are also discussed.
1. Introduction
Electronic corpora are playing an increasingly important role in the discipline of translation, where they are being used in a variety of ways, including 1) as resources for human translators to find solutions to specific translation- and terminology-related problems (e.g., Bowker and Pearson, 2002; Lindquist, 1999; Zanettin, 2001); 2) as resources for developing and testing machine translation systems – particularly systems using example-based and statistical models (e.g., Carl and Way, 2001); and 3) as a source of empirical data for more theoretical investigations into the nature of translation and translated text (e.g., Baker, 2001; Kenny, 2001; Laviosa, 1998). Regardless of the intended application, the collection of translation resources often begins in a haphazard and opportunistic way, with files being gathered in different formats and being indexed inconsistently. This was certainly the case for our own data collection endeavours in the not-so-distant past. Frustration with this process prompted us to work on designing and building a tool that could be used to help collate translation resources in a more systematic way. These efforts resulted in the production of the Translation Tracking System (TTS) – a tool that aims to provide a user-friendly framework for gathering and organizing translation data.
Essentially, TTS is a database application that permits users to enter (or copy) texts into a template, which also allows for the entry of relevant indexing information. The default indexing information consists of text attributes such as the source language, target language, subject field, text type, date, source, author/translator details, etc. Users can add or modify attributes as necessary. Once the data have been entered, the resulting archive can then be searched according to these indexed attributes, and texts that match the specified criteria can be extracted and exported for use with other natural language processing applications, such as part-of-speech taggers, term extraction systems, or corpus analysis tools. In short, TTS helps users to gather and manage a wide-ranging archive of translation data, but it also facilitates the export of specialized corpora for specific studies or investigations.
2. TTS design and development
As is the case for many software applications, the design and development of TTS has been an iterative process. TTS has been designed to be a straightforward and easy-to-use tool that can be employed by researchers, trainers, students and practitioners in the field of translation, as well as other users who do not necessarily have a computational background. TTS was initially implemented as a stand-alone application written in Visual Basic. It is currently being re-implemented in the PHP programming language so that it will be able to run over the Internet using a client-server model. In its current form, TTS has two main elements: the template, where textual and indexing information is entered, and the query form, where users formulate searches to identify and extract relevant texts from the archive. These are described in more detail in the following sections.
2.1. Template
Since the primary users of TTS are translators rather than computer scientists, the template includes a number of features intended to make it as user-friendly as possible. Firstly, the template allows users to enter as much indexing information as possible by selecting relevant criteria from drop-down lists. This approach was adopted in order to minimize inconsistencies that may occur when data is entered in a free format (e.g., different spellings, typographical errors). Such inconsistencies would limit the effectiveness of queries and information retrieval. However, in cases where it may not be reasonable to create drop-down lists for some attributes, this information can be entered in a free format. Users can modify, delete or add attributes to the drop-down lists as necessary using a simple graphical user interface.
Secondly, the template provides validation, which means that users are prompted to fill in each of the attribute fields before closing the file. If a user attempts to close a file without filling in an attribute field, a message box will pop up advising the user which field(s) still need to be filled in. It is possible to override this feature in
cases where no information is available for a given attribute.
When the template has been filled in, the data is saved in plain text format (.txt). This means that when texts are later exported from TTS, they will be in a format that can be easily processed by other software (e.g., taggers, corpus analysis tools). An additional benefit of using a plain text format is that this also reduces the chances of spreading viruses.
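As a sketch of how the drop-down lists and validation might interact (our own illustration; TTS's actual attribute names and implementation are not published here):

```python
# Our own illustration of the template's behaviour, not TTS code:
# indexing attributes are validated against controlled (drop-down
# style) lists, and missing fields are reported before saving.
ATTRIBUTE_LISTS = {
    "source_language": {"French", "Spanish", "English"},
    "target_language": {"French", "Spanish", "English"},
    "subject_field": {"medicine", "law", "computing"},
    "text_type": {"research article", "technical report"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record can
    be saved (users may still override missing fields)."""
    problems = [f"missing attribute: {a}"
                for a in ATTRIBUTE_LISTS if a not in record]
    problems += [f"value not in list for {a}: {record[a]!r}"
                 for a in ATTRIBUTE_LISTS
                 if a in record and record[a] not in ATTRIBUTE_LISTS[a]]
    return problems

record = {"source_language": "French", "target_language": "English",
          "subject_field": "medicine"}
print(validate(record))  # ['missing attribute: text_type']
```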
2.2. Query form
Once stored in TTS, the data can be searched using a simple query form. Using the drop-down lists of attributes, users can set the parameters of the search by selecting the relevant criteria. A query can be carried out using a single attribute or a combination of attributes. Once the user has defined the search parameters, TTS will then retrieve from the archive all the texts that conform to the specified criteria. For example, TTS can be asked to search all texts in the archive and retrieve only those that match the criteria:
- Source language: French
- Target language: English
- Publication date: 2001
- Subject field: medicine
- Text type: research article
All the texts that match these criteria are then presented to the user in a table. The user can view the full text for any item in the table by opening it from within TTS. The user can also choose to export some or all of the texts in the table to a separate file. Users can choose whether they wish to export just the texts, or whether they wish to export the texts along with a list of their corresponding indexing attributes. In addition, texts can be exported individually, or they can be concatenated into a single large file. The resulting corpus can then be processed using other software as desired by the user.
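The retrieval-and-export cycle can be sketched with a small relational example (ours, not TTS's actual schema; a real archive would of course be persistent rather than in-memory):

```python
# Our own illustration of the query form's behaviour, not TTS code:
# filter an archive by indexed attributes, then concatenate the
# matching texts into one plain-text corpus file.
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the real archive
con.execute("""CREATE TABLE texts (
    id INTEGER PRIMARY KEY, source_language TEXT, target_language TEXT,
    subject_field TEXT, text_type TEXT, pub_year INTEGER, body TEXT)""")
con.execute("INSERT INTO texts VALUES (1, 'French', 'English', "
            "'medicine', 'research article', 2001, 'Full text ...')")

rows = con.execute(
    """SELECT body FROM texts
       WHERE source_language = 'French' AND target_language = 'English'
         AND pub_year = 2001 AND subject_field = 'medicine'
         AND text_type = 'research article'""").fetchall()

with open("corpus.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(body for (body,) in rows))
print(f"exported {len(rows)} matching text(s) to corpus.txt")
```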
3. TTS applications
As outlined above, TTS is a database application that can be used to manage an archive of texts. Our initial motivation for developing such a tool stemmed from our desire to study student translations with a view to identifying areas of difficulty for students and learning more about the translation process. This Student Translation Archive is the application for which TTS has been used most extensively to date, and it is described in more detail in section 3.1 below. However, other applications, such as managing translation resources for use by machines, professional translators, or translatologists are also possible. Furthermore, with some relatively simple modifications, TTS can be adapted to manage data for other types of translation-related investigations. One such adaptation will be discussed in section 3.2.
3.1. Student Translation Archive
For a number of years, foreign language teachers have been compiling and studying “learner corpora”, which are defined as textual databases of the language produced by foreign language learners (e.g., Granger 1998). Such corpora are used to identify typical characteristics of texts produced by language learners and to identify errors and problem areas that can then be addressed as part of the language learning curriculum.
Student translators can be considered as a highly specialized type of language learner/user. Although their specific needs differ from those of general language learners, we felt that a similar approach to collecting and studying the output of student translators would be highly valuable for both pedagogical and research purposes. With regard to pedagogy, a corpus of student translations can provide a means of identifying areas of difficulty that could then be integrated into the curriculum and discussed in class. In terms of research, a number of scholars (e.g., Baker, 2001; Kenny, 2001; Laviosa, 1998) have already demonstrated that translation corpora can be useful for studying the nature of professionally translated text; we believe that there is also much to be learned about translation process and product by investigating the nature of text translated by students.
One application of TTS that has already been successfully undertaken is the use of the system to facilitate the development of an archive of student translations. Using an approach based on that developed by Granger (1998) for second language learners, translator trainers at the University of Ottawa in Canada are currently using TTS to collect, organize and study translations produced by their students. Students enter their translations into TTS, and the trainers can extract a selection of texts according to desired criteria. For example, some types of corpora that have been created and extracted with the help of TTS include 1) longitudinal corpora, 2) text-specific corpora, 3) subject-specific corpora, 4) cross-subject field corpora, and 5) cross-linguistic corpora. By analyzing such corpora, translator trainers can identify the types of problems that are being encountered by their students, and they can develop course curricula that will address these issues.
3.1.1. Longitudinal corpora
Longitudinal corpora can be used to track the progress of a specific student or group of students over a given period of time (e.g., a semester, a year, or even an entire degree). By extracting and studying a longitudinal corpus, a trainer or student can see which translation-related difficulties appear to have been resolved and which are still causing problems. For instance, near the beginning of a French-to-English technical translation course, a student was identified by the trainer as having a tendency to use constructions containing prepositional phrases (e.g., head of the scanner) in places where constructions containing pre-modifiers (e.g., scanner head) would be more natural. As a result, although the student’s translations were grammatically correct, they were not idiomatic because they had not been constructed according to the norms of the sublanguage in question. The student was advised of this problem and was encouraged to use pre-modifiers instead of prepositional phrases where appropriate. Over the course of the semester, longitudinal corpora of the student’s work were extracted using TTS. The corpora were then part-of-speech tagged using the AMALGAM tagger and analyzed with the help of WordSmith Tools, a corpus analysis package that includes a concordancer.
Some of the results of the analysis of the longitudinal corpus extracted at the end of the semester are shown in table 1. This table illustrates that the texts that were
translated by the student at the beginning of the semester contained a higher proportion of prepositional phrases, whereas the texts translated towards the end of the semester showed an increased use of pre-modifiers. The trainer was able to examine particular instances of both prepositional phrases and pre-modifiers in context with the help of a concordancer, and the trainer was consequently able to determine that the student was indeed learning to use pre-modifiers in appropriate places in the translated texts. With the help of the longitudinal corpus, the trainer was able to provide the student with concrete feedback and empirical evidence demonstrating the progress of the student’s learning.
| Text | No. of prepositional phrases | No. of pre-modifiers |
|-----------------------|------------------------------|----------------------|
| Text 1 (614 words) | 11 | 3 |
| (Jan. 2001) | | |
| Text 2 (720 words) | 10 | 5 |
| (Feb. 2001) | | |
| Text 3 (857 words) | 7 | 10 |
| (Mar. 2001) | | |
| Text 4 (1008 words) | 6 | 14 |
| (Apr. 2001) | | |
Table 1: Summary of longitudinal corpus analysis.
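Counts like those in Table 1 can in principle be derived mechanically from tagged text. The sketch below is our own simplification, not the AMALGAM/WordSmith pipeline used in the study, and the tag patterns are deliberately crude: it counts nouns followed by a preposition versus noun-noun pre-modifier sequences in word/TAG formatted output:

```python
# Our own crude approximation, not the AMALGAM/WordSmith pipeline:
# count noun+preposition sequences (prepositional post-modification,
# e.g. "head of the scanner") vs. noun+noun sequences (pre-modifiers,
# e.g. "scanner head") in a word/TAG formatted tagged text.
tagged = ("the/DT head/NN of/IN the/DT scanner/NN was/VBD "
          "replaced/VBN by/IN a/DT scanner/NN head/NN")
tags = [token.rsplit("/", 1)[1] for token in tagged.split()]

prep_phrases = sum(1 for i in range(len(tags) - 1)
                   if tags[i].startswith("NN") and tags[i + 1] == "IN")
pre_modifiers = sum(1 for i in range(len(tags) - 1)
                    if tags[i].startswith("NN")
                    and tags[i + 1].startswith("NN"))
print(prep_phrases, pre_modifiers)  # 1 1
```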
### 3.1.2. Text-specific corpora
Translator trainers are often interested in seeing how a particular passage in a text has been handled by the various students in a class. Such investigations permit the trainer to identify areas where the class as a whole is having difficulty, as distinct from problems that may have befallen only one or two students. This allows the trainer to appropriately orient the curriculum or class discussions in order to focus on generally problematic issues. Many translation classes typically contain between twenty and thirty students, and these students often submit their work in printed form. In a class of this size, it is cumbersome to try to identify patterns of “problem areas” when working with separate sheets of paper. In contrast, if the translations are entered into a system such as TTS, a trainer can easily export all the given translations of a particular source text, and focus in on the different renderings of a selected passage with the help of a concordancer. Table 2 illustrates how a trainer can simultaneously view all the student translations of the following extract from the French source text: “…un pare-feu pour les internautes à haut débit…”.
By examining these different translations simultaneously, the trainer found that patterns became more visible, which made it possible to discern, for example, that the majority of the class would benefit from a discussion regarding the hyphenation of compound pre-modifiers, since only three of the eleven students who used a compound pre-modifier (e.g., “high-speed”) correctly hyphenated this type of construction. In contrast, the trainer could determine that there was only one student (see table 2, line 1) who encountered difficulty with a misplaced modifier, using “broadband” to incorrectly modify “firewall” instead of “Internet users”. Similarly, only a single student (see table 2, line 5) misunderstood the underlying concept represented by the French term “haut débit”, choosing to render it as “frequent”, rather than “high-speed”. In cases such as these, the students could be approached independently to discuss these problems.
| ST | …un pare-feu pour les internautes à haut débit… |
|----|-----------------------------------------------|
| 1 | …a broadband firewall for Internet users… |
| 2 | …a fire-wall for high-speed Internet users… |
| 3 | …a firewall for broadband connection users… |
| 4 | …a firewall for broadband surfers… |
| 5 | …a firewall for frequent Internet users… |
| 6 | …a firewall for high speed Internet surfers… |
| 7 | …a firewall for high speed Internet surfers… |
| 8 | …a firewall for high speed Internet users… |
| 9 | …a firewall for high speed Internet users… |
| 10 | …a firewall for high speed Internet users… |
| 11 | …a firewall for high speed Internet users… |
| 12 | …a firewall for high speed Internet users… |
| 13 | …a firewall for high speed surfers… |
| 14 | …a firewall for high-speed Internet surfers… |
| 15 | …firewalls for broadband Internet users… |
| 16 | …firewalls for high-speed Internet users … |
Table 2: Extracts from a text-specific corpus.
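The concordance view used here is simple to approximate; the keyword-in-context routine below is our own toy sketch, not WordSmith Tools:

```python
# A minimal keyword-in-context (KWIC) sketch, ours rather than
# WordSmith's: print every occurrence of a term with a fixed window
# of surrounding characters, one numbered line per translation.
import re

def kwic(texts, term, width=25):
    for i, text in enumerate(texts, 1):
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            s, e = m.start(), m.end()
            print(f"{i:>2} | ...{text[max(0, s - width):s]}"
                  f"[{m.group()}]{text[e:e + width]}...")

translations = [  # abbreviated renderings in the style of Table 2
    "a broadband firewall for Internet users",
    "a fire-wall for high-speed Internet users",
    "a firewall for frequent Internet users",
]
kwic(translations, "Internet users")
```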
### 3.1.3. Subject-specific corpora
Many individual translation courses are devoted to specialized subject fields, such as medical translation or legal translation. Trainers can extract a corpus of translations pertaining to a particular subject field and examine these to determine if a problem is specific to one particular source text or if it is a difficulty that is also manifesting itself in other texts dealing with a related theme.
For example, over the course of a semester, students in a French-to-English technical translation class were required to translate two different texts on the subject of animal cloning. The first text was taken from a semi-specialized science magazine and it explained the concepts associated with animal cloning techniques in a clear and accessible manner that the majority of students were able to understand and adequately convey in their translations. The second text, which the students translated several weeks after the first, examined cloning from the point of view of ethical considerations. It described some of the same concepts as the first text, but these were expressed in a more abstract and less straightforward way. When translating the second text, many of the students seemed less sure of themselves, did not appear to understand the concepts in question, and produced much more literal translations that closely followed the syntax of the source text. Consequently, the translations of the second text were less accurate and less articulate than the translations of the first text. The fact that the first translation had been fairly accurately rendered seems to
indicate that the students were generally able to comprehend the subject matter; therefore, the difficulties with the second text were more likely to be related to the language and style that were used. This prompted the trainer to spend more class time discussing stylistic features of different text types and less time focusing on explanations of the subject matter itself (i.e., animal cloning techniques).
3.1.4. Cross-subject field corpora
As noted above, many translator-training programs require students to follow courses in different types of translation (e.g., legal translation, medical translation). Using TTS, it is possible for trainers to extract corpora that span multiple subject fields in order to investigate whether the problems encountered by a student or group of students in one type of specialized translation course (e.g., economic translation) are similar to or different from the problems they are having in another type of specialized translation course (e.g., technical translation). In this way, a trainer can try to determine whether a student is having a problem that manifests itself regardless of the subject field and therefore needs to be tackled at a more global level, or whether the student is having difficulty caused by a lack of knowledge of some aspect of a particular subject field (e.g., concepts, vocabulary, syntax) and which does not manifest itself when working in other fields.
For example, in examining translations produced by an individual student taking both an economic translation course and a technical translation course, it became clear to the trainers that the student had a general difficulty in grasping the concepts of register and text type. In the economic translation class, the student had translated a section from an annual report produced by an investment company, while in the technical translation class, the student had translated an extract from a technical report describing a new type of computer operating system. Both translations contained inappropriate constructions, such as the use of the second person, the use of contractions, and the use of sentence fragments. Faced with such evidence, the trainers decided to approach the student’s difficulty with register and text type at a more global level, independently of any particular subject field.
3.1.5. Cross-linguistic corpora
In a similar vein, students in translator training programs often work with multiple languages or in different language directions. In such cases, cross-linguistic corpora can be extracted from TTS and used to help trainers determine whether a given student’s work is subject to source language interference or whether the student encounters different types of problems when working with different languages. For instance, a trainer may decide to investigate potential problems of source language interference by extracting texts translated by a particular student that cover the same subject field but were translated from different languages (e.g., Spanish-to-English and French-to-English). A comparison of the two sets of translations may reveal that the student is having different types of problems when translating out of Spanish than when translating out of French, in which case it may be necessary to focus on source language interference, or it may turn out that a student is having similar problems regardless of the source language (e.g., perhaps the student has not grasped the concept of register), in which case it may be necessary to tackle the issue in a way that is not related to a particular source language.
For example, a student translated two technical texts on the subject of optical scanners: one from Spanish to English and one from French to English. In both cases the student’s terminological choices were heavily influenced by the source text in question. For instance, the Spanish term “scanner plano” was translated by “flat scanner” while the French term “numériseur à plat” was rendered as “flat digitizer”; however, both of these terms are referring to the same concept, and in both cases, a better solution would have been a translation such as “flatbed scanner”. This type of concrete evidence can be used to help the student see and correct the problem.
3.2. Other applications of TTS
Although the Student Translation Archive is the application for which TTS has been used most extensively to date, another application has been the use of TTS to manage translation resources for freelance translators. Freelance translators often work in a variety of subject fields and with a range of text types. Consequently, they are always on the lookout for any text that might be a useful resource for future translations. However, once they have built up a diverse collection of resources, they must be able to quickly pinpoint those specific elements that will be relevant for translating a given source text. TTS has proved to be extremely useful in this regard because it allows translators both to amass and organize a large archive of resource material and to quickly identify and extract only those texts that are pertinent to a particular project.
TTS can also be adapted to assist with other types of data management. For example, at the University of Ottawa, a research project is currently underway that aims to investigate the evolution and current state of the translation profession in Canada by examining a corpus of translation-related job advertisements. As part of this project, researchers are collecting an archive of job advertisements and storing them using a modified version of TTS. The modifications relate primarily to the indexing attributes, which in this case include attributes such as job title, employer, location, qualifications, required skills, etc. Using TTS, researchers can extract from the archive only those advertisements for jobs requiring a BA in Translation, or only those jobs based in Ottawa, etc. The data can then be further analyzed using corpus analysis tools.
4. Concluding remarks
As noted in the introductory section of this paper, more and more people working in various areas of the translation profession are becoming aware of the value of consulting corpus-based resources. However, as these resources grow – both in size and in diversity – it is increasingly important that they be managed in a systematic fashion. The use of a tool such as a word processor to create, index, organize, and search an archive is simply not an efficient approach to managing a data collection. Organization is the key to success when constructing and consulting data resources. If such resources are not well constructed, pertinent information
may be overlooked, or searches may be carried out on inappropriate material, which could produce misleading results. TTS aims to provide a user-friendly means for helping members of translation-related professions to gather and maintain their data collections in an organized manner. The flexibility of TTS has allowed it to be used to support various types of investigations relating to translation teaching, research, and practice.
5. Acknowledgements
The work described here has been partially funded by grants awarded to Lynne Bowker by the Faculty of Arts of the University of Ottawa and the University of Ottawa Research Fund. Peter Bennison would like to thank his employer, Sepro Telecom, for supporting his research.
6. References
AMALGAM tagger: http://www.comp.leeds.ac.uk/amalgam/amalgam/amalgsoft.html
Baker, Mona, 2001. Investigating the Language of Translation: A Corpus-based Approach. In P. Fernández and J.M. Bravo (eds.), *Pathways of Translation Studies*. Valladolid: Universidad de Valladolid.
Bowker, Lynne and Jennifer Pearson, 2002. *Working with Specialized Language: A Practical Guide to Using Corpora*. London: Routledge.
Carl, Michael and Andy Way (eds.), 2001. *Proceedings of the MT Summit VIII Workshop on Example-Based Machine Translation*. Geneva: EAMT.
Granger, Sylviane (ed.), 1998. *Learner English on Computer*. London: Longman.
Kenny, Dorothy, 2001. *Lexis and Creativity in Translation: A Corpus-based Study*. Manchester: St. Jerome Publishing.
Laviosa, Sara, 1998. The English Comparable Corpus: A Resource and a Methodology. In L. Bowker, M. Cronin, D. Kenny and J. Pearson (eds.), *Unity in Diversity? Current Trends in Translation Studies*. Manchester: St. Jerome Publishing.
Lindquist, Hans, 1999. In G. Anderman and M. Rogers (eds.), *Words, Text, Translation: Liber Amicorum for Peter Newmark*. Clevedon: Multilingual Matters.
WordSmith Tools: http://www1.oup.co.uk/elt/catalogue/Multimedia/WordSmithTools3.0/download.html
Zanettin, Federico, 2001. Swimming in Words: Corpora, Translation and Language Learning. In G. Aston (ed.), *Learning with Corpora*. Bologna/Houston: CLUEB/Athelstan.
|
Work Assignment 3-7
White Paper on
Species/Strain/Stock in Endocrine Disruptor Assays
Contract No. 68-W-01-023
JULY 25, 2003
RTI Project No. 08055.002.023
PREPARED BY:
Sherry P. Parker, Ph.D.
Rochelle W. Tyl, Ph.D., DABT
REVIEWED BY:
Jimmy L. Spearow, Ph.D.
FOR:
James Kariya
WORK ASSIGNMENT MANAGER
U.S. ENVIRONMENTAL PROTECTION AGENCY
ENDOCRINE DISRUPTOR SCREENING PROGRAM
WASHINGTON, DC
BATTELLE
505 KING AVENUE
COLUMBUS, OH 43201
# TABLE OF CONTENTS

1.0 Executive Summary
2.0 Introduction and Background
2.1 Purpose
2.2 Literature Search Strategy
2.2.1 Databases Searched
2.2.2 Database Search Strategies
2.2.3 Keywords and Phrases Used
2.2.4 Summary of the Review Process
2.3 Definitions
2.3.1 Inbred and Outbred Strains
2.3.2 Species Selection for Endocrine Disruption Assays and Genetic Variability
2.3.3 Confounders Affecting Comparisons of Reproductive Toxicity Data
2.4 Assays Under Consideration by the EDSP and Associated Endocrine Endpoints
2.4.1 One-Generation Assay
2.4.2 Pubertal Male and Female Assays
2.4.3 *In Utero* Through Lactation Assay
2.4.4 Adult Male Assay
2.4.5 Two-Generation Assay
2.5 Endocrine Endpoints Under Consideration for EDSP Assays and Intraspecies Variability
2.5.1 Fertility and Gestational Indices
2.5.2 Survival and Growth Indices
2.5.3 Reproductive Tract Development
2.5.4 Anogenital Distance
2.5.5 Urethral Vaginal Distance (UVD)
2.5.6 Retention of Nipples in Preweanling Males
2.5.7 Puberty
2.5.7.1 Vaginal Patency in Females
2.5.7.2 Age of First Estrus in Females
2.5.7.3 Preputial Separation in Males
2.5.8 Estrous Cyclicity and Ovulation Rate in Postpubertal Females
2.5.9 Andrology
2.5.10 Organ Weights and Histopathology
2.5.11 Behavioral Assessments/Clinical Observations
2.5.12 Hormonal Controls
2.5.13 Uterine Weight
3.0 Interspecies Similarities and Differences in Endocrine Endpoints
4.0 Summary and Conclusions of Intraspecies and Interspecies Similarities and Differences in Endocrine Endpoints
5.0 References

**LIST OF TABLES**

Table 1. Assays Under Consideration by the EDSP and Associated Endocrine Endpoints
Table 2. Intraspecies Comparisons of Endocrine Endpoints
Table 3. Summary of Agent- and Endpoint-Specific Intraspecies Differences
1.0 Executive Summary
This white paper reviews the interspecies and intraspecies similarities and differences in endocrine endpoints, in the absence and presence of test chemicals, in order to determine whether specific species/strains should be preferred or avoided when screening for endocrine activity. There is much evidence that different species, and strains within species, exhibit differing sensitivities to endocrine-active compounds, specific to the chemicals and endpoints evaluated. Thus, selecting appropriate species and strains, or at least understanding their differential responsivity, is crucial to detecting effects in animal models that are extrapolable to human risk. This white paper is limited to the species being considered for inclusion in the Endocrine Disruptor Screening Program (EDSP) and to the endocrine endpoints under consideration. Currently, Environmental Protection Agency (EPA) guideline studies for reproductive and developmental toxicology recommend the rat and advise against strains with low fecundity. The most commonly used rat strain in these guideline studies is the CD Sprague-Dawley (SD) rat (the CD-1 Swiss mouse is also frequently used). Although the majority of historical data exists for this species/strain, there is evidence that endocrine-active chemicals may produce very different dose-response curves for certain endocrine-related reproductive endpoints, which may be due, in part, to differential sensitivity of species/strains, and of endpoints within species/strains, to these chemicals. Because confounding factors make interlaboratory comparisons of data problematic, the species/strain/stock comparisons relied primarily on multi-strain studies conducted under the same experimental conditions in the same laboratory.
Comparisons revealed a lack of consistency in the effects produced by endocrine-disrupting chemicals on endocrine endpoints from strain to strain. Endocrine effects were chemical specific, strain specific, endpoint specific, and, in some cases, laboratory specific. Among both outbred and inbred strains there were strains more sensitive and strains less sensitive to endocrine-active compounds, depending on the chemical used and the endpoints evaluated. Clearly, strain (genotype) by environmental agent by endpoint interactions need to be considered in selecting the appropriate species/strains for EDSP assays.
2.0 Introduction and Background
In 1996, the Food Quality Protection Act was enacted by Congress. It directs the United States Environmental Protection Agency (EPA) to screen pesticides for endocrine activity. Thus, the EPA is implementing an Endocrine Disruptor Screening Program (EDSP). In this program, comprehensive toxicological and ecotoxicological screens and assays are being developed to identify and characterize the endocrine effects of various environmental contaminants, industrial chemicals, and pesticides.
The program’s aim is to develop a two-tiered approach, i.e., a combination of *in vitro* and *in vivo* mammalian and ecotoxicological screens (Tier 1) to identify substances with the potential to interact with the endocrine system, and a set of definitive apical *in vivo* assays (Tier 2) to determine whether the substances identified in Tier 1 cause adverse effects, identify the adverse effects, and determine the quantitative relationship between dose and adverse effects. The EDSP is required to use “validated” test systems. The Endocrine Disruptor Methods Validation Subcommittee (EDMVS) provides technical advice on the validation of most of the assays.
In order to determine the modifications to standard reproductive and developmental toxicology guidelines necessary to detect and characterize the effects of endocrine disruptors, the National Toxicology Program/National Institute of Environmental Health Sciences (NTP/NIEHS), at the request of the EPA, organized a peer review panel, which convened in October 2000. The panel consisted of scientists from academia, industry, and government. One of the subpanels of the NTP’s Endocrine Disruptor Low Dose Peer Review Panel investigated the concern that animal models used in assays to detect endocrine disruption have been chosen on the basis of convenience and familiarity, and that the most frequently used species/strains/stocks are those bred specifically for robust fecundity, with likely reduced sensitivity to endocrine perturbations (NTP’s Report of the Endocrine Disruptors Low Dose Peer Review, 2000). The subpanel addressed this issue with respect to the mammalian two-generation assay and made the following remarks:
On *animal model selection*: The subpanel recommended that the selection of species or strain for future studies should be the result of a more deliberate thought process, rather than based on availability, convenience, or familiarity. Development of a core of historical data across mouse and rat strains (inbred and outbred), with known endocrine-disrupting chemicals and characterization of the reproductive endpoints of interest, was also recommended. These data could be used to modify current protocols with respect to the number and doses/concentrations of dose groups, group size, and endpoint selection.
On *species/strain selection*: The subpanel asserted that although there is an abundance of historical control data in CD-1 mice and SD rats collected in reproductive and developmental toxicology studies, inbred F1 strains such as the B6C3F1 mouse may provide less variable responses to endpoints assessed. In addition, the advantage of historical data may be compromised by genetic drift and/or selective breeding.
In the December 2001 meeting of the EDMVS, committee members discussed strains and stocks and concluded that the EPA should prepare a white paper summarizing what is known about intraspecies strain/stock similarities and differences in the neuroendocrine control of reproduction/development and in responses to endocrine-active chemicals, and provide the rationale for strain/stock selection. This review is the “white paper” requested by the EDMVS, designed to assess the interspecies and intraspecies variability of endocrine endpoints in *in vivo* assays under consideration by the EDSP. Please note that the uterotrophic and Hershberger assays are under consideration by the EDSP for inclusion in testing guidelines, but they are being standardized and validated in a cooperative effort among the US EPA, the OECD, and participating laboratories on three continents; therefore, these assays will not be standardized or validated in this project and will not be discussed in this white paper.
2.1 Purpose
The purpose of this review is to summarize the interspecies and intraspecies similarities and differences in endocrine endpoint responses, in order to determine whether specific species/strains should be preferred or avoided when screening for endocrine activity. Currently, the recommended species for EPA reproductive and developmental toxicology guideline studies is the rat. Although the majority of historical data exists for the SD rat and CD-1 mouse strains, there is evidence that endocrine-active chemicals may produce very different dose-response curves for certain endocrine-related reproductive endpoints, and this may be due, in part, to differential sensitivity of different species/strains to these chemicals. Whether or not non-monotonic dose-response curves are eventually shown to be a general phenomenon of endocrine disruptors (or of a class of them), it is appropriate to ask whether screening for endocrine activity is being carried out in appropriately sensitive test systems.
A literature review was performed to identify key references on the following two general topics:
1) Influence of rat strain/stock on endocrine endpoints measured in the mammalian *in vivo* assays being considered for the EDSP
2) Interspecies similarities and differences in neuroendocrine control of reproduction/development and in responses to endocrine-active chemicals (reported since 1986)
2.2 Literature Search Strategy
Literature databases accessible through RTI Information Technology Services were searched on-line for published peer-reviewed and non-peer-reviewed articles, followed by a focused literature screening process. Full citations, including abstracts (if available), were retrieved for review.
2.2.1 Databases Searched
MedLine, PubMed, Biological Abstracts, Chemical Abstracts, Toxline including DART (Developmental and Reproductive Toxicology)
2.2.2 Database Search Strategies
• **English language articles.** The literature search was performed to include all applicable English language articles.
• **Foreign language articles with English abstracts.** The literature search excluded foreign language articles with foreign-language-only abstracts but accepted foreign language articles that also have English language abstracts. This strategy allowed the authors to review some literature published in any number of foreign languages.
2.2.3 Keywords and Phrases Used
Articles were identified through the use of keywords in the literature search. Individual sets of keywords were selected for each of the topics listed in “Objectives” above. Keywords in Column A were combined with keywords in Column B for Task 2 in order to identify key articles addressing the influence of rat strain and stock on endocrine endpoints from pubertal male and female, *in utero* lactational, adult male, and two-generation reproductive toxicity studies. Initially, “rat strain” and “rat stock” were combined with keywords in Column B. When more than 100 hits were found per combination of keywords, additional terms from Column A were added to limit the search (a short sketch of this combinatorial strategy follows the keyword lists below). Since the rabbit is not a species under consideration for use in the EDSP, it has been deliberately omitted from the white paper.
A. Rat strain
- Fischer (F-344)
- Wistar
- Lewis
- Noble
- Holtzman (HTZ)
- Outbred rats
- Inbred rats
- Norway
B. Reproductive toxicity
- Anogenital distance
- Nipple retention
- Ovarian corpora lutea
- Precoital interval
- Sperm, sperm production
- Estrous cyclicity
- Areolar retention
- Vaginal patency
- Preputial separation
- Uterotrophic
- Embryo loss
- Testes (Leydig and Sertoli)
- Lactational exposure
- Thyroid development
- Thyroid development and pregnancy
- Lactation
- Blood testis barrier
- Spermatogenesis
- Mammary glands
- 17β-Estradiol (E2)
- Estrus
- Litter size
- Gestation, gestational loss
- Pregnancy
- INSL3
- Reproductive tract development
- Müllerian ducts, Müllerian Inhibitory Substance
- Wolffian ducts
- Reproductive and accessory organs
- Thyroid hormones
- Hypothalamic, pituitary, and gonadal hormones
- Puberty
- Implantation
- Fetal survival, mortality
¹Keywords were obtained from approved EDSP protocols and preliminary search results.
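As an illustration of the combinatorial strategy described above, the short sketch below pairs every Column A term with every Column B term to enumerate the combined searches. This is a minimal illustration only: the term subsets and the generic boolean query syntax are ours, not those of any particular database interface.

```python
from itertools import product

# Small illustrative subsets of the Column A and Column B keywords above.
column_a = ["rat strain", "rat stock", "Fischer (F-344)", "Wistar"]
column_b = ["anogenital distance", "nipple retention", "estrous cyclicity"]

# Pair every Column A term with every Column B term; when a pairing
# returned more than 100 hits, further Column A terms were ANDed in.
queries = [f'"{a}" AND "{b}"' for a, b in product(column_a, column_b)]

print(len(queries))  # 12 combined searches for these subsets
print(queries[0])    # "rat strain" AND "anogenital distance"
```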
In addition to searches performed with combinations of both columns of keywords, papers by the following authors were searched for relevant articles:
vom Saal, FS; Ashby, J; Odum, J; Gray, LE; Ostby, J; Cooper, RL; Spearow, JL; Michna, H; Diel, P; Festing, M.
The focus of the search was on "rat strain." When there was a paucity of references pertaining to a general endocrine endpoint, "mouse strain" was added to the search.
For interspecies comparisons of reproductive and developmental endpoints, combinations of keywords in Column C were combined with keywords in Column D, and the search was limited to articles published since 1986, in order to identify key articles addressing interspecies differences in neuroendocrine control of reproduction/development, with emphasis on parameters addressed by the NTP's Endocrine Disruptor Low Dose Peer Review Subpanel on Biological Factors and Study Design.
C. Reproductive toxicity, endocrine
Developmental toxicity, endocrine
Mouse/mice
Rat
Interspecies
D.² Intrauterine position (IUP)
Estradiol
LH
LHRH (GnRH)
FSH
ACTH
Hypothalamic-pituitary-gonadal (HPG) axis
Puberty
Prolactin
Testosterone (T)
Thyroid hormone (T₃, T₄)
TSH
Oxytocin
Growth hormone
Diet
Caging
Bedding
Genetic variability
Gene expression
Strain differences
Genotype
Oogenesis
Meiosis
Mitosis
Endocrine-mediated nondisjunction
Hypospadias
Steroidogenesis
Reproductive tract malformation
Cryptorchidism
Spermatogenesis
Nipple retention
Anogenital distance
Uterus
Progesterone
Dihydrotestosterone (DHT)
Gestation
Pregnancy
Gestational loss
Embryo loss
Implantation
Fetal survival, mortality
Androsterone
Androstenedione
Prostate
Hypothalamic-pituitary-thyroid (HPT) axis
Areolar retention
Vaginal patency
Preputial separation (balanopreputial separation)
Epispadia
Ovary, ovaries
Testis, testes
Gonad
Aromatase
²Keywords were obtained from the NTP’s Report of the Endocrine Disruptors Low Dose Peer Review (Subpanel on Biological Factors and Study Design), October 2000, and preliminary search results.
The focus of the search was reproductive and developmental toxicity and endocrine effects. When a search of combined terms from Columns C and D yielded more than 100 hits, specific animal species names (in Column C) were added to limit the search.
2.2.4 Summary of the Review Process
After key articles were identified, individual references were retrieved and further searched to identify additional key articles (i.e., tree search). References of interest were electronically downloaded to Reference Manager, retrieved, evaluated, organized by topic, and retained for use in preparing the white paper.
2.3 Definitions
The Institute for Laboratory Animal Resources (ILAR) defines a strain as “inbred” when it has been mated brother x sister for 20 or more consecutive generations (ILAR Journal, 1992). “To ensure isogenicity, as well as homozygosity, a single brother x sister pair must be selected in the twentieth or a subsequent generation to perpetuate the strain. Parent x offspring matings may be substituted for brother x sister matings, provided that in the case of consecutive parent x offspring matings, each mating is to the younger of the two parents; this will prevent repeated backcrossing to a single individual” (ILAR Journal, 1992). A strain is defined as “outbred” when it has been maintained as a closed colony for at least four generations. “To minimize changes caused by inbreeding and genetic drift, the population should be maintained in such
numbers as to give less than 1 percent inbreeding per generation. Under these conditions, a heterozygous breeding population is expected to reach equilibrium and to produce a stock of stable genetic composition. Formerly inbred strains may be included after four generations of closed outbreeding, provided that continued outbreeding is intended. Outbred stocks are not necessarily highly variable genetically. The degree of genetic variability of any individual stock can only be determined by studying the appropriate genetic markers” (ILAR Journal, 1992). “Wild type” refers to the genotype or phenotype that is found most commonly in nature or in the standard laboratory stock for a given organism, before mutations are introduced.
2.3.1 Inbred and Outbred Strains
The term "strain" refers to a closed population of organisms of the same species, with distinctive hereditary characteristics that distinguish them from other groups within the species. "Strains" are artificially maintained to promote certain characteristics by manipulation of population size, mating system, as well as the intensity and direction of artificial selection (Lynch and Walsh 1998). The terms "strain," "stock," and "line" are used somewhat interchangeably on inbred and outbred strains. The most commonly used outbred strains of rats include Wistar, Long Evans, SD, and CD (caesarean-derived CR® SD). Outbred mice include ICR, SENCAR, NMRI, CFW, CF1, CD-1, and "Swiss" mice. The most commonly used inbred rat strains include Fischer (F-344), Brown Norway (BN), ACI, Lewis (LEW), Noble, DA, Copenhagen, Dahl Salt Sensitive (SS), spontaneously hypertensive rat (SHR), and WKY. Commonly used inbred mouse strains include A, AKR, BALB/c, CBA/Ca, CBA, C3H/He, C57BL/6, C57BL/10, DBA/2, FVB, NOD, NZB, SJL, and SWR. Commonly used isogenic rat F1 crosses include F-344 x BNF1 (FBNF1) and LEW x BN F1 (LBNF1). Commonly used isogenic mouse F1 crosses include B6D2F1, B6C3F1, B6AF1, CAF1, CB6F1, and NZBWF1.
Strain differences, in response to xenobiotics and hormonally active compounds, are an extremely common finding in the few studies that have compared several strains of mice or rats (Festing 1979; Festing 1987; Festing 1995; Steinmetz et al. 1998; Spearow et al. 1999; Long et al. 2000). Since most toxicologists and physiologists only use a single strain of mice or rats, the amount of genetic variation between strains is not usually apparent. Many assume that finding a fairly uniform response to a given hormonally active toxicant in a commercially available outbred strain indicates a lack of genetic variation in susceptibility in the species. As will be discussed in detail, such assumptions are often invalidated by the extremely narrow genetic base, history of long-term selection for large litter size, and correlated changes in reproductive development and function traits characteristic of many commercially available outbred strains of mice and rats. This section will consider the features and selection history that have defined available inbred and outbred mouse and rat strains, in particular, factors relating to genetic variation in susceptibility to endocrine disruption.
One criticism against utilizing the most commonly used outbred strains is based on the fact that these animals have been bred specifically for robust fecundity and
relative insensitivity to endocrine perturbations. Selective breeding can alter reproductive traits, including natural and hormone-induced ovulation in the rat, litter size, testis weight, and sperm production (Bradford 1969; Johnson et al., 1994; Okwun et al., 1996a; Okwun et al., 1996b; Spearow and Barkley 2001). Selection for large litter size can affect the hypothalamic-pituitary-gonadal axis, resulting in differential sensitivity of the ovaries to gonadotropins and changes in follicular steroidogenesis in female mice, increased testis weight in males, and altered sensitivity of testis weight to estrogen in males (Spearow 1985; Spearow et al., 1987; Spearow et al., 2001).
Inbred strains have several features that make them valuable for biomedical research (Festing, 1979). They are produced by at least 20 generations of brother x sister matings, with all individuals of a strain in the 20th or subsequent generation tracing back to a single common ancestral breeding pair (Festing 1987). While a small number of genes may continue to segregate as residual heterozygosity, especially in the 20th to 30th generation of inbreeding, practically speaking, all members of an inbred strain are isogenic, i.e., essentially genetically identical individuals (Festing 1987). Thus, each isogenic inbred strain represents an infinitely repeatable, genetically defined set of identical twins which are homozygous at essentially all loci. A complete mouse genomic map and mouse genomic DNA sequence are available for the C57BL/6J inbred mouse strain (www.informatics.jax.org; www.ensembl.org/Mus_musculus/). In addition, a rat genomic map and a rat genomic DNA sequence are in development for the Brown Norway BN/SsNHsd/MCW (BN) rat strain (Rattus norvegicus) (www.ensembl.org/Rattus_norvegicus/; also see http://bacpac.chori.org/rat230.htm).
Genetically defined, isogenic inbred strains are an important resource for determining the toxicological effects of chemicals on biodiversity within such species. Because of their very high level of homozygosity, inbred strains stay genetically uniform and constant over many generations, with only a slight amount of genetic drift due to new mutations (Festing 1987). Their utility in toxicological research is due to the highly consistent and reproducible genotype, ideal for testing many different chemicals for toxicity over time (Festing 1979; Festing 1987; Festing 1995). Inbred strains are genetically monitored, using coat color, biochemical, immunogenetic, microsatellite, and Single Nucleotide Polymorphism (SNP) molecular genetic markers, to confirm the genetic integrity of each strain and to ensure it has not been accidentally outcrossed.
F1 crosses between inbred strains are also commonly used in toxicological research, including B6C3F1 mice by the NTP. Isogenic F1 crosses are uniformly heterozygous at all loci that differ between the parental inbred strains. As parental inbred strain genotypes and genetic sequence become increasingly defined, the genotype of F1 crosses of genetically defined inbred strains can be accurately predicted *in silico* (Manly et al. 2001; Marshall et al., 2002). This enables the production of an almost infinite number of genetically defined isogenic F1 cross animals with identically defined homozygous and/or heterozygous genotypes at all known loci. While the F1 crosses of inbred strains are isogenic, their F1 x F1 crosses to generate F2 animals and/or
backcrosses produce offspring which are segregating at many loci. The increased genetic and phenotypic variability in such segregating crosses confounds the description of treatment-related effects, decreases the sensitivity of detecting treatment effects, and therefore limits their use in toxicological studies involving the offspring of F1 parents.
A wide variety of specialized, genetically defined, inbred strain genetic resources are also available, and are especially useful for characterizing, mapping, and identifying genes controlling strain differences in a wide variety of traits (Silver 1995). Two strains are referred to as consomic when they differ by one complete chromosome pair; a strain is referred to as congenic when it carries a small genetic region (ideally a single gene) from another strain but is otherwise identical to the original inbred strain. Several congenic inbred strains of mice and rats are available, each with a single chromosomal region from one strain backcrossed onto the genetic background of another strain. Several consomic (chromosome substitution) inbred strains are also available, each with a single chromosome from one strain introgressed by backcrossing onto the genetic background of another strain. Available rat consomic strains include several SS.BN chromosome-specific consomic strains with individual BN chromosomes on the Dahl Salt-Sensitive (SS) genetic background. Available mouse consomic strains include a full set of C57BL/6J-A chromosome substitution strains, each with a single A/J chromosome substituted on the C57BL/6J (B6) genetic background (Nadeau et al., 2000). Several mouse and rat recombinant inbred (RI) strain sets are also available. These RI strains are formed by crossing two inbred parental strains, intermating the F1 to produce F2s, and then producing a set of inbred lines by inbreeding *ad infinitum* from each of several F2 mating pairs. Available mouse RI strain sets include BXD RI, AXB/BXA RI, AKXD RI, and AKXL RI ([http://jaxmice.jax.org/jaxmicedb/html/rcbinbred.shtml](http://jaxmice.jax.org/jaxmicedb/html/rcbinbred.shtml)). Available rat RI strain sets include the SHR x BN (HXB/BXH) RI and the LEW x BN (LXB) RI.
Outbred strains are also commonly used in biomedical research and are intended to be genetically more diverse through the maintenance of large, heterogeneous, random-mating populations and the avoidance of inbreeding. Yet, the diversity of many outbred mouse strains is limited, in part, by the narrow genetic base of the founders and, in some cases, early inbreeding programs to select against deleterious recessive genes. For example, laboratory stocks of Norway rats (*Rattus norvegicus*) were developed from albino mutants that had been bred by fanciers (Gray 1977). Around 1900, H.H. Donaldson obtained rats from fanciers for a breeding colony at the University of Chicago. This stock was transferred to the Wistar Institute in 1906 and maintained as a closed colony. This Wistar stock contributed to, and is therefore related to, several other outbred strains. In 1915, a few albino Wistar Institute females were crossed with a single wild gray male and then used to develop the Long-Evans rat stock ([http://www.criver.com/03CAT/rm/rats/longevansRats.html](http://www.criver.com/03CAT/rm/rats/longevansRats.html)).
In 1925, Robert Dawley crossed a single hybrid hooded male of unknown origin with an albino female of the Doureoure strain (probably from Wistar). This single hooded foundation male was subsequently backcrossed to his albino female offspring
for seven successive generations. Multiple daughter lines, developed by inbreeding, were then crossed to form the stable heterogeneous SD stock, which was then maintained as a closed outbred population (http://www.harlan.com/). Thus, while the SD rat strain can be traced back to a single (most likely a Wild x Wistar) hooded hybrid male and a Wistar female, due to the repeated backcrossing of successive generations of daughters to the hooded hybrid foundation male, the vast majority of the genes in this outbred strain originated from the single hooded hybrid male. Furthermore, the process of inbreeding in each of multiple lines enables enhanced selection against deleterious recessive genes present in the initial population, and enables increased litter size means and improved selection responses in litter size following crossing (Falconer 1971; Eklund and Bradford 1977; Falconer 1989). In other words, by inbreeding with selection in several lines followed by crossing these lines, Robert Dawley was very likely to have eliminated deleterious recessive genes commonly found in outbred populations. By then selecting for increased fecundity and docility, he was able to develop the highly productive SD strain. Charles River Laboratories obtained SD breeders in 1950, Cesarean derived them in 1955, and selected long-term for large litter size and vigor to develop the CD® (SD) rat strain (http://www.criver.com/03CAT/rm/rats/longevansRats.html). Given this breeding and selection history, the SD strain and the SD-derived CD strain have less genetic diversity and fewer deleterious recessive genes than are found in wild-type populations. As discussed below, long-term selection mainly for increased prolificacy may have resulted in even greater genetic divergence for reproductive and correlated traits.
A stock of Swiss mice was also developed from two male and seven female non-inbred albino mice by Dr. de Coulon, Lausanne, Switzerland. This stock was imported into the U.S. by Dr. Clara Lynch at the Rockefeller Institute in 1926, and transferred to the Institute for Cancer Research in Philadelphia in 1948, where it was selected for high production and growth rate (http://www.taconic.com/addinfo/icrorigin.htm; http://www.criver.com/03CAT/rm/mice/cd1Mice.html). This closed ICR Swiss stock was used to establish several ICR strains. ICR was Caesarean derived in 1959 by Charles River Laboratories (CRL) and used to breed the CD-1 strain, which was also selected long-term for large litter size and vigor.
Other outbred strains of mice are also from a very narrow genetic base. For example, the CRL CF-1® strain, which likely originated from non-Swiss mice at Rockefeller Institute, was intensively inbred for over 20 generations by Carworth and then outbred from a single mating pair (http://www.criver.com/03CAT/rm/mice/cf1Mice.html). This outbred stock, like all the other outbred strains at CRL, was selected long-term, primarily for high prolificacy.
**Population Genetics:**
Mutations, inbreeding, genetic drift, and differential selection may account for genetic differences in the same outbred strains provided by different suppliers or even different colonies of the same supplier. In outbred strains, spontaneous mutations and
recombinations between alleles supplement each other in generating and maintaining a multiple allelic system, which provides even more genetic variation on which selection can act. The amount of genetic variability in a randomly selected and randomly mated strain or population depends, in part, on the initial heterozygosity, the rate of inbreeding, and the number of generations of inbreeding (Pirchner 1969; Falconer 1989). Inbreeding is the change in genotype frequencies resulting from the mating of related individuals. The rate of inbreeding is dependent on the effective population size (Pirchner 1969; Falconer 1989). For pair-mated species, the rate of inbreeding per generation = 1/(2N), where N = the total number of unrelated individuals in the population (Pirchner 1969). For populations in which each male is mated to several females, the rate of inbreeding per generation = (1/(8Nm)) + (1/(8Nf)), where Nm = number of males and Nf = number of females (Pirchner 1969).
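These two formulas are simple to apply; the minimal sketch below (the function names are ours, for illustration only) computes the per-generation inbreeding rate for both mating systems and reproduces the ILAR threshold of less than 1 percent inbreeding per generation quoted in Section 2.3.

```python
def delta_f_pair_mated(n: int) -> float:
    """Per-generation inbreeding rate for a pair-mated population of
    N unrelated individuals (Pirchner 1969): deltaF = 1/(2N)."""
    return 1.0 / (2.0 * n)

def delta_f_polygamous(n_males: int, n_females: int) -> float:
    """Per-generation inbreeding rate when each male is mated to several
    females (Pirchner 1969): deltaF = 1/(8*Nm) + 1/(8*Nf)."""
    return 1.0 / (8.0 * n_males) + 1.0 / (8.0 * n_females)

# 50 unrelated breeders (25 pairs) give exactly the 1% ILAR ceiling.
print(delta_f_pair_mated(50))       # 0.01
# A harem-mated colony of 25 males and 250 females drifts more slowly.
print(delta_f_polygamous(25, 250))  # 0.0055
```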
Genetic drift is the change of allele frequencies as a result of sampling populations of limited size. A colony of limited population size also undergoes genetic drift and therefore differs from other colonies of the same origin (Kacew and Festing, 1996). Genetic drift occurs when the frequencies of alleles in a population change due to chance, i.e., sampling, rather than by natural or artificial selection. The "founder effect" is an extreme form of genetic drift, where gene frequency in a small founding subset of a population is different from that in the larger population from which it was derived. For commercially available outbred strains, genetic drift and inbreeding are likely greatest in the generations with small effective population size, at which Caesarean derivations were performed to establish each outbred strain and each outbred substrain. In contrast, a large number of breeders (typically several hundred to a thousand) are selected for continuing the line in most generations of breeding in most outbred mouse and rat strains. During these generations (with large effective population sizes), if mating is random, the theoretical rate of inbreeding is likely to be quite low, in the 0.05% to 0.5% range per generation. Nevertheless, genetic quality control data comparing outbred strain subpopulations showed evidence for considerable genetic drift between colonies or subpopulations. For example, Major Histocompatibility Complex RT1 haplotypes showed considerable genetic drift between colonies of Crl: CD® (SD) BR rats (Rodent Genetics and Genetic Quality Control for Inbred and F1 Hybrid Strains, Part II, Winter 1992, Table 8: http://www.criver.com/techdocs/rodent2.html).
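The sampling process behind drift can be made concrete with a minimal Wright-Fisher sketch (our own illustration, not taken from the sources cited above): each generation, the next allele frequency is obtained by binomially sampling 2N gene copies from the current frequency, so small colonies wander much further from their founding frequency than large ones.

```python
import random

def wright_fisher(p0: float, n_diploid: int, generations: int, seed: int = 1) -> float:
    """Simulate neutral drift of one allele: each generation draws
    2N gene copies binomially from the current frequency p."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * n_diploid))
        p = copies / (2 * n_diploid)
    return p

# After 50 generations, a small colony (N = 20) has drifted far from
# its founding frequency of 0.5, while a large one (N = 500) has not.
print(wright_fisher(0.5, 20, 50))
print(wright_fisher(0.5, 500, 50))
```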
DNA fingerprinting analysis showed that the diversity within outbred rat strains was much less than that found among ten commonly available inbred rat strains (Festing 1995). Genetic and biochemical marker typings in 63 rat inbred laboratory strains and 214 substrains showed an average polymorphism of 53% between strains, with wild-derived Brown Norway (BN) strain rats showing the greatest genetic divergence (Canzian 1997). While much more diverse than within any inbred strain, the genetic diversity within outbred rat strains is far lower than that found between commonly available inbred strains and more closely represents the level of diversity found in island populations (Festing 1995). For many traits, these commercial outbred strains show much lower variability than that found in genetically heterogeneous populations, such as an F2 cross between inbred strains. The limited genetic diversity of common commercial outbreds, such as the CD rat, should not be surprising, given its
extremely narrow genetic base, early inbreeding to purge deleterious recessive genes, and subsequent history of selection mainly for large litter size and vigor.
**Migration/Outbreeding:**
The amount of genetic variability in a strain or population also depends on the frequency of migration or outbreeding from other populations and the difference in gene frequencies between such populations (Pirchner 1969; Falconer 1989).
**Genetic Monitoring:**
Genetic monitoring of outbred strains can be used to confirm a strain's genetic integrity. The main purpose of genetic quality control is to preserve isogenicity (http://www.criver.com/techdocs/rodent1.html). The loss of isogenicity, i.e., substrain or subline divergence, has three main causes: mutation, drift in residual heterozygosity, and genetic contamination caused by an unintended outcross. Genetic drift resulting from mutation or residual heterozygosity is difficult to detect and control and has minimal impact on most research. The greatest cause of subline divergence affecting the usefulness of inbred strains is genetic contamination. Genetic quality control procedures are aimed at preventing and detecting genetic contamination by strict colony management and routine genetic monitoring (http://www.criver.com/techdocs/rodent1.html).
**Genetic and Environmental Sources of Variation:**
Selection has defined the available outbred strains of mice and rats. While some of this genetic divergence between outbred line substrains is likely due to genetic drift, some of the divergence between strains and substrains is also very likely to be due to selection. Early in their development, outbred strains (including Wistar, SD, and Long Evans rats) were selected, at least in part, for docility and increased reproductive performance (Gray 1977). For economic and productivity reasons, most outbred strains continued to be selected by commercial breeders for large litter size and vigor. For example, following caesarean derivation from the SD line, CRL selected the CD rat strain from 1950 until 1991, i.e., for approximately 80-100 generations, mainly for large litter size in their 2nd to 5th litters and to a lesser degree for increased vigor (Charlie Parady and Patricia Mirley, CRL, personal communication). Following selection of large litters at birth, litters were reduced to 13 pups per litter, and vigorous pups from larger litters were selected at weaning as breeders for the next generation of the line. While pedigrees were not maintained, selected individuals with different birth dates were randomly mated to avoid inbreeding. CRL used these selection criteria and this mating system within each subpopulation of outbred mice and rats. While much of the selection pressure was for large litter size, the limited selection for vigor following rearing in large litters (mice only have ten teats) may have also resulted in some selection for increased lactational yield and body weight.
ICR Swiss outbred mice were also rigidly selected for high productivity and
growth rate by Dr. T.S. Hauschka at the Institute for Cancer Research (www.taconic.com/addinfo/icrorigin.htm). Following Caesarean derivation from ICR outbreds in 1959, the CD-1 mouse was again selected mainly for large litter size, with some selection for vigor through 1991 (Charlie Parady and Patricia Mirley, CRL, personal communication). The result of these selective breeding and random mating programs has been large, vigorous, highly prolific, high-lactating outbred mouse strains that have been widely used as animal models in biomedical research.
**Results of Controlled Selection Experiments:**
Unfortunately, unselected controls were not maintained during the many generations of selection that defined these commercial outbred laboratory animal strains. Nevertheless, several selection experiments have been conducted by academic researchers that did maintain appropriate, randomly-selected control lines or divergently-selected control lines. These selection experiments in outbred stocks showed dramatic responses to selection for large litter size, growth rate, and litter weight gain (Bradford 1968; Bradford 1969; Land and Falconer 1969; Eisen et al. 1970; Land 1970; Eisen 1972; Land et al. 1974; Eisen 1975; Bradford 1979; Eisen and Durrant 1980). Long-term selection for large litter size results in increased litter size, with limited effects on body weight (Bradford 1968). In contrast, selection for increased growth rate increases growth rate and mature weight but decreases longevity (Eklund and Bradford 1977). Many of the reproductive endpoints of interest, including puberty, litter size, and cyclicity, are threshold traits, invoking the need to consider threshold and scale effects (Falconer 1989).
Selection for large litter size increases ovulation rate and embryo-fetal survival (Bradford 1968; Bradford et al. 1979; Durrant et al. 1980; Eisen and Durrant 1980; Spearow and Bradford 1983; Spearow 1985; Pomp et al. 1988). Selection for growth rate also increases ovulation rate, but through different physiological mechanisms than selection for large litter size (Durrant et al. 1980; Eisen and Durrant 1980; Spearow and Bradford 1983; Spearow 1985; Pomp et al. 1988). While the physiological genetic mechanisms by which genetic selection for increased prolificacy acts in rodents are not fully understood, they involve changes in the regulation of the hypothalamic-pituitary-gonadal axis, serum gonadotropin levels, and follicle populations, as well as changes in sensitivity to gonadotropins, estrogens and estrogen negative feedback, the induction of gonadal LH receptors, the induction of gonadal steroidogenesis, the induction of follicular growth, and follicular atresia (Bradford et al. 1979; Durrant et al. 1980; Spearow and Bradford 1983; Spearow 1985; Spearow 1986; Pomp et al. 1988; Lubritz et al., 1991). In essence, selection for increased litter size has dramatic effects on the endocrine mechanisms regulating reproductive endocrine function and development traits.
Comparisons of reproductive endocrine traits between inbred strains, congenic substrains, and recombinant inbred lines clearly show major genetic differences in these traits between strains of mice, further confirming that these reproductive endocrine traits have a genetic basis. This includes evidence for significant to highly
significant differences between strains of mice in: hormone-induced ovulation rate (Spearow 1988a; Spearow 1988b; Spearow and Barkley 1999), the hormonal induction of follicle maturation, hormonal control of follicular atresia and follicle number (Spearow et al., 1991), ovarian steroidogenesis and aromatase activity, estrogen-induced immature uterotrophic weight (Griffith et al. 1997; Roper et al. 1999), estrogen-induced uterine eosinophil and macrophage numbers (Griffith et al. 1997; Roper et al. 1999), and estrogen-induced susceptibility to vaginal candida infection (Parmar et al. 2003). Strains of inbred mice and rats also differ in normative testes and seminal vesicle weights (Zidek et al., 1998; Zidek et al., 1999).
SD and ACI strain rats also differ dramatically in the incidence of mammary cancers in response to DES (Shellabarger et al. 1978), as reviewed by Festing (1987). Whereas DES failed to cause mammary cancer in any SD rat (0/33), DES increased the incidence of mammary cancers from 0% to 53% in ACI strain rats. In contrast, in response to atrazine, female SD rats developed mammary cancer while F-344 rats did not, apparently due to a disruption of ovarian function leading to persistent estrus and increased E2 in SD but not in F-344 rats (Wetzel et al. 1994; Stevens et al. 1999).
Additional evidence for genetic variation in reproductive endocrine function has also been demonstrated in Quantitative Trait Loci (QTL) linkage studies that have mapped genes or QTL controlling reproductive endocrine traits to specific chromosomal regions. Genes controlling the increased natural ovulation rate of large-litter size selected Quackenbush-Swiss strain mice over that of C57BL/6J strain mice map to regions of chromosome (Chr) 2 and 4 (Kirkpatrick et al., 1998). These regions of Chr 2 and 4 overlap with loci controlling major strain differences in hormone-induced ovulation rate and ovarian steroidogenesis. Markers on Chr 13 are significantly associated with strain differences in testes weight in the mouse BXD recombinant inbred strain set (Zidek et al., 1998).
Several genes controlling E2-induced uterine hypertrophy and/or E2-induced uterine leukocyte responses have also been mapped to specific chromosomal regions (Roper et al. 1999). An interacting genetic factor on Chr 10 controls E2-induced uterine weight as well as E2-induced leukocyte responses (Roper et al. 1999). These and other studies in genetically-defined inbred strains of mice clearly confirm that differences in reproductive endocrine traits and sensitivity/susceptibility to estrogenic agents found among selected strains have a genetic basis.
Due to genetic variation in reproductive development and function, selection can also have a major effect on reproductive function in other mammalian species. Lines of sheep selected for large litter size increased the frequency of alleles with major effects on reproductive development, function, and ovulation rate (Bindon 1984; McNatty et al. 1985; McNatty et al., 1995). QTL linkage mapping and positional cloning studies showed that a Chr 6 mutation (FecB) in the intracellular kinase signaling domain of the bone morphogenetic protein IB (BMP-IB) receptor (also known as ALK-6), which binds members of the transforming growth factor-beta (TGF-beta) superfamily, has a major
effect on reproductive function, development, ovulation rate, and litter size (Wilson et al. 2001).
**Sources of Variation in Traits:**
The phenotype of an individual can be considered as the sum of its genotypic value (G), the environmental effects (E), and the genotype x environment interaction, i.e., phenotype = G + E + GxE interaction (Falconer 1989; Lynch and Walsh 1998). Geneticists normally consider each of these factors as variance components in analyzing trait phenotypes. Potential endocrine-disrupting chemicals represent environmental or nongenetic sources of variation affecting a given trait. The purpose of the EDSP is to determine if a given chemical, i.e., environmental factor, has detrimental effects on reproductive development and functional phenotypes. However, it is not just the occurrence of genetic differences in reproductive phenotypes, but also the potential for genotypes to interact in a nonparallel manner with environmental factors that is of concern in designing EDSP assays. If G and GxE interactions are not important, any strain of animals could be used for testing chemicals for endocrine disruptor activity. However, if the genetic variance and especially if the genetic x environmental variance are significant, care must be taken to avoid screening chemicals with resistant strains, since the effects on sensitive animal strains would be underestimated (Narotsky et al. 2001; Spearow and Barkley 2001). The ultimate question is, of course, the effects, if any, on humans and other species of concern. The best approach would be the use of the most relevant animal model, if known; the default approach is the use of the most sensitive animal model.
The genetic variance in a trait can be further broken down into its components: the additive genetic variance, the dominance genetic variance, and the epistasis genetic variance, i.e., Total Genetic = Additive + Dominance + Epistasis (Falconer 1989; Lynch and Walsh 1998). In biological terms, the additive genetic variance is the variation in the average additive effects of alleles. Alleles at additive-acting loci behave in a stepwise or additive manner, with each + allele increasing the phenotype in an additive manner. Conventional mass selection for a trait acts mainly on additive genetic variance to increase the frequency of alleles that, on average, have a beneficial effect on the trait(s) under selection. An animal's breeding value is the sum of the additive effects of its genes. Dominance is defined as a nonadditive interaction between alleles at a given locus. While loci that show dominance for a trait can also have an additive genetic component, they show a deviation, i.e., the dominance deviation, from the regression line between the phenotypic means of the low homozygote and the high homozygote. For example, consider a trait showing complete dominance, where the phenotypes of individuals are aa = 0, Aa = 2, and AA = 2. While the average of the two homozygotes is 1, the heterozygote (Aa) has a mean of 2 and thus shows a dominance deviation. Epistasis describes the nonadditivity of effects between loci.
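Falconer's (1989) notation makes the dominance-deviation example above concrete: with homozygote means of 0 and 2 and a midpoint of 1, the additive effect is a = 1 and the heterozygote's dominance deviation is d = 1, so d/a = 1 indicates complete dominance. A minimal sketch of that arithmetic (the function name is ours):

```python
def falconer_effects(val_aa: float, val_Aa: float, val_AA: float) -> tuple[float, float]:
    """Return (a, d): a is half the difference between the homozygote
    means; d is the heterozygote's deviation from the homozygote
    midpoint (d = 0 for a purely additive locus)."""
    midpoint = (val_AA + val_aa) / 2.0
    a = (val_AA - val_aa) / 2.0
    d = val_Aa - midpoint
    return a, d

# The complete-dominance example from the text: aa = 0, Aa = 2, AA = 2.
a, d = falconer_effects(0.0, 2.0, 2.0)
print(a, d, d / a)  # 1.0 1.0 1.0 -> d/a = 1 means complete dominance
```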
The effects of selection, inbreeding, and crossing are very different on traits controlled by additive versus dominance genetic variation. Selection can utilize additive
genetic variance but not the dominance genetic variance in a population to improve a trait. Traits controlled by additively acting loci do not show inbreeding depression, since loci fix for - versus + acting alleles in proportion to the initial gene frequencies. In contrast, trait values at loci showing dominance generally decline with inbreeding due to the fixation of less desirable recessive alleles. Finally, F1 crosses generally show heterosis or hybrid vigor (i.e., increased phenotypic performance) in traits showing dominant gene action but not in traits controlled by additive gene action, unless there is also complementarity among component traits.
**Relative Importance of Historic Inbreeding Versus Selection for High Prolificacy in Defining Currently Available Strains:**
In full sib (brother-sister) mating programs during the development of inbred strains, the largest evolutionary force is genetic drift and random fixation of alleles. This is especially true of reproductive traits with medium to low heritability. There will be an inbreeding depression for phenotypes controlled by dominantly acting loci and perhaps for some loci controlled by dominant epistatic interactions (Lynch and Walsh, 1998). In contrast, on average, inbreeding without selection will not change the phenotypes controlled by additively acting genes since as many + as - acting alleles are likely to fix in a given inbred strain. This is particularly true for traits controlled by a large number of additive loci. Nevertheless, for traits controlled by a very small number of additive loci, there is likely substantial genetic drift in the trait, depending on whether inbreeding fixes a given inbred strain for a + or - acting allele. Without selection, the expectation is for no net change in the number of + or - acting alleles affecting a trait.
In contrast, long-term selection mainly for a single trait will dramatically increase the frequency of + alleles, even for medium to low heritability traits like litter size (Bradford 1968; Eklund and Bradford 1977). Furthermore, since commercial outbred populations were selected for large litter size in large populations, it is likely that any beneficial mutations which occurred in the population would also be utilized to increase prolificacy even further.
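The expected per-generation gain under such mass selection follows the standard breeder's equation R = h²S (Falconer 1989). The sketch below is a hedged illustration only: the heritability and selection differential are hypothetical numbers chosen for a low-heritability trait like litter size, and the multi-generation extrapolation ignores the depletion of additive variance noted later in this section.

```python
def selection_response(h2: float, sel_diff: float) -> float:
    """Breeder's equation: R = h^2 * S, where S is the selection
    differential (mean of selected parents minus population mean)."""
    return h2 * sel_diff

# Hypothetical values: h^2 = 0.15 for litter size, selected parents
# averaging 1.0 pup above the population mean.
r = selection_response(0.15, 1.0)
print(r)        # 0.15 pups gained per generation
print(r * 80)   # naive 80-generation extrapolation, ignoring variance loss
```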
**Correlated Trait Responses:**
Correlated traits are generally due to the action of genes with pleiotropic effects on several traits or physiological processes (Falconer 1989). Comparison of strains selected for high fecundity with randomly selected control strains revealed correlated trait responses in several male reproductive development and function characters. In addition to changing reproductive function in females, selection for large litter size also increases the weights of testes, epididymides, and seminal vesicles (Eisen and Johnson 1981; Spearow et al. 1999; Spearow et al. 2001). Strains of mice and rats selected for large litter size are more resistant than unselected strains to disruption of testes weight, accessory gland weights, spermatogenesis, and steroidogenesis by estradiol or DES (Spearow et al. 1987; Inano et al. 1996; Spearow et al. 1999; Spearow et al. 2001). These observations suggest that one of the correlated responses to
selection for large litter size is resistance to endocrine disruption by estrogenic agents (Spearow et al. 1999; Spearow and Barkley 2001; Spearow et al. 2001).
Some of the genes with pleiotropic effects on related reproductive traits have been mapped, suggesting potential genetic mechanisms mediating correlated trait responses. Markers on Chr 8 showed a significant association with seminal vesicle mass and a suggestive association with litter size in HXB and BXH recombinant inbred strain sets derived from SHR and Brown Norway (BN) rat strains (Zidek et al. 1999). Since litter size was significantly associated with seminal vesicle mass, these data suggest that both of these traits are under the control of the same QTL or tightly linked QTL on rat Chr 8 (Zidek et al., 1999). Thus, this rat Chr 8 QTL may have pleiotropic effects on litter size and seminal vesicle mass, explaining, at least in part, how selection for large litter size also affects male reproductive developmental traits.
Considerable alarm was raised by the toxicology community when it was noted that certain outbred stocks, including CD rats, were showing excessive litter size, increased mature body weights, and decreased longevity (Pettersen et al. 1996) (CRL reference paper Vol 11, #1, 1999). When raised in the same environment, CRL:CD males were found to be significantly heavier than Taconic:SD males, which were heavier than Hsd:SD males (Klinger et al., 1996). While it is unknown whether the increased body weights of CD rats are a correlated response to selection for vigor or unintended selection for increased body weight, the study of Klinger et al. (1996) suggests the substrain differences are genetic. Increased body weight is correlated with decreased longevity (Eklund and Bradford 1977), and a higher proportion of CD rats failed to survive to the age required in two-year carcinogenicity studies (Pettersen et al., 1996). Thus, large commercial suppliers such as CRL have reconsidered and changed their long-term selection criteria and mating system (CRL reference paper Vol 11, #1, 1999).
**Genetic Standardization of Genetic Variability in Outbred Strains:**
Due to the observed increased litter size and body weight, as well as decreased longevity, CRL has recently initiated an effort to "standardize" certain outbred strains such as the CD® (SD) IGS BR rat, Wistar Han IGS rat, and CD-1 (CD-1®(ICR)BR) mouse. For example, in the early 1990s, 100 pairs of CD rat breeders were selected from each of eight diverse CRL CD rat colonies worldwide and rederived in isolators to form the CD IGS reference foundation colony in Wilmington, MA. Selection criteria were relaxed, and this CD IGS (international genetic standard) rat reference population was then managed with procedures to minimize genetic drift to establish each CD IGS colony worldwide. CRL plans to further minimize genetic drift by migrating additional breeders in both directions between the IGS reference population and isolated colonies over time (CRL reference paper Vol 11, #1, 1999). By also using a pedigreed mating system designed to minimize inbreeding and by improving genetic quality control monitoring, CRL is expected to dramatically reduce genetic drift and variation between outbred IGS colonies.
Genetic variation within “narrow genetic base outbred strains”: While CRL refers to this as the International Genetic Standard (IGS) program and has the aim of minimizing genetic drift and producing a "standardized" outbred rat, it is the variability of the population that is standardized, not the individual Crl:CD®(SD)IGS BR rat (abbreviated as CD IGS). While much less diverse than outbreds of essentially any other mammalian species that have not undergone an extreme genetic bottleneck followed by long-term selection mainly for increased prolificacy, the SD strain and the SD-derived CD IGS rat strain are segregating at many loci. Even with a narrow genetic base, it is impossible to predict or repeat the genotype of any given CD IGS rat or, for that matter, of any individual from any other outbred strain. Selection for increased prolificacy is likely to have also increased the frequency of, or fixed, alleles conferring resistance to endocrine disruption by some hormonally active agents, but the genes conferring genetic susceptibility to diverse hormonally-active compounds have not been identified. Thus, the use of such outbred animals makes it impossible to replicate the susceptibility genotypes, and therefore the conditions, used for testing any toxicant x dose combination in the EDSP. This makes replication of experiments involving outbred animal models problematic, regardless of whether replicates are conducted by the same or different laboratories. Such unpredictable genetic variation within even narrow genetic base outbred strains will greatly complicate and limit efforts to use conventional reproductive toxicological, genomic bioinformatic, microarray, and proteomic approaches to identify chemicals with endocrine-disrupting activities, as well as future efforts to characterize their mechanisms of action and the loci controlling genetic susceptibility. Furthermore, the within-strain genetic variation common to outbred strains is also likely to be a major component of "between litter" effects.
While the extremely narrow genetic background, early inbreeding, and long-term history of selection, mainly for high prolificacy and vigor, have resulted in highly robust and productive CD IGS rats, it is highly doubtful that these animals are representative of any natural mammalian outbred population. The fact that Robert Dawley backcrossed daughters of a hybrid male back to the same male for seven consecutive generations indicates that well over 90% of the gene pool (an estimated 99.6% of the genes, minus deleterious alleles purged by selection) of the SD strain came from the single foundation male rat. Thus, the SD strain and any substrains derived from this closed population represent an exceedingly narrow genetic base. The seven consecutive generations of backcrossing daughters to the same foundation male rat are also likely to have purged most deleterious recessive genes, including those affecting reproductive development and function. While crossing inbred sire-daughter lines generated from the single foundation male is likely to have provided some heterozygosity at certain loci in the SD population, most of the genetic diversity in this strain had to come from the single hybrid foundation male that was backcrossed for seven consecutive generations to his daughters.
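The 99.6% figure follows directly from the backcrossing arithmetic: after the initial cross and each of n backcrosses to the same male, the non-recurrent fraction of the genome halves, leaving the foundation male with an expected 1 - (1/2)^(n+1) of the gene pool. A minimal sketch of that calculation:

```python
def recurrent_parent_fraction(n_backcrosses: int) -> float:
    """Expected genome fraction contributed by a recurrent parent after
    an initial cross plus n backcrosses of offspring back to it."""
    return 1.0 - 0.5 ** (n_backcrosses + 1)

# Seven consecutive backcrosses of daughters to the foundation male:
print(recurrent_parent_fraction(7))  # 0.99609375, i.e., ~99.6%
```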
Just as important, the IGS program does very little to eliminate the changes in gene frequency affecting reproductive and correlated traits brought about by over 80 generations of selection for increased litter size. Since selection of the CD rat was conducted in large, narrow genetic base outbred populations, any additional mutations
improving litter size would also be selected for. It has been argued that outbred laboratory animal populations show phenotypes more representative of wild outbred populations and more phenotypic diversity than inbred strains. This may be true for unselected traits in a fully outbred laboratory animal strain, but it is clearly not true regarding traits (and their correlated traits) for which outbreds have undergone long-term selection (Eklund and Bradford 1977). Many long-term selected outbred strains show phenotypic means and distributions for traits under selection and their correlated traits that are well beyond the normal distribution of values from an unselected control population. While long-term selection for large litter size increases the mean litter size, it also decreases the additive genetic variance for this trait (Eklund and Bradford 1977) and is thus likely to decrease the additive genetic variance in traits correlated with high fecundity. Because selection for large litter size has been relaxed since the formation of the CD IGS rat population, this population may return to its original litter sizes, but not necessarily to the original genotype. However, if the restoration of original litter size resulted from changing the genes which control litter size, these genes may also control responses of other endocrine-sensitive endpoints (i.e., a pleiotropic effect, whereby a single gene controls a number of parameters/responses). Therefore, recovery of original litter size may also change the sensitivity of the strain to the pleiotropically related endpoints (e.g., FSH or LH levels, number of eggs ovulated, responsivity to E2 or estrogen-like compounds, etc.) back to where it was, whether or not the original genotype was recovered.
Data from Spearow’s laboratory show dramatic differences between mouse strains in susceptibility to endocrine disruption by estrogenic agents, and show that CD-1 strain mice selected for high fecundity are highly resistant to E2. This includes approximately 16- to 100-fold differences between strains of mice in susceptibility to the disruption of testes weight, spermatogenesis, epididymal sperm counts, testicular sulfotransferase activity, and gestational fetal losses (Spearow et al. 1999; Spearow et al. 2001). Data from other laboratories also show that the SD rat and the SD-derived CD rat are less sensitive than other strains to estrogenic agents, including estrogen and the xenoestrogen bisphenol A (BPA) (Steinmetz et al. 1997; Steinmetz et al. 1998; Long et al. 2000). SD rats are also much more resistant than Wistar/MS or Fischer 344 rats to the reduction in testis and seminal vesicle weights by DES (Inano et al. 1996), and are much more resistant than several other strains, including F344, to estrogen- and DES-induced pituitary tumors (Gregg et al. 1996; Wendell et al. 2000). Thus, the fact that outbred strains such as the SD-derived CD rat and the Swiss- and ICR-derived CD-1 mouse have previously undergone long-term selection for increased fecundity and vigor is of special concern in EDSP assays, owing to the correlated-trait response of increased resistance to endocrine disruption by estrogenic agents.
However, resistance or sensitivity to endocrine disruptors is not uniform across test chemicals and endpoints. For example, SD rats are more sensitive than F344 rats to the uterotrophic effects, including increased uterine weight and epithelial cell height, of endocrine-active compounds such as tamoxifen (Bailey and Nephew 2002). There are many other examples of strain differences in endpoint- and test substance-specific responses to endocrine-active compounds (see Tables 2 and 3 for a range of responses in different rat strains).
Even though the "outbred" strains come from a relatively narrow genetic base and may not represent the full range of sensitive rat genotypes, outbred strains show a wider range of responses, owing to genetic heterogeneity, than inbred strains. The choice between an outbred and an inbred strain for use in these assays therefore depends on whether one can select the most sensitive inbred strain for an assay, given all the confounders discussed above and below, or whether the broader response distribution of an outbred strain provides a significant advantage at the sensitive end of the curve.
**Genetic Variability in Toxicological Assays:**
Since all members of a given isogenic strain (inbred strains and F1 hybrids) are essentially genetically identical, the dramatically reduced genetic variability in isogenic strains enhances the reproducibility and comparability of data generated in these stocks (Festing 1979; Festing 1993). The crucial characteristic common to both inbred and F1 hybrid strains is isogenicity, i.e., the fact that all individuals of an authentic strain are genotypically the same and therefore phenotypically more uniform than individuals of outbred stocks (http://www.criver.com/techdocs/rodent1.html). Isogenicity of such inbred strains and F1 crosses leads not only to much greater genetic and phenotypic uniformity but also to high, long-term genetic stability. Thus, it has been suggested (Festing 1995) that toxicologists should treat genetics like every other variable and control it by utilizing isogenic strains (F1 hybrids heterozygous at every locus). However, in any study requiring generation of offspring, such as the two-generation reproductive toxicity study, the advantage of utilizing isogenic strains is lost, since use of F1 parents will produce F2 offspring that are segregating at many loci with differing genotypes and phenotypes.
The view is held by some researchers (Spearow et al. 1999; Spearow et al. 2001) that commercial outbred strains are resistant to endocrine disruption by estrogenic agents at some endpoints, most likely as a correlated response to long-term selection for high prolificacy. In any case, toxicity testing in outbreds amounts to testing the effects of toxicants on a sample of the outbred strain's genotypes. One argument is that the segregation, in outbred strains, of any genes controlling traits that have not been fixed by early inbreeding or long-term selection will result in genetic noise and increased phenotypic variability. Since toxicity testing usually involves the calculation of dose-response curves, the use of phenotypically variable, nonisogenic stocks reduces the precision with which such curves can be estimated, and therefore the sensitivity of the assay (Festing, 1979). The counterargument is that, although the greater variability of an outbred strain widens confidence intervals and reduces precision, that variability may still be preferable to a very precise isogenic strain that happens not to be the most sensitive one for the specific chemical or the specific endpoints.
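To make the precision side of this argument concrete, the following is a minimal simulation sketch in Python (the doses, slope, and the two residual standard deviations standing in for "isogenic-like" versus "outbred-like" variability are invented for illustration, not taken from any cited study). It estimates the average 95% confidence half-width of an ordinary least-squares dose-response slope at two levels of animal-to-animal variance; the wider interval for the more variable stock is the loss of precision Festing describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_ci_halfwidth(doses, n_per_dose, slope, sigma, reps=2000):
    """Average 95% CI half-width (normal approximation) for the OLS
    slope of a linear dose-response with residual SD `sigma`."""
    x = np.repeat(doses, n_per_dose).astype(float)
    sxx = np.sum((x - x.mean()) ** 2)
    widths = []
    for _ in range(reps):
        y = 10.0 + slope * x + rng.normal(0.0, sigma, size=x.size)
        b = np.sum((x - x.mean()) * (y - y.mean())) / sxx        # OLS slope
        sse = np.sum((y - y.mean() - b * (x - x.mean())) ** 2)   # residual SS
        widths.append(1.96 * np.sqrt(sse / (x.size - 2) / sxx))  # 1.96 * SE(b)
    return float(np.mean(widths))

doses = [0.0, 25.0, 50.0, 100.0]   # hypothetical mg/kg/day dose levels
print("low variance (isogenic-like): ", mean_ci_halfwidth(doses, 10, 0.05, 1.0))
print("high variance (outbred-like):", mean_ci_halfwidth(doses, 10, 0.05, 3.0))
```

With three times the residual SD, the confidence interval on the slope is roughly three times as wide, so a correspondingly larger true effect is needed before the assay can detect it.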
**Multiple Strain Assays:**
There is a risk in using a single strain in toxicological safety testing, particularly when that strain is known to be highly resistant to one or more classes of chemicals and/or endpoints to be tested in the EDSP. Whenever there is considerable genetic variation in a susceptibility trait, a single isogenic or outbred strain may be resistant to the compound being tested or the endpoint being assessed. If so, a chemical that is toxic to other genotypes may be judged to be relatively safe (Festing 1993). As Narotsky et al. (2001) concluded, "Thus, routine toxicity tests that use only a single strain may be unreliable since the outcome may hinge on choice of strain." This is particularly true for traits showing major strain differences in susceptibility to endocrine disruption by estrogenic agents. As an alternative, Michael Festing has argued for two decades that toxicological testing with several divergent isogenic strains from diverse genetic backgrounds, arranged in a factorial experimental design, better ensures that the test animals are not all genetically resistant to the compound under test (Festing 1979; Festing 1987; Festing 1993; Festing 1995; Festing et al. 2001; Festing and Altman 2002).
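A sketch of the factorial layout Festing advocates is given below (the strain names, doses, effect sizes, and the statsmodels-based analysis are ours, for illustration only). The design crosses several isogenic strains with several dose levels; a two-way ANOVA then separates the dose effect, the strain effect, and, critically, the strain × dose interaction that flags differential susceptibility, so a single resistant strain cannot silently mask a real effect.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)

# Hypothetical factorial design: 3 divergent isogenic strains x 3 dose
# levels, 8 animals per cell. Strain C is simulated as fully resistant.
rows = []
for strain in ["A", "B", "C"]:
    for dose in [0, 10, 100]:
        mu = 100.0 if strain == "C" else 100.0 - 0.2 * dose
        for y in rng.normal(mu, 3.0, 8):
            rows.append({"strain": strain, "dose": dose, "response": y})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects plus the strain x dose interaction.
model = ols("response ~ C(strain) * C(dose)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Testing strain C alone would show no dose effect at all; in the factorial analysis the interaction term reveals that susceptibility itself differs among genotypes.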
An advantage of testing with multiple strains is that identification of strain differences provides an additional resource for determining the mechanisms of toxicity (Narotsky et al. 2001). Once parental strains are shown to differ, congenic inbred, consomic inbred, and recombinant inbred strains can be compared, and strain distribution profile phenotypes used to map and characterize genes controlling susceptibility. Congenic inbred, consomic inbred, and recombinant inbred strains are highly reproducible strains with defined genotypes (Silver 1995). Once a strain set has been genotyped at molecular markers along each chromosome, genes controlling traits that differ between parental strains can be mapped to specific chromosomal regions by scoring biochemical or physiological phenotypes across the strain set (Matin et al. 1999; Cowley et al. 2001; Liang et al. 2002), with appropriate consomic strains used where available. Such congenic inbred, consomic inbred, and recombinant inbred strain resources, together with available gene-mapping software, also allow simple as well as complex multigenic physiological, disease, and toxic-susceptibility traits to be broken down so that individual susceptibility genes can be identified (Manly et al. 2001; Williams et al. 2001). Thus, the use of several highly divergent, genetically defined inbred parental strains in endocrine disruptor assays could greatly enhance the possibility of identifying genes controlling susceptibility to endocrine disruption.
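The strain distribution pattern logic mentioned above can be sketched in a few lines (the recombinant inbred panel, marker names, genotypes, and phenotype below are all invented for illustration). Each marker's pattern of parental alleles across the panel is compared with the pattern of susceptible versus resistant strains; a marker whose pattern matches the phenotype's flags its chromosomal region as a candidate susceptibility locus.

```python
# Hypothetical recombinant inbred (RI) panel: each marker records which
# parental allele (0 or 1) each of 8 RI strains carries; the phenotype
# records whether each strain is susceptible (1) or resistant (0).
markers = {
    "D1M1": [0, 0, 1, 1, 0, 1, 0, 1],
    "D1M2": [0, 1, 1, 1, 0, 1, 0, 0],
    "D5M7": [1, 0, 0, 1, 1, 0, 1, 0],
}
phenotype = [0, 1, 1, 1, 0, 1, 0, 0]   # susceptibility strain distribution

def concordance(sdp, pheno):
    """Fraction of RI strains where the marker pattern matches the
    phenotype pattern (or its complement; allele labels are arbitrary)."""
    match = sum(a == b for a, b in zip(sdp, pheno)) / len(pheno)
    return max(match, 1 - match)

for name, sdp in markers.items():
    print(name, concordance(sdp, phenotype))
# D1M2 matches perfectly here, flagging its region as a candidate locus.
```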
2.3.2 Species Selection for Endocrine Disruption Assays and Genetic Variability
Variability in reproductive parameters across strains of both rats and mice must be considered in the selection of appropriate strains/species. Historically, the strains/species most commonly selected for assays of endocrine disruption have been the SD rat and the CD-1 mouse. Since the objective of the EDSP study is to examine the effects of a multitude of chemicals on reproductive and developmental structural and functional toxicity, strain variation in developmental rates, as well as in other biochemical, endocrine, and signal transduction mechanisms, may be important in considering the range of susceptibilities to reproductive and developmental toxicity, as evidenced by effects on endocrine endpoints.
In selecting the appropriate species for EDSP assays, the rat has an advantage over the mouse due to its larger size, which allows easier analysis of serum hormone concentrations and certain other physiological endpoints. While considerably more genetic resources are available in the mouse, progress is currently being made in sequencing the genome of the Brown Norway rat strain. If recombinant inbred, congenic, and/or chromosome-specific consomic strains derived from the parental inbred strains to be used in the EDSP were available, the utility of the rat for such studies would be further improved.
At present, one of the biggest problems with species or strain selection is that only a small number of studies have examined genetic differences in susceptibility to endocrine disruption. Since most reproductive toxicology studies involve only a single stock of laboratory animals, we do not know whether the response to a given xenobiotic is under genetic control (Festing, 1987). Furthermore, an even smaller number of studies have compared genetically-defined isogenic strains.
The fecundity of a strain limits the number of offspring available following gestational or lactational exposures. Strains with low fecundity are not recommended in OPPTS reproductive and developmental test guidelines. Since SD rats have been bred for high fecundity and have the largest historical database, they have been used most frequently for regulatory reproductive toxicity studies. Nevertheless, provided the supplier maintains enough breeders to meet the animal needs of the EDSP, strains with moderate fecundity need not limit the number of animals available for conducting pubertal or adult exposures. Furthermore, even with gestational and lactational exposures, a strain with moderate fecundity would not limit an EDSP assay under current guidelines of retaining and examining one individual of each sex per litter at adulthood.
Another consideration in species and strain selection relates to the maintenance of early pregnancy, which in many species depends on gonadotropin support of corpus luteum (CL) progesterone production. In humans, the maintenance of pregnancy is dependent on LH modulation of CL progesterone production prior to implantation, and on hCG modulation of CL progesterone production following implantation (see discussion of Narotsky et al. 2001, Section 2.5.2). Human epidemiological evidence indicates that exposures to low-dose bromodichloromethane (BDCM, a common drinking water disinfection by-product) are associated with an increased incidence of pregnancy loss in humans (Waller et al. 1998). Following exposure to 75 mg/kg BDCM on days 6-10 of gestation, 62% of Hsd:F344 litters showed full litter resorption (Bielmeier et al. 2001). In contrast, 0% of the Hsd:SD litters showed full litter resorption in response to 75 or 100 mg/kg BDCM (Bielmeier et al. 2001). Thus, sensitive strains such as the F344 might be more appropriate for estimating the human risk of BDCM-induced abortion. While utilizing strains with low fecundity may reduce the number of animals available for the multitude of assessments required in reproductive toxicity testing guidelines, use of a potentially resistant strain of animals may underestimate the risk to sensitive genotypes.
The discussion that follows, covering assays under consideration for testing guidelines, a detailed comparison of strain differences in endocrine endpoints, and the sensitivity of these endpoints to endocrine-disrupting chemicals, provides a summary of potential problems with species/strain selection in reproductive toxicity studies. Specific examples of differential sensitivity of endocrine endpoints in different species and strains to many endocrine-disrupting chemicals are discussed in Section 3.0.
2.3.3 Confounders Affecting Comparisons of Reproductive Toxicity Data
The focus in this White paper is on intra-species and inter-species comparisons of responses to EACs. The critical papers are those in which the same laboratory evaluates two or more different strains/stocks (intra-species), or two or more different species, at the same time under the same laboratory conditions. Comparisons of intra- and inter-species differences in response to EACs performed at different times, in different laboratories, under different laboratory conditions are more difficult to interpret because of the following likely confounders to the determination that the strains/stocks/species, per se, differ in their responses:
**Same laboratory, different times:** Source of the animals, genetic drift, age of animals, status of the plastic cages and/or water bottles (new versus damaged), change in feed lot, bedding lot numbers, water supply, change in technical staff.
**Different laboratories:**
**A. Animals**
• Source/supplier (local closed colony, national/international commercial supplier, change in location of commercial supplier, etc.; the same strain from different suppliers will most likely be genetically different)
• Age, weight, and health status
**B. Husbandry**
• **Housing**: Group housing versus single housing impacts many endpoints in both sexes for rats and mice. When male mice are group housed, a “dominance hierarchy” is established, with one dominant, aggressive male and the remaining males as subordinates (Haemisch et al. 1994; McKinney and Pasley, 1973; Van Loo et al. 2001). The subordinates exhibit reduced circulating testosterone levels; reduced weights of the testes, epididymides, and accessory sex organs; reduced epididymal sperm numbers and motility (Koyama and Kamimura, 1999; 2000); and reduced testicular spermatid head counts. The subordinates also exhibit signs of increased stress, such as increased circulating stress hormones and adrenal gland weights,
altered nervous system and neuroendocrine functions (D’Arbe et al. 2002; Karolewicz and Paul, 2001), altered immune competence, and increased incidence of tumors (Bartolomucci et al. 2001; Grimm et al. 1996; Haserman et al. 1994). In many cases (incidence varies by mouse strain), the dominant male “barbers” the subordinates by removing their whiskers and large patches of fur (Strozik and Festing, 1981; Reinhardt and Militzer, 1979; Sarna et al. 2000; Long, 1972). Even phenotypic variance is affected by housing in C57BL/6J mice (Nagy et al. 2002). Therefore, housing conditions, if they vary from laboratory to laboratory, may be a major confounder and a major source of differences among study results from different laboratories. Group housing will also result in inter- and intra-cage variability and therefore intra-group variability, confounding detection of any treatment-related inter-group differences. Singly housed male mice provide a more uniform population in terms of the endpoints under consideration in this White paper, which are affected by group housing. Dominance hierarchy is also established in group-housed male and female rats (Becker and Flaherty, 1968; Westenbroek et al., 2003; Sharp et al. 2003). The group-housed subordinate rats exhibit effects on delta-opioid receptor function (Pohorecky et al. 1999), immune status (Stefanski 1998; 2000), endocrine status (Taylor and Costanzo, 1975; Popova and Naumenko, 1972), circadian rhythms (Greco et al., 1989), and behavioral and neuroendocrine parameters (Blanchard et al., 1993). The dominant rats display aggressive behavior (Taylor and Moore, 1975) and “barber” the subordinate rats (Bresnahan et al., 1983). The subordinate female rats also exhibit stress-like responses in group housing situations (Westenbroek et al. 2003; Sharp et al. 2003). Therefore, the same confounders may exist in studies in rats across laboratories if housing situations vary.
• **Caging:** Polycarbonate caging is transparent, autoclavable, and in common use. Recent evidence (Hunt et al. 2003; Koehler et al. 2003) indicates that crazed, cracked, or otherwise damaged polycarbonate caging, resulting from inappropriate and inadvertent use of a harsh detergent, releases BPA (a weak xenoestrogen), which can cause adverse reproductive effects (meiotic aneuploidy) in certain sensitive mouse strains (Hunt et al. 2003). Meiotic aneuploidies are associated with embryo and fetal mortality, as well as Down’s Syndrome (Trisomy 21) in humans. If such damaged cages are in use in a laboratory, its ability to detect effects of an intentionally introduced, potentially estrogenic chemical may be compromised by the inadvertent presence of BPA (Hunt et al. 2003). A laboratory can carefully check its polycarbonate cages and discard damaged ones, or switch to caging that is not made of polycarbonate (e.g., polypropylene). However, polypropylene caging is translucent, not transparent.
• **Water bottles**: Damaged polycarbonate water bottles can also release BPA (Hunt et al. 2003). A laboratory can carefully check and discard damaged water bottles or switch to glass water bottles.
• **Feed**: Laboratory animal feed can be categorized into semipurified, purified, certified, and standard open or closed formulas. Different lots of the same feed from the same vendor, as well as different feeds from different vendors, may differ in their relative content of nutrients, pesticides, other contaminants, and phytoestrogens: genistein, daidzein, formononetin, and biochanin A (the isoflavones), as well as coumestrol (the coumestans) from soybeans, flax, wheat, barley, corn, alfalfa, and oats commonly used in laboratory animal diets (Thigpen et al., 1999). Phytoestrogens and estrogenic mycotoxins from contaminating molds and fungi can bind to the estrogen receptor (although they are much weaker than the endogenous steroidal estrogens of humans and animals) and induce estrogen-like effects in animals, humans, and cells in culture. Phytoestrogens can also affect sex-specific behavior, gonadotropin function (Whitten et al. 1995), and postnatal development (Lewis et al. 2003). The predominant phytoestrogens in feed are genistein and daidzein from soybeans. The concentrations of these phytoestrogens vary significantly across rodent diets and within rodent diets by batch (Thigpen et al. 1989; 1999). The content is influenced by the use of different plant species, portion of the plant used, geographic location, time of harvest, and method of processing into pellets or meal (Eldridge and Kwolec, 1983).
• **Water**: Different laboratories use tap water, deionized water, deionized/distilled water, distilled water, reverse osmosis (RO) water, etc. Separate from palatability issues, different salts, ions, organic contaminants, pesticides, disinfection by-products (DBPs), etc., in different concentrations, are present in each type of water and will vary by season, by geographic location, by the water disinfectant used, by where the disinfectant is added in the water purification process, and by where the water is sampled in the supply lines. Certain DBPs have been shown to affect reproduction, especially those which are brominated, e.g., bromochloroacetic acid (Klinefelter et al. 2003), dibromoacetic acid, bromochloromethane, and bromodichloromethane (Narotsky et al. 1993; Bielmeier et al. 2001), as well as perchlorate, which inhibits the iodide symporter in the thyroid, reducing import of iodide into thyroid cells and thereby reducing synthesis, export, and circulating and local T₃ and T₄ levels, causing hypothyroidism in adults and children (Lamm and Doemland, 1999).
• **Temperature and relative humidity**: Are these parameters continuously recorded, monitored, and adjusted? Elevated temperatures can affect
spermatogenesis in males. Reduced temperatures can cause stress responses. Reduced relative humidity can affect pup survival.
• **Light cycle**: 12:12 or 14:10 can make a difference in circulating hormone levels for those hormones keyed to light/dark cycles (i.e., circadian rhythms; Greco et al. 1989).
• **Technician skills and experience**: The better trained and more experienced the technical staff, the better the dissections and examination of animal data, and the more uniform the evaluations (low inter-technician variability).
• **Source of the test material**: The source and purity of the test material, the amount and identity of impurities, storage stability, etc., will affect the results if the performing laboratory does not know the status of these parameters and does not adjust for them, if necessary.
**C. Study Design**
• Number of animals/group; number of dose groups; source, species/strain/stock, age, and weight of the animals
• Route of administration, identity of vehicle, doses (in mg/kg/day), dosing volume (in ml/kg), chemical concentration (in mg/ml); adjustment of dosing volume based on the most recent body weight: the same dose (in mg/kg), administered at a different concentration and dosing volume, can produce different results
• Frequency of body weights (to adjust dose and detect effects), clinical observations, feed consumption
• Duration of dosing period (age of animals at end of dosing)
• Technician experience and expertise in dosing and observation of clinical signs; are the technicians “blind to dose”?
• Time of necropsy (age of animals)
• Anesthesia/euthanasia method
• Endpoints evaluated
• Experience of technicians in necropsy, trimming, and weighing of organs (fresh/fixed weight, especially for small organs such as the pituitary, adrenal glands, thyroid gland, and ovaries; risk of organs drying out)
• Choice of fixative (testes in Bouin’s, other organs in buffered 10% formalin)
• Trimming of tissues, fixation, embedment (GMA plastic for testes, paraffin for other organs), section location and thickness, microscope slide preparation, staining (PAS/H for testes, hematoxylin and eosin for other organs), coverslipping
• Experience of pathologist reading slides
• Blood collection (location, volume, speed [hemolysis?])
• Analysis of hormones: validation method, intra- and inter-assay variability, inter-technician variability (single terminal blood sample versus longitudinal evaluation by serial tail vein sampling or from cannulated animals)
• Summarization and statistical analyses of study data
— Are the correct statistical tests used for the correct parameters?
— Does the laboratory maximize sensitivity and power (number of independent entities, “n” per group)?
— Does the laboratory have and use historical control data (to interpret concurrent control values in the context of historical control values, and to track the control values over time and between studies)?
2.4 Assays Under Consideration for the EDSP and Associated Endocrine Endpoints
The EPA is in the process of implementing the EDSP. To support this program, the EPA has contracted with Battelle (as prime and RTI International as subcontractor) to provide comprehensive toxicological and ecotoxicological screening, including chemical, analytical, statistical, and quality assurance/quality control support to assist the EPA in developing, standardizing, and validating a suite of *in vitro*, mammalian, and ecotoxicological screens and assays for identifying and characterizing endocrine effects through exposure to pesticides, industrial chemicals, and environmental contaminants. The studies conducted will be used to develop, standardize, and validate methods; prepare appropriate guidance documents for peer review of the methods; and develop technical guidance and test guidelines in support of the Office of Prevention, Pesticides and Toxic Substances regulatory programs.
*In vivo* mammalian assays under consideration by the EDSP include: (1) pubertal male and female, (2) *in utero*/lactational, and (3) adult male 15-day study, all for Tier 1, and (4) the two-generation mammalian reproductive assay (Tier 2). In addition, as an alternative to the mammalian two-generation assay, a one-generation assay has been described in the *EDSTAC Final Report* (1998). Table 1 summarizes the assays under consideration and their associated endpoints. A discussion of each proposed study follows Table 1.
| Endocrine Endpoints | One-Generation* | Pubertal Male and Female | In Utero Through Lactation | Adult Male | Two-Generation |
|---------------------|-----------------|--------------------------|----------------------------|------------|----------------|
| Period of exposure/testing | F0: 2 wks of prebreed exposure, 2 wks of mating, 3 wks of gestation, 3 wks of lactation, then selected F1 animals exposed for a minimum of 10 wks postwean | F1 offspring dosing from pnd 23-52/53 (males), pnd 22-42/43 (females) | F0 maternal dosing from gd 6 to pnd 21; F1 cohorts: (1) F1 females, 1/litter, for uterotrophic assay, sc injection pnd 21-24; (2) F1 females, 2/litter, for pubertal assessments, oral dosing pnd 21-42; (3) F1 males, 2/litter, for pubertal assessments, oral dosing pnd 21-70 | Dosing for 15 consecutive days in adult males | F0: 10 wks of prebreed exposure, 2 wks of mating, 3 wks of gestation, 3 wks of lactation, then selected F1 animals exposed for a minimum of 10 wks prebreed, through mating, gestation, and lactation, with termination at weaning (pnd 21) of F2 offspring |
| Litter size/gestational indices | yes | no | yes | no | yes |
| Pup viability | yes | no | yes | no | yes |
| Anogenital distance | yes, pnd 0, 21 and 95 (males) | no | yes, pnd 0, 21 and at necropsy (pnd 42 females, pnd 70 males) | no | yes (triggered in F2 in 1998 OPPTS Testing Guidelines) |
| Nipple/areolar retention | yes, pnd 11-13 (males and females), 21 and 95 (males) | no | yes, pnd 11-13 | no | yes (not in 1998 OPPTS Testing Guidelines) |
| Vaginal patency | yes | yes | yes | no | yes |
| Preputial separation | yes | yes | yes | no | yes |
| Hormone levels | yes, T₄/T₃ (TSH triggered) | TSH and T₄ | TSH, T₃, T₄, and E2 | Serum T, E2, DHT, follicle stimulating hormone (FSH), luteinizing hormone (LH), prolactin (PL), thyroid stimulating hormone (TSH), thyroxine (T₄), and triiodothyronine (T₃) | not in 1998 OPPTS Testing Guidelines; likely to include TSH and T₄ for the EDSP (personal communication with J. Kariya) |
| Urethral vaginal distance | no | no | yes | no | no |
| Estrous cyclicity | yes | yes | yes | no | yes |
| Uterine Weight | no | no | yes | no | no |
| Reproductive tract development | yes | yes | yes | no | yes |
| Testes descent | yes | no | no | no | yes |
| Behavioral evaluations (clinical observations) | yes | yes | yes | yes | yes |
| Andrology | yes | no | yes | yes | yes |
| Reproductive organ weights and histopathology | yes | yes (reproductive organs and thyroid) | yes | yes | yes |
*as described in the EDSTAC (1998) Final Report as an alternative to the mammalian two-generation reproductive toxicity study.
2.4.1 One-Generation Assay
Two assays were proposed by the EDSTAC (FR, 1998) as alternatives to the mammalian two-generation reproductive toxicity study: a one-generation assay and an “Alternative Mammalian Reproductive Test” (AMRT). The proposed one-generation assay is a shortened, scaled-down version of the OPPTS guideline for reproductive toxicity testing, and was designed to assess reproductive and developmental effects after *in utero*, lactational, and post-lactational exposure. During continuous exposure, the assay is designed to assess onset of puberty (vaginal opening [VO] and preputial separation [PPS]), estrous cyclicity, and andrological parameters. The period of exposure begins two weeks prior to mating and continues through weaning of the F1 offspring, followed by a ten-week postwean exposure in the F1 animals. The AMRT differs from the one-generation assay in that it does not include prebreed exposure, but it includes mating of the F1 offspring and evaluations of the F2 offspring, with no direct exposure of the F1 and F2 animals. Though the exposure periods and durations of the two alternative assays differ, the endpoints evaluated are essentially the same (and are similar to the endpoints evaluated in the two-generation assay).
2.4.2 Pubertal Male and Female Assays
The EDSTAC also recommended the use of an intact female 20-day pubertal assay to evaluate test materials for effects on the thyroid, hypothalamic-pituitary-gonadal (HPG) and hypothalamic-pituitary-thyroid (HPT) axes, aromatase and estrogens (and/or other test materials) that are only effective orally or after a dosing duration longer than that used in the uterotrophic assay (EDSTAC Report, 1998, Vol. 1, Chapter 5, p. 5-26). EDSTAC also recommended, as an alternate assay to be evaluated, the intact male 20-day thyroid/pubertal assay in rodents (EDSTAC, 1998, Vol. 1, Chapter 5, p. 5-30).
The EDSTAC discussion on the usefulness of the female pubertal assay and its endpoints included the following:
“The determination of the age at “puberty” in the female rat is an endpoint that has already gained acceptance in the toxicology community. Vaginal opening (VO) in the female is a required endpoint measured in the new EPA two-generation reproductive toxicity test guideline. In this regard, this assay would be easy to implement because these endpoints have been standardized and validated, and VO data are currently being collected under GLP conditions in most toxicology laboratories. In addition, VO data are reported in many recently published developmental and reproductive toxicity studies (i.e., see studies from Drs. R.E. Peterson’s, R. Chapin’s and L.E. Gray’s laboratories on dioxins, antiandrogens, and xenoestrogens).
In the pubertal female assay, oral dosing is initiated in weanling rats at 21 days of age (10 per group, selected for uniform body weights at weaning to reduce variance). The animals are dosed daily, 7 days a week, and examined daily for vaginal opening (one could also check for age at first estrous and onset of estrous cyclicity). Dosing continues until VO is attained in all females (typically two weeks after weaning, unless delayed). Age at VO is also determined in the female rat. Rats are dosed by gavage with xenobiotic and examined daily for VO. The advantage over the uterotrophic assay is that one test
detects both agonists and antagonists, it detects xenoestrogens like methoxychlor that are almost inactive via sc injection, it detects aromatase inhibitors, altered HPG function, and unusual chemicals like beta-sitosterol. In addition, at necropsy one should weigh the ovary (increased in size with aromatase inhibitors, but reduced with beta-sitosterol), save the thyroid for histopathology, take serum for T4, and measure TSH.
Exposure of weanling female rats to environmental estrogens can result in alterations of pubertal development (Ramirez and Sawyer, 1964). Exposure to a weakly estrogenic pesticide after weaning and through puberty induces pseudoprecocious puberty (accelerated vaginal opening without an effect on the onset of estrous cyclicity) after only a few days of exposure (Gray et al., 1989), or precocious puberty, with both accelerated VO and accelerated onset of estrous cyclicity. Pubertal alterations also occur in girls exposed to estrogen-containing creams or drugs, which induce pseudoprecocious puberty and alterations of bone development (Hannon et al., 1978).
Several examples of estrogenic chemicals affecting vaginal opening in rodents are known and include methoxychlor (Gray et al., 1989), nonylphenol, and octylphenol (Gray and Ostby, 1998). This endpoint appears to be almost as sensitive as the uterine weight bioassay, but the evaluation is easier to conduct and does not require that the animals be euthanized, so they can be used for additional evaluations. For example, treatment with methoxychlor at weaning (6 mg/kg/day or higher) caused pseudoprecocious puberty in female rats. Vaginal opening occurs from two to seven days earlier in treated animals than controls, in a dose-related fashion, but methoxychlor did not alter estrous cyclicity at the low dosage levels, indicating a direct estrogenic effect of methoxychlor on vaginal epithelial cell function without an effect on hypothalamic-pituitary maturation. Similar effects have been achieved with chlordecone, another weakly estrogenic pesticide, and octylphenol. Chlordecone also induces neurotoxic effects (hyperactivity to handling and tremors). In addition to estrogens, the age at vaginal opening and uterine growth can be affected by alteration of several other endocrine mechanisms, including alterations of the hypothalamic-pituitary-gonadal axis (Shaban and Terranova, 1986; Gonzalez et al., 1983). In rats, this event can also be induced by androgens (Salamon, 1938) and EGF (Nelson et al., 1991). In the last 20 years, there have been over 200 publications which demonstrate the broad utility of this assay to identify altered estrogen synthesis, ER action, growth hormone, prolactin, FSH or LH secretion, or CNS.” (EDSTAC Report, 1998, Vol. 1, Chapter 5, pp. 5-26 – 5-27)
Based on the EDSTAC’s recommendations, one of the assays that the EPA has proposed to include in the EDSP is a female pubertal assay (see FR Vol. 63, No. 248, pp. 71541-71568, December 28, 1998). This assay is the most comprehensive female-specific assay in the proposed Tier 1 battery, as it is capable of detecting substances that alter thyroid function, that are aromatase inhibitors, that are estrogenic or antiestrogenic, or that interfere with the HPG and HPT axes. Results from other, shorter assays and/or with the use of ip injection as the route of administration have also been reported (O’Connor et al. 1996, 1999).
The EPA is also pursuing the validation of a male pubertal assay as a potential alternative to other assays in the Tier 1 battery. The EDSTAC discussion on the usefulness of the male pubertal assay and its endpoints included the following:
“This assay detects androgens and antiandrogens *in vivo* in a single stage apical test. “Puberty” is measured in male rats by determining age at PPS (preputial separation). Animals are dosed by gavage beginning one week before puberty (which occurs at about 40 days of age) and PPS is measured. Androgens will accelerate and antiandrogens and
estrogens will delay PPS. The assay takes about 3 weeks, and allows for comprehensive assessment of the entire endocrine system in one study (10 per group, selected for uniform body weights to reduce variance). The animals are dosed daily, seven days a week, and examined daily for PPS. Dosing continues until 53 days of age; the males are then necropsied. The body, heart, thyroid, adrenal, testis, seminal vesicle plus coagulating glands (with fluid), ventral prostate, and levator ani plus bulbocavernosus muscles (as a unit) are weighed. The thyroid is retained for histopathology and serum is taken for T4, T3, and TSH. Testosterone, LH, prolactin, and dihydrotestosterone analyses are optional. These endpoints take several weeks to evaluate and are affected not only by estrogens but by environmental antiandrogens, drugs that affect the hypothalamic-pituitary axis (Hostetter and Piacsek, 1977; Ramaley and Phares, 1983), and by prenatal exposure to TCDD (Gray et al., 1995a; Bjerke and Peterson, 1994) or dioxin-like PCBs (Gray et al., 1995b). In contrast to these other mechanisms, only peripubertal estrogen administration accelerates this process in the female and delays it in the male. Preputial separation in the male rodent is easy to measure and this is not a terminal measure (Korenbrot et al., 1977).
Age and weight at puberty, reproductive organ weights, and serum hormone levels can also be measured. Delays in male puberty result from exposure to both estrogenic and antiandrogenic chemicals, including methoxychlor (Gray et al., 1989), vinclozolin (Anderson et al., 1995a), and p,p'-DDE (Kelce et al., 1995). Exposing weanling male rats to the antiandrogenic pesticides p,p'-DDE or vinclozolin delays pubertal development, as indicated by delayed preputial separation and increased body weight (because the animals are older and larger) at puberty. In contrast to the delays associated with exposure to estrogenic substances, antiandrogens do not inhibit food consumption or retard growth (Anderson et al., 1995b). Antiandrogens cause a delay in preputial separation and affect a number of endocrine and morphological parameters, including reduced seminal vesicle, ventral prostate, and epididymal weights. It is apparent that PPS is more sensitive than organ weights in this assay. In addition, responses of the HPG axis are variable. In studies of vinclozolin, increases in serum LH were a sensitive response to this antiandrogen, whereas serum LH is not increased in males exposed to p,p'-DDE during puberty (Kelce et al., 1997). Furthermore, a systematic review of the literature indicates that the sex accessory glands of the immature intact male rat are consistently more affected than those of the adult intact male rat.
In summary, preputial separation and sex accessory gland weights are sensitive endpoints. However, a delay in preputial separation is not pathognomonic for antiandrogens. Pubertal alterations result from chemicals that disrupt hypothalamic-pituitary function (Huhtaniemi et al., 1986) and, for this reason, additional *in vivo* and *in vitro* tests are needed to identify the mechanism of action responsible for the pubertal alterations. For example, alterations of prolactin, growth hormone, gonadotrophin (LH and FSH) secretion, or hypothalamic lesions alter the rate of pubertal maturation in weanling rats.
As indicated above, the determination of the age at "puberty" in the male rat is an endpoint that has already gained acceptance in the toxicology community. Preputial separation in the male is a required endpoint in the new EPA two-generation reproductive toxicity test guideline. In this regard, this assay would be easy to implement because these endpoints have been standardized and validated, and PPS data are currently being collected under GLP conditions in most toxicology laboratories. In addition, PPS data are reported in many recently published developmental and reproductive toxicity studies (i.e., see studies from R.E. Peterson's, J. Ashby's, R. Chapin's and L.E. Gray's laboratories on dioxins, PCBs, antiandrogens, and xenoestrogens).
Sex accessory gland weights in intact adult male rats also can be affected directly or indirectly by toxicant exposure. The HPG axis in an intact animal is able to compensate for the action of antiandrogens by increasing hormone production, which counteracts the effect of the antiandrogen on the tract (Raynaud et al., 1984; Edgren, 1984; Hershberger, 1953)." (EDSTAC, 1998, Vol. 1, Chapter 5, pp. 5-30 through 5-32).
Based on the EDSTAC’s recommendations, one of the assays that the EPA has also proposed to validate, as a potential alternative to other assays in the Tier 1 battery of an endocrine disruptor screening program, is a male pubertal assay (see FR Vol. 63, No. 248, pp. 71541-71568, December 28, 1998). This assay is the most comprehensive male-specific assay in the proposed Tier 1 battery, as it is capable of detecting substances that alter thyroid function, that are aromatase inhibitors, that are androgenic or antiandrogenic, or that interfere with the HPG and HPT axes. Results from other, shorter assays and/or with the use of ip injection as the route of administration have also been reported (O’Connor et al., 1996, 1999).
2.4.3 In Utero Through Lactation Assay
The proposed protocol has been identified by the EPA as the “In Utero/Lactational Exposure Testing Protocol” and has been assigned for development under the EDSP. The objective of this bioassay is to detect effects mediated by alterations in the estrogen, androgen, and thyroid-signaling pathways as a consequence of exposure during pre- and postnatal development in the laboratory rat. The treatment paradigm allows for an evaluation of effects on organogenesis, sexual differentiation, and puberty. In using a developing system as the basis for the assay, it is understood that modes of action, other than those of the estrogen, androgen, and thyroid-signaling pathways, may be involved in the induction of toxicity. As such, any observed effects will have to be interpreted in light of the overall weight of the evidence that they are endocrine dependent.
2.4.4 Adult Male Assay
One of the assays considered by EDSTAC as an alternate assay was a short-term screen in an intact adult male with assessment of levels of various circulating hormones at necropsy (see Table 1). The adult male assay was developed to detect effects on male reproductive organs that are sensitive to antiandrogens and agents that inhibit testosterone synthesis or inhibit 5-alpha-reductase (see EDSTAC FR Vol. I, p. 5-30, August, 1998). Results from this assay and/or with the use of ip injection as the route of administration, and other assays with a similar purpose, have been reported (O’Connor et al. 1996, 1999, 2002a,b).
Based on the EDSTAC’s recommendations, one of the assays that the EPA has proposed to validate as a potential alternative for other assays in the Tier 1 battery, in an endocrine disruptor screening program, is an adult male *in vivo* assay (see FR Vol. 63, No. 248, pp. 71541-71568, December 28, 1998). The utility of this battery for screening unknown compounds for endocrine activity will be evaluated. Endocrine endpoints for this study are listed in Table 1.
2.4.5 Two-generation Assay
For the Tier 2 battery, EDSTAC recommended a mammalian two-generation reproductive toxicity study. The two-generation reproductive toxicity study in rats is
designed to evaluate the health effects of chemicals on reproduction and viability through two generations, as performed in accordance with EPA Guideline OPPTS 870.3800 (1998) and OECD Guideline 416 (2001). In the two-generation reproductive toxicity assay, potential endocrine-disrupting effects can be detected through behavior, fertility, gestational duration, litter size, sex ratio, viability of the offspring, developmental landmarks, and reproductive development (histopathology of reproductive organs, onset of puberty [acquisition of preputial separation and vaginal opening], and estrous cyclicity) in the F1 offspring, which are exposed initially as gametes (from exposed F0 parents), through gestation (in exposed F0 females) and lactation (nursed by exposed F0 females), and then directly through adulthood and reproduction to produce the F2 generation. This study is intended to evaluate the effects of chemicals on sensitive life stages of reproduction and development in a transgenerational design. Endocrine endpoints for this study, both currently required and proposed, are listed together in Table 1 without separation.
2.5 Endocrine Endpoints Under Consideration for EDSP Assays and Intraspecies Variability
Study designs for use in risk assessment require endpoints that have been shown to be robust, reproducible, appropriately sensitive, biologically plausible, and relevant to the adverse outcomes of concern. Definitions of the attributes of such endpoints are as follows:
**Reproducible:** These endpoints must be reliable; i.e. the same findings occur under the same conditions within the initial reporting laboratory (intra-laboratory) and among other laboratories (inter-laboratory). If the results from endpoints are not reproducible, they cannot form the basis for future research and are most likely not useful for risk assessment.
**Robust:** These endpoints must be substantial enough to be present after comparable routes of exposure (e.g., dosed feed or dosed water), at the same doses, over time. Different effects, both quantitative and even qualitative, may be observed when different routes of administration are used. The use of oral gavage, a bolus dose once per day, may result in exacerbation of the endpoint if the parent material is the proximate toxicant and is metabolized to a nontoxic metabolite, or if bolus dosing overwhelms the metabolic capacity of the organism or preparation; it may instead result in diminution or loss of the endpoint if the parent compound must be metabolized to the active form. The use of nonoral routes, such as inhalation, topical application, injection, etc., will also likely result in different effects, since these routes bypass “first-pass” metabolism by the liver. Findings from routes unrelated to human or environmental exposures may not be as useful for risk assessment.
**Sensitive**: These endpoints should not be so sensitive that they are dependent on unique conditions (e.g., intrauterine position [IUP], etc.), especially those which are not relevant to the species at risk. Sensitivity is the ability of an endpoint to detect small differences reliably. These endpoints should not exhibit high variability (insensitive) or be greatly affected by confounders (too sensitive).
**Relevant**: These endpoints must be biologically plausible and related to adverse effects of interest/concern. If there are no adverse consequences at the dose/duration/route evaluated, these endpoints should be predictive of other adverse effects at higher doses, after longer exposure duration, and/or by different routes, etc.
**Consistent**: These endpoints should occur in the presence of effects in other related, relevant endpoints, if possible, at the same dose, timing, duration, routes of exposure, etc. (i.e., characteristic syndrome of effects).
Individual endocrine endpoints are discussed below.
### 2.5.1 Fertility and Gestational Indices
Fertility and gestational indices and litter size parameters are used as measures of reproductive performance. Examples of these indices and parameters follow, with a computational sketch after the lists.
**Gestational parameters:**
- No. of mating pairs
- No. (%) females sperm/plug positive
\[
\text{Mating index} = \left( \frac{\text{no. sperm / plug positive}}{\text{no. paired}} \right) \times 100
\]
- Precoital interval (time in days from pairing to evidence of copulation)
- No. (%) females pregnant
\[
\text{Pregnancy index} = \left( \frac{\text{No. pregnant}}{\text{No. sperm / plug positive}} \right) \times 100
\]
- Gestational length in days
- No. (%) females with live litters
\[
\text{Gestational index} = \left( \frac{\text{No. females with live litters}}{\text{No. females pregnant}} \right) \times 100
\]
**Litter size parameters:**
- No. ovarian corpora lutea/dam
- No. uterine implants/dam
\[ \text{No. (\%)} \text{ preimplantation loss} = \left( \frac{\text{No. corpora lutea} - \text{no. uterine implants}}{\text{No. corpora lutea}} \right) \times 100 \]
- No. resorbed implants/litter
- No. dead fetuses/litter
- No. nonlive (resorbed and dead) implants/litter
- No. (\%) litters with \( \geq 1 \) resorption, dead fetus, or nonlive implant
\[ \text{No. (\%)} \text{ postimplantation loss} = \left( \frac{\text{No. uterine implants} - \text{no live fetuses or pups}}{\text{No. uterine implants}} \right) \times 100 \]
- No. malformed implants/litter
- No. affected implants (nonlive plus malformed)
- No. live pups/fetuses/litter
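Because these indices are simple ratios with carefully chosen denominators, a small set of helper functions makes the definitions unambiguous. The sketch below (function and variable names are ours, not from any guideline) implements the formulas above.

```python
def mating_index(n_sperm_or_plug_positive, n_paired):
    """Mating index = (no. sperm/plug positive / no. paired) x 100."""
    return 100.0 * n_sperm_or_plug_positive / n_paired

def pregnancy_index(n_pregnant, n_sperm_or_plug_positive):
    """Pregnancy index = (no. pregnant / no. sperm/plug positive) x 100."""
    return 100.0 * n_pregnant / n_sperm_or_plug_positive

def gestational_index(n_with_live_litters, n_pregnant):
    """Gestational index = (no. with live litters / no. pregnant) x 100."""
    return 100.0 * n_with_live_litters / n_pregnant

def preimplantation_loss(n_corpora_lutea, n_implants):
    """Percent preimplantation loss per dam."""
    return 100.0 * (n_corpora_lutea - n_implants) / n_corpora_lutea

def postimplantation_loss(n_implants, n_live):
    """Percent postimplantation loss per dam or litter."""
    return 100.0 * (n_implants - n_live) / n_implants

# Example: a dam with 15 corpora lutea, 13 implants, 12 live fetuses
print(preimplantation_loss(15, 13))    # ~13.3%
print(postimplantation_loss(13, 12))   # ~7.7%
```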
### 2.5.2 Survival and Growth Indices
Survival and growth indices are used as measures of pup viability; a computational sketch follows the definitions below. Pups are evaluated on pnd 0 for the following:
\[ \text{Live birth index} = \left( \frac{\text{No. live pups at birth}}{\text{Total no. pups born}} \right) \times 100 \]
\[ \text{Stillbirth index} = \left( \frac{\text{No. dead pups at birth}}{\text{Total no. pups born}} \right) \times 100 \]
No. live pups/litter: total and by sex
No. dead pups/litter total and by sex
Mean pup body weight/litter: total and by sex
Anogenital distance (absolute in mm, relative to body weight, or adjusted with body weight as the covariate) by sex/litter
Pups are evaluated during lactation for the following:
Sex ratio (% male offspring/litter)
4-Day survival index = \(\left( \frac{\text{No. pups surviving 4 days (precull*)}}{\text{Total no. live pups at birth}} \right) \times 100\)
7-Day survival index = \(\left( \frac{\text{No. pups surviving 7 days}}{\text{Total no. live pups at 4 days (postcull*)}} \right) \times 100\)
14-Day survival index = \(\left( \frac{\text{No. pups surviving 14 days}}{\text{Total no. live pups at 7 days}} \right) \times 100\)
21-Day survival index = \(\left( \frac{\text{No. pups surviving 21 days}}{\text{Total no. live pups at 14 days}} \right) \times 100\)
Lactation index = \(\left( \frac{\text{No. pups surviving 21 days}}{\text{Total no. live pups at 4 days (postcull*)}} \right) \times 100\)
*If the litters are standardized to a fixed number (normally eight or ten) on pnd 4.
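One detail worth making explicit: after the pnd 4 cull, each survival index is conditional on the survivors at the start of its interval, so the 7-, 14-, and 21-day indices chain together, and their product reproduces the lactation index. A minimal sketch with hypothetical counts (names are ours):

```python
def interval_survival_indices(live_at_birth, precull_day4, postcull_day4,
                              day7, day14, day21):
    """Survival indices as defined above; indices after day 4 are
    conditional on survivors at the start of each interval."""
    return {
        "4-day (precull)": 100.0 * precull_day4 / live_at_birth,
        "7-day":  100.0 * day7 / postcull_day4,
        "14-day": 100.0 * day14 / day7,
        "21-day": 100.0 * day21 / day14,
        # Lactation index equals the product of the 7-, 14-, and
        # 21-day interval proportions: day21 / postcull_day4.
        "lactation": 100.0 * day21 / postcull_day4,
    }

# Hypothetical litter: 12 born alive, 11 alive at day 4, culled to 8,
# 8 alive at day 7, 7 at day 14, 7 at day 21.
print(interval_survival_indices(12, 11, 8, 8, 7, 7))
```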
Several studies have investigated the effects of strain differences on implantation and pregnancy outcomes. One study by Cummings et al. (2000) compared the effects of atrazine on implantation and early pregnancy in four strains of rats: Holtzman, SD, Long-Evans (LE), and F344. Since atrazine is known to affect the prolactin surge, which is important in the initiation of pregnancy (two surges, one diurnal and one nocturnal, occur daily in the first ten days of pregnancy), the rats were dosed on the first eight gestational days (gd), either diurnally or nocturnally, with 0, 50, 100, or 200 mg/kg/day of atrazine. In F344 rats, atrazine (at the top two doses) increased the percent preimplantation loss only after nocturnal dosing. Holtzman rats showed a trend toward increasing preimplantation loss, while SD and LE rats were not significantly affected at this endpoint. Percent postimplantation loss was significantly higher in Holtzman rats only. Clearly, the F344 and Holtzman strains were the most sensitive to the effects of atrazine on early pregnancy.
The effects of atrazine on full litter resorption and pregnancy outcome were investigated in three rat strains: F344, SD, and LE (Narotsky et al., 2001). The dams were dosed from gd 6-10 with 0, 50, 100, or 200 mg/kg/day of atrazine and then allowed to deliver their pups, thus allowing an assessment of pup viability. At the highest dose (200 mg/kg/day), atrazine caused similar rates of full litter resorption in all three strains. Prenatal loss was significantly increased in F344 dams, resulting in reduced litter sizes for dams with live litters. At the lower doses (50 and 100 mg/kg/day), atrazine caused pregnancy loss only in F344 rats, while SD and LE litters were unaffected. The period of sensitivity to atrazine-induced pregnancy loss coincided with the period during which the maintenance of pregnancy is LH/prolactin dependent. While gestational loss was induced in the sensitive F344 strain by 50 mg/kg atrazine administered on gd 6-10, even the highest dose of atrazine (200 mg/kg) was without effect when administered after the LH-dependent period, on gd 11-15. The authors concluded that F344 rats were more sensitive than SD and LE rats to the reproductive effects of atrazine, and that maternal toxicity, which occurred in all three strains at higher doses, was not predictive of full litter resorption.
In another study, the effects of a drinking water disinfection by-product, BDCM, on pregnancy loss were studied in two rat strains: F344 (dosed with 0 and 75 mg/kg/day, based on previous studies) and SD (dosed with 0, 75, and 100 mg/kg/day) (Bielmeier et al. 2001). Daily dosing with BDCM (75 mg/kg) from gd 6 to 10 produced a 62% incidence of full litter resorption (pregnancy loss) in F344 rats and no effect on pregnancy in SD rats. Since body weights were significantly reduced in both strains, toxicokinetic differences between the strains are unlikely to be responsible for the differential sensitivities to BDCM-induced pregnancy loss. Thus, the data of Bielmeier et al. (2001) suggest that F344 rats are genetically more sensitive than SD rats to BDCM-induced diminishment of luteal cell responsiveness to LH (or perhaps to BDCM-induced luteolysis).
Although the comparison spans data collected at different times, Liberati et al. (2002) compared reproductive and litter parameters in Wistar Hannover rats with historical CD rat data from the same laboratory and with CD rat data provided by CRL. Pregnant female Wistar Hannover rats were dosed daily with distilled water from gd 6 to 15. Wistar Hannover rats were found to have lower pregnancy rates and smaller litter sizes. In addition, they had higher percentages of preimplantation loss, postimplantation loss, and resorptions than CD rats (14.1, 7.4, and 7.2% versus 5.9, 5.6, and 5.1%, respectively). Thus, it appears likely that fundamental differences in reproductive parameters occur between outbred stocks of rats.
### 2.5.3 Reproductive Tract Development
Reproductive development involves both morphological and hormonal aspects, which operate together to result in correctly formed, functional, and responsive reproductive systems in both males and females. In mammals, gonadal origins begin early in embryonic development, prior to sexual differentiation (Schardein, 1999). Initial stages are the same for both male and female. Sexual differentiation and maturation are under hormonal control. Thus, both physical and hormonal indicators of
reproductive development can be monitored to detect the presence of endocrine-disrupting activity.
— **Wolffian duct (male development)**. Initially, the gonads appear as a pair of longitudinal undifferentiated genital ridges in the dorsal abdominal cavity of the embryo. The primordial germ cells migrate into the genital ridges from the extra-embryonic yolk sac at about gd 10-12 in the rat. Concomitantly, the genital ridges form primitive sex cords, which are indistinguishable between male and female. These small, primitive, indifferent gonads are held in place in the abdominal cavity by cranial suspensory ligaments (running cranially from the gonad to the diaphragm) and by gubernacular cords (running caudally from the gonad to the base of the abdominal cavity). In the male, under the initiation of the sry gene on the Y-chromosome, the primitive sex cords continue to proliferate and form the testis, including the interstitial Leydig cells and the intratubular Sertoli cells. The Leydig cells begin to produce T. The epididymides, vas deferens, ventral prostate, and seminal vesicles are formed from the embryonic structures known as the Wolffian ducts in the presence of fetal T. The secondary sex cords characteristic of female development (Müllerian ducts) regress in the presence of testosterone and of Müllerian Inhibitory Substance (MIS, also termed anti-Müllerian hormone) produced by fetal Sertoli cells, as male sexual differentiation proceeds. DHT (produced locally in target tissues from testosterone by the enzyme 5-alpha-reductase) directs the differentiation of the male genital tubercle into the external genitalia and of the urogenital sinus into the prostate and Cowper’s (bulbourethral) glands; DHT also causes regression of the nipple anlagen in the fetal male rodent. In the presence of T, the cranial suspensory ligaments regress, while the gubernacular cords thicken (also under the control of the INSL3 gene) to cause the testes to descend to the inguinal ring *in utero* (in rodents) and then into the scrotal sacs during late lactation (in rodents), i.e., testes descent.
— **Müllerian duct (female development)**. After formation of the primitive sex cords, genetically female embryos undergo differentiation as the primitive sex cords proliferate to form the ovaries, whereas the secondary sex cords (Müllerian ducts) form the uterus, oviducts, and upper end of the vagina. The lower end of the vagina and external genitalia are formed from the female genital tubercle. The Wolffian ducts regress in female fetuses in the absence of T. Also in the absence of T, the gubernacular cords regress, while the cranial suspensory ligaments are retained to keep the ovaries held abdominally just below the kidneys.
— **Pre- and postnatal development**. Visual examination of the reproductive tracts of both males and females at birth and during the postnatal period provides a measure of both pre- and postnatal development as described above and below.
- **Puberty.** Acquisition of puberty, identified by the age (in days) at acquisition of vaginal patency (VP) in female offspring and the age (in days) at acquisition of balanopreputial separation (preputial separation, PPS) in males, can be used to compare the relative effects of a compound on male and female reproductive development. In the authors' laboratory, the age at acquisition of these indicators of puberty is consistent, with very tight variances within and between studies. Acquisition of puberty is a critical endpoint in endocrine disruptor assays. Section 2.5.7 contains a more detailed discussion of pubertal endpoints.
### 2.5.4 Anogenital Distance
The sex differences in anogenital distance (AGD) at birth and beyond (male AGD is approximately twice as long as female AGD in rats, mice, and newborn humans) are under androgen control, specifically dihydrotestosterone (DHT) (Gray et al. 1998; Gray and Ostby, 1998), and do not appear to be affected by estrogens (Biegel et al. 1998a), but are affected by pup body weights (Ashby et al. 1997). Data (Gallavan et al. 1998) from 1501 control CD® (SD) rat pups indicated that a 1 g increase in body weight corresponds to a 0.19 mm increase in AGD. Very small (but statistically significant) increases in female AGD (with no effects on males) on pnd 0 have been reported for dietary $p$-tert-octylphenol (OP; Tyl et al. 1999) and BPA (Tyl et al. 2002) in some dose groups, without a dietary dose-response relationship and with no developmental or reproductive sequelae. If AGD values are shorter in either sex (especially if in both sexes) in a treatment group with reduced pup body weights, it is highly likely that the AGD effect is due to the body weight effect; this can be teased apart by analysis of covariance (ANCOVA) with body weight as the covariate. The precision with which laboratories measure AGD on newborns varies, ranging from use of a dissecting microscope with an ocular micrometer and eyepiece grid or a vernier caliper (with the pup flat on the microscope platform) to hand-held pups and a ruler. Obviously, the accuracy and variance of the values will differ depending on the method. Precise methods yield very tight values, which may produce statistically significant differences in group means that are very small and whose biological significance and relevance, if any, are unknown. AGD has also been shown to be significantly reduced in CD® (SD) newborn rats whose dams were on 50% feed restriction from gd 7 (Holsapple et al. 1998; Carney et al. 1998).
AGD is DHT-mediated, and the endocrine-mediated effects persist into adulthood. However, since AGD is confounded by body weight, the current practice is to present the data as mm, as mm per cube root of the body weight, and/or to analyze the data by ANCOVA, with body weight at measurement (birth, weaning, etc.) as the covariate. These procedures help to account for differences in body weight (especially in groups where there is systemic toxicity, expressed as reduced parental and offspring body weights).
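As an illustration of the two adjustments just described, the following is a minimal sketch assuming a long-format table with one row per pup; the column names (`agd_mm`, `bw_g`, `group`) and the example values are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical per-pup data: AGD (mm), body weight at measurement (g), dose group.
pups = pd.DataFrame({
    "agd_mm": [3.1, 3.4, 3.2, 2.6, 2.8, 2.7, 2.5, 2.9, 3.0],
    "bw_g":   [6.5, 7.1, 6.8, 5.9, 6.2, 6.0, 5.8, 6.4, 6.6],
    "group":  ["control"] * 3 + ["low"] * 3 + ["high"] * 3,
})

# ANCOVA: test for a group effect on AGD with body weight as the covariate.
fit = smf.ols("agd_mm ~ bw_g + C(group)", data=pups).fit()
print(anova_lm(fit, typ=2))

# The cube-root normalization mentioned above (mm per g^(1/3)).
pups["agd_norm"] = pups["agd_mm"] / pups["bw_g"] ** (1.0 / 3.0)
print(pups.groupby("group")["agd_norm"].mean())
```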
### 2.5.5 Urethral Vaginal Distance (UVD)
The measurement of UVD in female rodents has been proposed as an endpoint possibly sensitive to E2 levels, analogous to AGD under DHT control. It is currently under evaluation in the authors' laboratory.
In a study that investigated the effects of gestational exposure to TCDD on reproductive development of female rat offspring in two strains, LE and Holtzman (Gray and Ostby, 1995), administration of TCDD on gd 15 (1 µg/kg) produced malformations of the external female genitalia and vaginal orifice, a delay in puberty, and significantly increased UVD in both strains.
### 2.5.6 Retention of Nipples in Preweanling Males
Nipple retention is usually evaluated in male rats on pnd 11-13 (and in male mice on pnd 9-11) and is DHT-mediated. Effects may persist into adulthood. In the authors' laboratory, retained nipples have never been observed in control preweanling CD® (SD) males, although areolae have been observed in 0-7.5% of control males on pnd 11-13 (based on examination of over 5000 males *in toto*). This is a sensitive indicator of altered testosterone and/or DHT levels (effects on synthesis, degradation, receptor binding, transcriptional activation, etc.). Male pups with retained nipples (especially as weanlings and/or adults) are more likely to exhibit reproductive system malformations, but the correlation is not perfect (i.e., some males with nipples exhibit no malformations, and some males without nipples do exhibit malformations). Retention of nipples is also a reasonable, but not infallible, predictor of male reproductive malformations caused by perinatal exposures at similar or higher doses (McIntyre et al. 2001; McIntyre et al. 2002).
One study by You et al. (1998) was designed to compare male sexual development in two strains of rats after gestational exposure to an antiandrogenic compound. In LE and SD rats exposed *in utero* on gd 14-18 to $p,p'$-DDE (a metabolite of DDT) at 0, 10, or 100 mg/kg/day, the high dose produced a significant 14% decrease in AGD in male LE rats, and a 7.8% decrease (not statistically significant) in SD rats on pnd 2, with no effect on AGD in females of either strain. On pnd 13, males from both the low- and high-dose groups in the SD strain, and males from the high-dose group only in the LE strain, had retained nipples. Preputial separation occurred in control SD and LE rats at about the same time, but vaginal opening occurred earlier in control LE versus SD rats. Regardless, neither preputial separation nor vaginal opening in either strain was affected by developmental exposure to $p,p'$-DDE, and growth of male reproductive organs was also not affected. Flutamide (an androgen receptor antagonist), which was used as a positive control in this study, produced decreases in AGD, nipple retention, and changes in male reproductive organ growth (decreases in testis, ventral prostate, and epididymis weights in SD rats, and a decrease in seminal vesicle weight in LE rats). The two strains showed differential sensitivity to $p,p'$-DDE: in LE but not SD rats, AGD was decreased by 100 mg/kg/day, whereas in SD rats, nipple retention was produced at a lower dose (10 mg/kg/day) than in LE rats. In response to the 100 mg/kg dose, LE rats showed 6- to 8-fold higher serum concentrations of $p,p'$-DDE than SD rats. One explanation for the differential effects may be different tissue levels of $p,p'$-DDE arising from potentially different pharmacokinetic characteristics in the two strains.
In a study comparing the effects of developmental exposure to VIN (an antiandrogen with metabolites that bind to the androgen receptor) in Wistar and LE rats, both similarities and differences were reported (Hellwig et al. 2000). Exposure to 200 mg/kg/day from gd 14 to pnd 3 produced similar effects on male offspring of both strains, including reduced AGD, nipple and areolae retention lasting into adulthood, hypospadias, penile hypoplasia or development of a vaginal pouch, transient paraphimosis (penile edema), and reduced function and chronic inflammation of the epididymides, prostate, seminal vesicles, and coagulating glands. As adults, LE rats had testis atrophy and chronic inflammation of the urinary bladder, which were not observed in Wistar offspring. Exposure to 12 mg/kg/day produced only transient nipple/areolae retention in male offspring of both strains, but produced nipple/areolae retention persisting into adulthood in a few LE but no Wistar males. In addition, adult LE but not Wistar rats exposed to 12 mg/kg/day had slightly reduced prostate, seminal vesicle, and coagulating gland weights. Overall, there were more similarities than differences in the effects of VIN between the strains, and the NOAEL was 12 and 6 mg/kg/day in Wistar and LE rats, respectively.
### 2.5.7 Puberty
Acquisition of puberty can be determined in both females and males by a number of physical changes. For females, vaginal patency and age of first estrus are most often used, whereas in males, preputial separation is most often monitored. In both sexes, acquisition of puberty is affected by body weight, so the current approach is to covary the age at acquisition by the body weight at acquisition, at an arbitrary age during the time of acquisition, or by some measure of weight gain during the postlactational or prepubertal period (the selection of the end date for weight gain is problematic). Small changes in acquisition ($\pm$ 3 days) may indicate body weight-related delays in development; large changes (accelerations or delays of $\geq$ 4 days) most likely indicate effects from endocrine disruption, especially in the absence of body weight effects. Minor delays/accelerations in puberty in the BPA rat study (Tyl et al. 2002) were presented and analyzed as absolute values, and as values covaried by body weight at acquisition and at an arbitrary age.
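As a worked illustration of the magnitude heuristic above (shifts of roughly $\pm$ 3 days versus $\geq$ 4 days), the sketch below encodes it as a simple decision rule; the function name, the hard cut-off, and the return strings are illustrative only and not part of any testing guideline.

```python
def classify_puberty_shift(shift_days: float, body_weight_affected: bool) -> str:
    """Interpret a treated-vs-control shift in mean age at VP or PPS (days)."""
    if abs(shift_days) < 4:
        # Small shifts (around +/- 3 days) may simply track body weight.
        return "small shift: plausibly a body weight-related developmental delay"
    if body_weight_affected:
        # Large shift, but confounded: covary by body weight (ANCOVA) first.
        return "large shift with body weight effect: analyze with ANCOVA"
    return "large shift without body weight effect: likely endocrine-mediated"

print(classify_puberty_shift(2.0, False))
print(classify_puberty_shift(5.5, False))
```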
Statistically significant differences in age at acquisition of puberty may indicate endocrine-mediated effects, especially if the effects are different for the sexes (e.g., VP is delayed and PPS is accelerated or unchanged, VP is accelerated and PPS is delayed or unchanged, etc.) and if the effects are profound (acceleration or delay of many days versus only a few days). However, acquisition of developmental landmarks is dependent on both age and weight (i.e., heavier animals acquire the landmark earlier, while lighter animals acquire the landmark later), but lighter animals do acquire the landmark (unless there is another cause for the delay) and in many cases acquire the
landmark at a lighter weight than the heavier animals. This observation is consistent with the recognition by the EPA (1996, p. 56295) that “body weight at puberty may provide a means to separate specific delays in puberty from those that are related to general delays in development.” The significance (i.e., the consequence, if any) and “the biologic relevance of a change in these measures of a day or two is unknown” (EPA, 1996, p. 56295).
The recognition that body weight is important in analyzing and understanding acquisition of puberty is strengthened by the work of Kennedy and Mitra (1963) and Carney et al. (1998), who showed that body weight and food intake are factors in the initiation of puberty in the rat, and by the work of Holsapple et al. (1998), who put groups of 26 timed-mated SD rats on standard diets at 100% (control), 70%, or 50% of historical control feed intake levels from gd 7 through weaning on pnd 21. Selected weanlings were continued on feed restriction until ten weeks of age (with 100% feed from ten to 20 weeks as recovery) or until 20 weeks of age, with necropsy of all offspring at 20 weeks of age. Feed restriction resulted in reduced weight gains for dams and pups related to the degree of restriction. In both the 50% and 70% feed restriction groups, gestation length was significantly increased, and age at VP and PPS was also delayed (by one day at 70% restriction and by six days at 50% restriction for both parameters). AGD at birth was significantly reduced in both sexes in the 50% restriction group, but AGD:body weight ratios were essentially identical across groups, indicating that smaller (low body weight) pups had shorter AGDs and that the effects were proportional. The authors concluded that “these results show that certain reproductive and developmental endpoints are altered by feed restriction in the range relevant to common testing scenarios” (Holsapple et al. 1998).
#### 2.5.7.1 Vaginal Patency in Females
In females, acquisition of puberty is indicated by vaginal opening or patency (VP), dependent on E2 and resulting from E2-dependent cornification of the vaginal seam. In control CD® (SD) rats in the authors’ laboratory, the grand mean age at VP is 31.1 days (based on 16 studies from 1996 to 2000). VP may be observed first as the appearance of a small “pin hole(s)” or perforations but is recorded as acquired when vaginal opening is complete. Vaginal threads across the vaginal opening may be temporary or persistent (Wolf et al. 1999; Flaws et al. 1997).
Vaginal opening may be advanced by estrogenic compounds and estrogen receptor modulators, and either advanced or delayed by various environmental chemicals (see review by Goldman et al. 2000). In a study of the effects of BPA on sexual development in two strains of rats, SD and Alderley Park (AP), Tinwell et al. (2002) found strain-related differences in VP. Vaginal opening was significantly delayed in AP rats but not SD rats exposed to BPA. There was no effect on age of first estrus (an explanation of this endpoint follows).
#### 2.5.7.2 Age of First Estrus in Females
On or within a few days of VP, the female exhibits her first estrus, so age at first estrus (absolute age and/or interval from VP to first estrus) is also useful. Late follicular growth of the first ovulatory cells is stimulated about the time of vaginal opening, although there is some variation in the initial release of oocytes. Following vaginal opening, daily vaginal smears are monitored to determine the age of first estrus and/or first vaginal cycle. Irregular estrous cycles are often seen in the immediate postpubertal period (Goldman et al. 2000).
#### 2.5.7.3 Preputial Separation in Males
Acquisition of puberty in males is indicated by preputial separation (PPS; balanopreputial separation), or separation of the foreskin of the penis from the glans. PPS is dependent on androgens. PPS is a process of epithelial cleavage through cornification, forming the squamous lining of the prepuce of the penis (Goldman et al. 2000). As a sign of puberty and an essential prerequisite for further development of the ejaculatory process, PPS has been used as a reliable, noninvasive endpoint by which to monitor rodent pubertal development and perturbations of this process. This landmark is generally acquired during the peripubertal period (pnd 36-55 or 60; Stoker et al. 2000). In control CD® (SD) rats in the authors' laboratory, the grand mean age at PPS is 41.9 days (based on 16 studies from 1996 to 2000).
Estrogenic and anti-androgenic compounds have been shown to delay PPS, while androgen receptor agonists accelerate PPS (see review by Stoker et al. 2000). Tinwell et al. (2002) found that BPA had no effect on PPS in two strains of rats (AP and SD) at a dose that delayed vaginal opening in female AP rats only. In AP rats, vaginal opening was at $33.8 \pm 0.8$ days in control animals, compared to $35.4 \pm 0.6$ days in rats exposed to 50 mg/kg/day BPA. Sensitivity to BPA was found to be not only strain related but sex related.
### 2.5.8 Estrous Cyclicity and Ovulation Rate in Postpubertal Females
After the initial release of ova, female rats begin to exhibit four- to five-day estrous cycles, with accompanying changes in vaginal cytology and circulating hormones. The acquisition of estrous cyclicity results from shifts in the hypothalamic-pituitary-ovarian endocrine axis and is the culmination of the maturation of reproductive processes that began prenatally. As indicated above, irregular estrous cycles are more common in the first weeks after acquisition of puberty.
Ovulation rate is affected by the dose and ratio of FSH to LH, stage of the cycle, and age of the female. Large genetic differences in ovulation rate exist between strains of mice and in response to exogenous gonadotropins (Spearow et al. 1999; Spearow and Barkley, 1999).
Ovulation rate (the number of eggs ovulated per female) is not included in the endpoints discussed in this White Paper because the protocols of the proposed EDSP assays preclude measurement of ovulation rate in order to measure other relevant endpoints. Ovulation rate is based on the number of corpora lutea counted on the ovaries. These are postovulation ruptured follicles (one per ovulated ovum) producing large amounts of P4 and lesser amounts of E2, to prepare the uterus for implantation of the conceptuses. However, maternal ovarian corpora lutea involute beginning at delivery of offspring (with involution completed on or about pnd 4) to become corpora albicans, indistinguishable from corpora albicans from previous ovulation cycles. All of the studies in Tiers I and II that involve production of offspring require that the dams remain with their pups through lactation to weaning. The parental females are necropsied at the weaning of their litters on pnd 21, when the corpora lutea are no longer present on the ovaries. The inability to collect ovarian corpora lutea counts in these studies also precludes calculation of percent preimplantation loss:
\[
\left( \frac{\text{No. corpora lutea} - \text{no. uterine implants}}{\text{No. corpora lutea}} \right) \times 100
\]
What can be ascertained, and is therefore included in the list of endpoints to be discussed, is percent postimplantation loss, which is based on the number of uterine implantation sites (i.e., the number of conceptuses implanted; these “nidation scars” persist at least 40 days after delivery) and the number of total pups delivered. Both of these parameters are present and recorded in the Tier I and II studies involving generation of offspring. The calculation for percent postimplantation loss is:
\[
\left( \frac{\text{No. uterine implants} - \text{No. live fetuses or pups}}{\text{No. uterine implants}} \right) \times 100
\]
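Both loss percentages translate directly into code. A minimal sketch of the two formulas above (argument names are illustrative):

```python
def pct_preimplantation_loss(corpora_lutea: int, uterine_implants: int) -> float:
    """(No. corpora lutea - no. uterine implants) / no. corpora lutea x 100."""
    return (corpora_lutea - uterine_implants) / corpora_lutea * 100.0

def pct_postimplantation_loss(uterine_implants: int, live_offspring: int) -> float:
    """(No. uterine implants - no. live fetuses or pups) / no. uterine implants x 100."""
    return (uterine_implants - live_offspring) / uterine_implants * 100.0

# Example: 14 corpora lutea, 13 implantation sites, 12 live pups.
print(pct_preimplantation_loss(14, 13))   # ~7.1%
print(pct_postimplantation_loss(13, 12))  # ~7.7%
```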
In studies comparing estrous cycles across rat strains, there were strain-related differences in estrous cyclicity in response to food deprivation (Tropp and Markus, 2001). Prior to food restriction, Brown Norway rats had irregular estrous cycle patterns, while SD, LE, and F344 rats had regular estrous cycle patterns. By day 5 of food deprivation (when the animals' weights had been reduced to 85% of *ad libitum* body weight), 75% of F344 rats and 100% of Brown Norway rats stopped cycling, while SD and LE rats were unaffected. It is possible that SD and LE rats, which generally have larger body masses, have larger energy stores and are therefore less sensitive to changes in body weight. Another possibility is that sensitivity to food deprivation is higher in inbred versus outbred strains. However, given that food deprivation schedules were adjusted to account for differences in initial body weight, it seems unlikely that simple strain differences in body weight account for the results. These data suggest that outbred strains selected for larger litter size are relatively resistant to the disruption of estrous cyclicity by dietary restriction.
In a study comparing estrous cycles in Lewis and F344 rats, by obtaining vaginal smears and quantitating E2, P, FSH, and LH levels at different phases of the cycle, Smith et al. (1994) reported that metestrus was significantly longer, while diestrus and estrus were significantly shorter in Lewis rats compared to F344 rats. Proestrus was similar in both strains. During estrus, E2 levels were significantly higher in Lewis compared to F344 rats, and P levels were significantly higher in all stages of the estrous cycle in Lewis compared to F344 rats. LH and FSH levels did not differ between strains at any stage of the estrous cycle. The authors suggest that elevated E2 and P levels may affect corticosterone levels which could affect hypothalamic-pituitary-adrenal axis responsiveness.
In response to endocrine-disrupting chemicals, strain differences in the ovarian cycle have been reported. Cooper et al. (2000) reported that LE rats were more sensitive than SD rats to atrazine-induced disruption of the ovarian cycle. In addition, Ando-Lu et al. (1998) found that in aging Donryu rats, estrous cycle abnormalities (e.g. persistent estrus) were more common than in F344 rats. Finally, in a study by Eldridge et al. (1994), atrazine administration to SD and F344 rats for up to 12 months produced changes in estrous cyclicity in SD rats (increased the number of days of vaginal estrus), increased E2, decreased P, and increased incidence of mammary tumors in SD rats only, with no significant treatment-related effects in F344 rats.
BPA, an environmental estrogen, has been found to stimulate Prl secretion in F344 but not SD rats (Steinmetz et al. 1997). More recently, Long et al. (2000) found that BPA increased DNA synthesis and cell proliferation in the vaginal epithelium of F344 rats but not SD rats. Thus, the rat vagina, an estrogen target tissue, is more sensitive to the effect of BPA in a strain-specific manner. Long et al. (2000) also showed that F344 and SD rats showed no difference in clearance of 3H-BPA from the blood, concentration or affinity of estradiol receptor, or induction of early gene c-fos in response to BPA. Since BPA increased vaginal cell proliferation and DNA synthesis in F344 but not in SD, these data show that strains differ in the intermediate effects of these xenoestrogens downstream of the ER.
Differences in estrous cyclicity have been reported in outbred strains of mice selected for large litter size, high embryo survival, or small litter size (Barkley and Bradford, 1981; DeLeon and Barkley, 1987). Selection for large litter size (Line S1) and high embryo survival (Line E) increased the regularity of estrous cycles, whereas selection for small litter size (CN) dramatically decreased the regularity of estrous cycles. Therefore, the BN rat, the F344 rat, or the CN mouse may provide better animal models than strains that have been bred for large litter size.
### 2.5.9 Andrology
Depending on the age of the male rodent when sampled, the cauda epididymis ($\geq$ 80 days old) or the entire epididymis (65-80 days old) is evaluated for total number of sperm per cauda or per gram cauda, and for motility and progressive motility (as percent of total sperm examined; this evaluation must be done within two minutes of the animal's demise, with microscope slide and buffer kept at 37°C). Percent malformed sperm should be determined, usually manually, by microscopic examination of 200 fixed and stained (Eosin Y) sperm per male. In addition, one testis at necropsy should be frozen and subsequently homogenized in buffer and evaluated for homogenization-resistant spermatid head counts (SHC) to calculate daily sperm production (DSP) and efficiency of DSP.
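As a sketch of the DSP arithmetic just described, assuming the time divisor of 6.1 days commonly used for rats (the approximate number of days represented by homogenization-resistant spermatids); the divisor should be verified for the species and counting method in use, and the function names are illustrative.

```python
RAT_TIME_DIVISOR_DAYS = 6.1  # assumed rat value; species-specific

def daily_sperm_production(spermatid_head_count: float) -> float:
    """DSP: homogenization-resistant spermatid heads per testis / time divisor."""
    return spermatid_head_count / RAT_TIME_DIVISOR_DAYS

def dsp_efficiency(dsp: float, testis_parenchyma_g: float) -> float:
    """Efficiency of DSP: daily sperm production per gram of testicular parenchyma."""
    return dsp / testis_parenchyma_g

shc = 2.0e8   # hypothetical spermatid head count for one testis
dsp = daily_sperm_production(shc)
print(f"DSP: {dsp:.3e}/day, efficiency: {dsp_efficiency(dsp, 1.5):.3e}/day/g")
```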
For andrological studies, epididymal sperm counts and testicular homogenization-resistant spermatid head counts have been found to be good markers for altered spermatogenesis. Wilkinson et al. (2000) compared the outbred Wistar and SD strains with the inbred strain Dark Agouti (DA). Although only a small number of SD rats were used in this study, the DA rat had lower absolute and relative (% body weight) testes weights and more variability in sperm counts, but there were no significant differences in testicular histology, sperm count per gram of testis, or epididymal sperm count. There were also no differences in weights (relative to body weight) of the epididymis, seminal vesicles, or ventral prostate, or in whole-blood testosterone values. DA rats are deficient in CYP2D1 activity, and several P450 cytochromes may also be absent.
In the Tinwell et al. (2002) study that found that the weak xenoestrogen BPA had no effect on PPS in two strains of rats (AP and SD), they reported that 50 mg/kg BPA decreased total sperm count and daily sperm count in AP rats but not in SD rats. Thus, there were strain-related differences in the effects of BPA in rats.
Apostoli et al. (1998), in a review article on the toxicology of lead, stated that SD rats appeared to be relatively resistant to the toxicological effects of lead. However, in general, lead impaired spermatogenesis and decreased androgens in other rat strains (e.g., Wistar and Charles Foster rats). Blood lead concentrations > 40 µg/dl were associated with decreases in sperm counts, semen volume, sperm motility, and normal sperm morphology, as well as with endocrine effects.
In mice, strain differences in andrological parameters have been observed. For example, CD-1 mice have been shown to be much greater than 16-fold more resistant than C57Bl/6J (B6) or C17/Jls strain mice to the inhibition of spermatogenesis by pubertal exposure to estradiol (Spearow et al. 1999). Additional studies in Spearow's laboratory have confirmed these observations and have also shown CD-1 mice to be much more resistant to inhibition of testes weight, elongated spermatids per seminiferous tubule cross-section, and epididymal sperm counts than outbred wild-derived Mus spretus/RP/Jls mice.
### 2.5.10 Organ Weights and Histopathology
- **Reproductive (including accessory sex organ) weights.** Reproductive organ weights should be obtained at adulthood and should include: (a) ovaries and uterus for females and (b) testes, epididymides (total and separated into caput, corpus, and cauda), prostate (whole, and dorsolateral and ventral lobes separately; dissection may be postfixation), seminal vesicles, coagulating glands, preputial glands, bulbourethral (Cowper's) glands, and levator ani/bulbocavernosus (LABC) complex for males.
- **Thyroid.** Thyroid hormones ($T_3$ and $T_4$) are necessary for normal growth, development, differentiation, and regulation of most organ systems (Goldman et al. 2000; Stoker et al. 2000). Disruption of the feedback control of thyroid function may result in either a hypertrophic (goiter) or hypotrophic thyroid, depending on the mechanism of disruption. These changes would be evident in the weight of the thyroid gland. Since the thyroid gland surrounds the trachea, the thyroid plus embedded trachea is fixed and the trachea dissected away post fixation. The thyroid can then be weighed with little or no damage to the organ for subsequent histopathology.
- **Systemic (liver, kidneys, brain, etc.).** Systemic organ weights should be obtained at adulthood in both sexes and should include liver, kidneys, adrenal glands, pituitary, brain (regions), etc. Comparison of the effect of the test compound on these organ weights (absolute and relative) to effects on reproductive organ weights will provide a more complete characterization of toxicity and suggest whether observed toxicity is more or less targeted to the endocrine system.
- **Absolute and relative to body weight (and brain weight).** Organ weights (both reproductive and systemic) should be presented as absolute and as relative to terminal body weight and brain weight; a minimal computational sketch follows this list. Relative organ weights correct for effects on body weights (i.e., systemic toxicity). Brain weight is generally considered more stable than body weight after exposure to exogenous compounds and provides a basis for determining whether changes in organ weights are primary or secondary to altered body weights.
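A minimal computational sketch of this presentation, expressing each organ weight as a percentage of terminal body weight and of brain weight; the column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical necropsy records (one row per animal; weights in grams).
necropsy = pd.DataFrame({
    "animal":   ["A1", "A2"],
    "body_g":   [412.0, 367.0],
    "brain_g":  [2.10, 2.05],
    "liver_g":  [14.8, 12.9],
    "testes_g": [3.52, 3.41],
})

# Absolute weights are kept as-is; add relative-to-body and relative-to-brain columns.
for organ in ["liver_g", "testes_g"]:
    necropsy[organ + "_pct_body"]  = necropsy[organ] / necropsy["body_g"] * 100
    necropsy[organ + "_pct_brain"] = necropsy[organ] / necropsy["brain_g"] * 100

print(necropsy.round(2))
```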
In a study by Putz et al. (2001), the estrogenic effects of neonatal exposure to β-estradiol-3-benzoate (EB) were studied in two rat strains, SD and F344. Neonatal rats were injected with EB (over a 7-log range of doses, from 0.015 µg/kg/day to 15 mg/kg/day, in SD rats and a 5-log range of doses, from 0.15 µg/kg/day to 1.5 mg/kg/day, in F344 rats) on pnd 1, 3, and 5. While F344 rats were not examined on pnd 35, SD male rats on pnd 35 exhibited significant increases in absolute and relative testis and epididymis weights at the low dose (0.015 µg/kg/day) and significant reductions at higher doses (1.5 and 15 mg/kg/day). Since hepatic testosterone hydroxylase activity was increased in the low-dose animals, puberty may have been advanced, thereby resulting in increased organ weights. On pnd 90, SD males exposed neonatally to the highest dose used in both strains (1.5 mg/kg/day) showed significant reductions in absolute and relative seminal vesicle and coagulating gland weights, but not in testis or epididymis weights. SD rats on pnd 90 also showed an increase in testis and epididymis weights at the lowest dose (one order of magnitude lower than the dose at which the increase was observed on pnd 35). This dose was not tested in F344 rats. In F344 males, the reduction in male reproductive organ weights (absolute and relative to body weight) at pnd 90 was greater at the highest dose (1.5 mg/kg/day) than in SD rats: whereas relative testicular weights were 44% of controls in F344 rats, they were 67% of controls in SD rats. Similarly, 1.5 mg/kg EB reduced epididymal weights to 36% of controls in F344 versus 87% of controls in SD rats. Pnd 90 testis and epididymal weights were reduced much more by 1.5 mg/kg EB in F344 rats than by a 10-fold higher dose (15 mg/kg) in SD rats. Thus, SD rats were greater than 10-fold more resistant than F344 rats to the inhibition of testes weight by EB.
Strain-related differences in absolute pituitary weights have been reported in ovariectomized rats exposed to E2 (silastic implants) for 10 or 20 days (Schechter et al. 1987). Pituitary weights were dramatically increased in F344 rats, with comparatively minimal effects in SD rats, and Prl levels were dramatically increased in F344 rats ($\geq$ 1000-fold) while only moderately increased in SD rats (100-fold). In addition, E2 implants in F344 rats produced a dramatic hyperplasia of anterior pituitary lactotropes, activation of phagocytic folliculo-stellate (FS) cells, an increase in cells positive for basic fibroblast growth factor, and reorganization of the blood supply from vessels in the adjacent meninges. Estradiol-treated SD rats did not show comparable responses (Schechter and Weiner 1991). Pituitary weights also differed across strains in ovariectomized rats exposed to E2 (10 mg s.c. pellet) for four weeks (Yin et al. 2001). Control pituitary gland weights were lowest in Brown-Norway rats (4.4 ± 0.2 mg) and more than two-fold higher in Wistar rats (13.0 ± 2.1 mg); F344 control pituitary weights were 7.5 ± 0.1 mg, and Donryu 10.7 ± 1.0 mg. After exposure for four weeks to E2, there was a significant >3-fold increase in pituitary weights in F344 rats, a significant >0.5-fold increase in pituitary weights in Brown-Norway rats, and no difference in pituitary weights of Wistar and Donryu rats. The F344 strain was the most susceptible to estrogen induction of pituitary tumorigenesis, followed by Wistar and Brown-Norway. The work of Schechter et al. and Yin et al. demonstrates that the pituitary gland of F344 rats is more sensitive to the effects of E2.
Differential effects of DES in particular rat strains have been demonstrated in studies by Gorski and colleagues, who showed strain differences in estrogen-dependent pituitary mass (Edpm) and pituitary tumor growth (Wendell et al. 1996; Wendell and Gorski 1997; Chun et al. 1998; Wendell et al. 2000). While F344 rats are highly susceptible to DES-induced pituitary growth/tumors, Brown Norway (BN) and SD rats are highly resistant. Following DES treatment, F344 rats and the F344 x BN F2 rats with the largest pituitary tumors showed a reduction in retinoblastoma susceptibility gene product (pRb) (Chun et al. 1998). QTL linkage analysis in an F344 x BN F2 cross mapped several additive and epistatic loci controlling Edpm, including susceptibility alleles from both F344 and BN (Wendell and Gorski 1997). Through QTL mapping in a (F344 x BN) x F344 backcross, Wendell et al. (2000) showed that several QTL, including Edpm2-1, Edpm3, Edpm5, and Edpm9-2, had significant effects on pituitary mass. While Edpm2-1 and Edpm9-2 primarily affected DNA content, Edpm5 primarily affected the hemoglobin/DNA ratio, and Edpm3 affected all of these traits equally (Daun et al. 2000). These data defining genes that control susceptibility to estrogenic agent-induced tumors among genetically defined strains provide a powerful tool for understanding genetic differences in susceptibility to endocrine disruption by estrogenic agents; they also have value as historical controls. Since these studies used genetically defined isogenic parental strains, they can easily be repeated and can enhance the identification of genes controlling susceptibility to environmentally induced disease in humans as well.
In a review by Kacew et al. (1995), strain-related differences in mammary tumorigenesis were summarized. SD rats are more susceptible to mammary tumorigenesis after exposures to 2-acetylaminofluorene, 1,4-bis(4-fluorophenyl)-2-propynyl-N-cyclooctyl carbamate, and atrazine than were F344 rats. In addition, Wistar and SD rats were more susceptible than Copenhagen or LE rats to the effects of DMBA (7,12-dimethylbenz (a)anthracene), while Wistar were more sensitive than LE to the effects of 2-acetylaminofluorene. Therefore, there is an inherent difference in mammary tissue sensitivity among rat strains. In males, the sensitivity of the tumorigenic response in the prostate of F344, ACI, Lewis, CD and Wistar rats to 3,2'-dimethyl-4-aminobiphenyl (DMAB) was ordered as follows: F344>ACI>Lewis>CD; the Wistar rats were insensitive (Shirai et al. 1990).
Strain differences in susceptibility to the effects of chemicals on testis weight in mice have been reported. Oishi (1993) found that administration of di-2-ethylhexyl phthalate (DEHP) (0, 0.1, 0.2, 0.4, and 0.8% in feed, for two weeks) to two strains of mice (Jcl:ICR and CD-1) caused significant increases in absolute and relative liver weights in both strains at the highest doses, but reduced testicular weights in CD-1 mice only (at a dose as low as 0.2%). DEHP was associated with testicular atrophy in CD-1 mice only, at the doses administered.
In another report demonstrating strain differences in mice, Nagao et al. (2002) exposed male C57BL/6N and ICR mice to BPA at 0, 2, 20 or 200 μg/kg/day for various periods encompassing adulthood, the juvenile period (just after weaning), and the embryo/fetal period. Though BPA did not affect male reproductive organ weights during any dose/exposure period, E2 (10 μg/kg from pnd 27 to 48, as a positive control) produced significant decreases in absolute and relative testes, epididymides, and seminal vesicle weights (as low as 55% of control values) compared to controls in C57BL/6N mice, while ICR mice were unaffected. Histopathology showed that 10 μg/kg E2 was without effect on ICR males, while B6 males showed slight to severe effects on elongated spermatids, decreased epididymal sperm, and seminal vesicle atrophy. Thus, C57BL/6N mice were more sensitive than ICR mice to the effects of E2. These data are consistent with data presented by Spearow et al. (1999).
In male mice, strain-related differences in susceptibility to endocrine disruption by endocrine-active chemicals have been reported (Spearow et al. 1999; Spearow et al. 2001). Mouse strains included B6 (an inbred strain), CD-1 (outbred, with larger litter size), C17/Jls (bred randomly, then inbred), and S15 (bred for large litters, then inbred). In control mice, testicular weight (absolute and relative to body weight) was higher in CD-1 and S15 strains selected for larger litter sizes. In juvenile male mice exposed for three weeks to E2 (at 0, 2.5, 10, 20 or 40 μg in silastic implants), B6 and C17/Jls were sensitive to E2, showing a maximal suppression of testis weight and spermatogenesis even at the lowest dose of E2 (2.5 μg), with no effect on testis weight or spermatogenesis in CD-1 or S15 at any dose up to 10 μg E2. Thus, Spearow et al.
(2001) demonstrated genetic differences in sensitivity to estrogen that may be related to breeding animals for high fecundity.
Additional studies exposing juvenile male mice from 3 to 7 weeks of age to 0, 0.625, 2.5, 10, 40 and 160 µg E2 in silastic implants showed a dramatic strain difference in susceptibility to endocrine disruption (Spearow et al. 2002; Spearow et al. 2003). CD-1 mice were greater than 195-fold more resistant than B6 mice to the disruption by E2 of testes weight, number of elongated spermatids per tubule, and Spermatogenic Index (SI). CD-1 strain mice were also >41 times more resistant than B6 strain mice to the inhibition by E2 of epididymal sperm counts, and were more resistant than outbred wild-derived Mus spretus mice to the disruption of testes weight and spermatogenesis by E2.
In a separate experiment, immature B6 males, outbred CD-1 males, males of the CD-1-derived inbred strains CD10 and CD3, and F1 crosses were implanted subcutaneously at 3 weeks of age with silastic implants containing 0, 2.5, or 40 µg E2 (Spearow et al. 2003). Susceptibility to endocrine disruption by estrogenic agents (SEDE) was evaluated 4 weeks later by determining testicular weight, histology, and epididymal sperm counts. The effects of Strain, Dose of E2, and the Strain x E2 Dose interaction were all highly significant for testes weight (TW), seminiferous tubule diameter, elongated spermatids per tubule, spermatogenic index, and epididymal sperm counts (P<0.0003) (Spearow et al. 2003). Resistance of mouse strains to disruption of testes weight by E2 ranked: B6 << CD3 < CD10 < CD-1. While CD10 x B6 F1 mice showed limited hybrid vigor (heterosis) for resistance to the disruption of testes weight by E2, the CD10 x CD3 F1 showed a large amount of heterosis in this trait. The data suggest that susceptibility to the disruption of testes weight by estrogen is controlled by additively and non-additively acting genes. Thus, the observed >16-fold to >195-fold strain differences in susceptibility to the disruption of spermatogenesis and testes weight question the adequacy of the standard 10-fold within-species safety factor if only genetically resistant strain(s) are used for toxicological safety testing.
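A hedged sketch of the kind of two-factor analysis reported above (main effects of Strain and E2 Dose plus their interaction), using invented data and column names; this illustrates the statistical design, not a reanalysis of the published results.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented 2 x 2 design with two animals per cell (strain x E2 dose).
df = pd.DataFrame({
    "strain":    ["B6"] * 4 + ["CD-1"] * 4,
    "e2_ug":     [0, 0, 40, 40] * 2,
    "testis_mg": [95, 100, 32, 35, 130, 126, 121, 118],
})

# Model with main effects and the Strain x Dose interaction term.
fit = smf.ols("testis_mg ~ C(strain) * C(e2_ug)", data=df).fit()
print(anova_lm(fit, typ=2))
```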
### 2.5.11 Behavioral Assessments/Clinical Observations
Courtship and mating behaviors in both sexes, as well as maternal and neonatal behaviors involving nesting, pup retrieval, and nursing, are also under the control of the endocrine system. Qualitative evaluation of these behaviors, as they affect viability and ability to thrive, provides another measure of possible endocrine-disrupting activity of a test compound. Strain-related differences in lordotic behavior have been reported: in LE rats exposed gestationally to 1,4,6-androstatriene-3,17-dione, high levels of lordotic behavior were observed in adult male offspring treated with estrogen and progesterone, while SD rats showed only slight effects (Whalen et al. 1986). In an earlier study by Emery and Larsson (1979), Wistar males retained copulatory behavior longer than SD males following castration and systemic para-chlorophenylalanine treatment (which facilitates copulatory behavior). The castrated Wistar males were also more responsive to androgen replacement than SD males. In ovariectomized females, Wistar females were behaviorally more sensitive to estrogen than SD females.
### 2.5.12 Hormonal Controls
The endocrine system (also referred to as the hormone system) is made up of glands located throughout the body, hormones that are synthesized and secreted by the glands into the bloodstream, hormone carrier proteins (e.g., steroid hormone binding proteins, globulin and albumin, α-fetoprotein), receptors in the cell membranes, cytosol and nucleus of the cells of various target organs, and tissues that recognize and respond to the hormones. The function of the system is to regulate a wide range of biological processes, including control of blood sugar (through the hormone insulin from the pancreas), growth and function of reproductive systems (through the hormones T and estrogen and related components from the testes and ovaries), regulation of metabolism (through the hormones cortisol from the adrenal glands and thyroxin from the thyroid gland), development of the brain and the rest of the nervous system (estrogen and thyroid hormones), and development of an organism from conception through adulthood and old age. Normal functioning of the endocrine system, therefore, contributes to homeostasis (the body’s ability to maintain itself in the presence of external and internal changes) and to the body’s ability to control and regulate reproduction, development, and/or behavior. An endocrine system is found in nearly all animals, including mammals, nonmammalian vertebrates (e.g., fish, amphibians, reptiles, and birds), and invertebrates (e.g., snails, lobsters, insects, and other species). In humans, the system comprises more than 50 different hormones, and the complexity in other species appears to be comparable.
Puberty, the period in which sexual maturation occurs, begins in the hypothalamic-pituitary-gonadal (HPG) axis and leads to the development of secondary sex characteristics and fertility in both males and females (Stoker et al., 2000; Goldman et al., 2000). Within the hypothalamus, gonadotropin-releasing hormone (GnRH) from neurosecretory neurons act as the primary controller, whereas in the anterior lobe of the pituitary, gonadotropes, which secrete luteinizing hormone (LH) and follicle-stimulating hormone (FSH), and lactotropes, which secrete prolactin (Prl), serve the controller function. The primary gonadotropin-responsive elements in males are the Leydig and Sertoli cells in the testes, whereas in the female, the thecal and granulosa cells in the ovarian follicle respond.
- **Hypothalamus (GnRH)**. It is generally believed that the CNS is the trigger point for initiation of sexual maturation in the male and female rat (Goldman et al., 2000; Stoker et al., 2000). GnRH is present in the fetal brain and slowly increases until the second postnatal week in females and the third postnatal week in males. At that point, GnRH increases steeply and remains elevated until puberty. At puberty, the GnRH neurons undergo a morphological change, developing spiny-like processes that may be related to an increase in synapses on the cells. It has been shown that at puberty, the GnRH neurons become more responsive to neurotransmitter (norepinephrine and dopamine) stimulation. GnRH is released in a pulsatile manner in both male and female animals, which induces a similar pattern of LH and FSH secretion from the
anterior pituitary. GnRH levels can be viewed as an indicator of initiation of sexual maturation.
- **Pituitary (FSH, LH, Prl, TSH).** The gonadotropins FSH, LH, and Prl, secreted by the anterior pituitary, are essential in the process of sexual maturation. In the male, LH stimulates T secretion by direct action on the Leydig cells in the testis, and FSH binds to the Sertoli cells within the seminiferous tubules to aid spermatogenesis. FSH also increases the number of LH receptors in the testis, which in turn increases T production and testis growth. Increased prolactin is associated with growth of the prostate and seminal vesicle glands. In the female, FSH and LH act on the ovarian follicular granulosa and thecal cells, respectively, to stimulate production of E2, follicular/oocyte maturation, and ovulation. An increase in prolactin levels is essential in the acquisition of vaginal opening and the transition to sexual maturity. Thyroid stimulating hormone (TSH), also from the anterior pituitary, is the trigger for the release of $T_3$ and $T_4$ from the thyroid gland (though $T_3$ is also produced locally in target organs) and is essential in the regulation of thyroid activity (see below).
Endocrine-disrupting chemicals have been shown to alter levels of pituitary hormones, such as prolactin and LH. Cummings et al. (2000) dosed four strains of rats, either diurnally or nocturnally with atrazine (0, 50, 100, or 200 mg/kg/day on the first eight days of pregnancy). There were reductions in LH levels in Holtzman rats and LE rats but not in SD or F344 rats after diurnal dosing, and reductions in LH levels in LE and F344 at the highest dose (200 mg/kg) after nocturnal dosing, with no effect on SD or Holtzman rats. There were strain-related differences in control levels of LH. For example, serum LH levels were low in Holtzman and F344 rats and significantly higher in SD and LE rats. Control progesterone levels tended to be higher in F344 but the ranking of other strains differed according to time of collection. Thus basal levels of pituitary hormones may contribute to the sensitivity of certain strains to endocrine-disrupting chemicals.
In a study by Cooper et al. (2000), 50–300 mg/kg/day of atrazine was administered to ovariectomized SD and LE rats for 1, 3, or 21 days, and surges of LH and Prl induced by estrogen were examined. After one or three doses of atrazine (300 mg/kg), LH and Prl were suppressed in ovariectomized LE but not SD rats. After 21 doses, LH and Prl were suppressed in both rat strains in a dose-dependent manner. Therefore, although a longer exposure resulted in similar effects in both strains, LE rats were more sensitive to shorter exposures of atrazine.
The differential effects of an environmental estrogen, bisphenol A (BPA), were studied in female F344 and SD rats by Steinmetz et al. (1997). Basal levels of Prl were 40 and 25 ng/ml in F344 and SD rats, respectively. Within three days, E2 (in silastic capsules inserted s.c.) significantly increased Prl levels 10-fold in F344 rats but only 3-fold in SD rats, while BPA significantly increased Prl levels 7- to 8-fold in F344 rats, with no effect on SD rats. Interestingly, E2 increased anterior pituitary weight in F344 rats but not in SD rats, while BPA had no significant effect on pituitary weight in either strain.
While the authors speculated that genetic differences in estrogen receptors may be involved in strain-related sensitivities, subsequent studies in uteri and vagina showed that F344 and SD rats differ in the intermediate effects of xenoestrogens downstream of the estrogen receptor (Long et al. 2000).
- **Gonads (T, DHT, E2, P).** Androgens are essential in the development of the male reproductive tract, as well as for feedback regulation of the hypothalamic-pituitary axis, sex accessory organ development and maintenance, and spermatogenesis (Goldman et al., 2000; Stoker et al., 2000). T and DHT are the two most active androgens. Testes descent and development, and maturation of the epididymides, vas deferens, seminal vesicles, coagulating glands, prostate, Cowper's glands, levator ani/bulbocavernosus, and other aspects of the male reproductive tract, are dependent upon T, whereas DHT is responsible for male AGD and is key in the development and maintenance of the external genitalia, prostate, and urethra. E2 and progesterone (P) serve similar developmental and maintenance functions in the female.
In four strains of rats (Holtzman, LE, SD, and F344) dosed with atrazine (0, 50, 100, or 200 mg/kg/day for the first eight days of pregnancy), with one group dosed diurnally and the other nocturnally, Holtzman rats were the only strain to show significant postimplantation loss and decreased P levels (Cummings et al., 2000). In the same study, serum E2 was increased only in SD rats dosed diurnally with the high dose of atrazine (200 mg/kg). In control animals, there were strain-related differences in both P and E2 levels. For example, serum P levels in day 9 controls were significantly higher in F344 and Holtzman rats than in SD or LE rats. At the same time, E2 levels in F344 rats were significantly lower than those of the other three strains. Thus, there are strain differences in control gonadal hormone levels, as well as in response to atrazine.
In 19.5-day-old male rat fetuses, gestational exposure to TCDD (0, 0.5, 0.1, 0.5, or 1.0 μg/kg) was associated with increases in prenatal T and pituitary LH production in Han/Wistar but not LE rats (Haavisto et al., 2001). The lower sensitivity of fetal LE rats may be associated with prenatal T levels that are only 15% of those in Han/Wistar rats. Conversely, adult LE rats are 1000 times more sensitive to TCDD than Han/Wistar rats (Pohjanvirta et al., 1988). Interestingly, strain-related differences in sensitivity to TCDD are thus also age-dependent. In the same report, gestational exposure to DES (100 μg/kg in Han/Wistar rats; 100, 200, or 300 μg/kg in SD rats) significantly decreased prenatal T production in SD and Han/Wistar male rats; thus, the authors concluded that both TCDD and DES exposure *in utero* may interfere with the timing of the prenatal T surge.
- **Thyroid (T₃, T₄).** TSH released from the pituitary gland stimulates the thyroid to secrete triiodothyronine, T₃, and thyroxine, T₄. T₄ is more prevalent in the blood (98%) than is T₃ (2%). T₃ is predominantly produced locally in target tissues. Prenatally, maternal T₄ is essential for normal offspring development. Thyroid hormones are well known to play essential roles in vertebrate
development (Dussault and Ruel, 1981; Myant, 1971; Porterfield and Hendrich, 1993; Porterfield and Stein, 1994; Timiras and Nzekwe, 1989). Experimental work focused on the effects of thyroid hormone on brain development in the neonatal rat supports the concept of a "critical period" during which thyroid hormone must be present to avoid irreversible damage (Timiras and Nzekwe, 1989). Though the duration of this critical period may be different for different thyroid hormone effects, the general view has developed that the period of maximal developmental sensitivity to thyroid hormone occurs during the lactational period in the rat (Oppenheimer et al., 1994; Timiras and Nzekwe, 1989). Although thyroid hormone receptors are expressed in fetal rat brains (Bradley et al., 1989; Strait et al., 1990) and thyroid hormone can exert effects on the fetal brain (Escobar et al., 1990; Escobar et al., 1987; Escobar et al., 1988; Porterfield, 1994; Porterfield and Hendrich, 1992, 1993; Porterfield and Stein, 1994), the lactational period represents a stage of rapid expansion of the thyroid hormone receptors (Perez-Castillo et al., 1985) and an increase in the number of demonstrated effects of changed levels of thyroid hormone on brain development.
In a study of the effects of 2,3,7,8-Tetrachlorodibenzo-$p$-dioxin (TCDD) on two strains of adult rats, LE, and Han/Wistar, there were strain-related differences in control and treated rats (Pohjanvirta et al., 1989). Control $T_3$ values in both strains were not significantly different, but $T_4$ levels were about 1.2 times higher and TSH levels twice as high in Han/Wistar compared to LE rats. Rats were injected once i.p. with 0, 5, 50, or 500 (Han/Wistar only) $\mu g/kg$ of TCDD, and tissues and hormones were collected at 1, 4, 8 and 16 days post treatment. TCDD decreased $T_4$ levels slightly more in LE (59%) compared to Han/Wistar rats (43%) after 4 days. After 16 days, $T_4$ levels had returned to basal levels in the two highest dose groups in the Han/Wistar rats, but in LE rats, $T_4$ levels remained more than two fold lower than controls with no sign of recovery. By day 16, TCDD increased $T_3$ levels in the two highest dose groups in the Han/Wistar rats and had decreased $T_4$ levels by one-half in LE rats. Thus, LE rats exhibited a greater sensitivity to TCDD with respect to thyroid hormone levels. The greater sensitivity of LE rats to TCDD has been confirmed in more recent work by Pohjanvirta et al. (1999) who showed that LE rats are 1000 times more sensitive to the acute lethal effects of TCDD than are Han/Wistar rats.
Another study compared the effects on levels of TSH, $T_3$, and $T_4$ after an endocrine challenge test [with thyrotropin-releasing hormone (TRH) and TSH] in two strains of male rats, SD and F344 (Fai et al., 1999). Both strains responded to the challenge with increases in TSH levels. In F344 rats, there were significant increases in levels of both $T_3$ and $T_4$, while in SD rats, there were increases only in $T_4$.
Strain-related differences in various hormone levels and organ weights were reported after exposure to the weak antiandrogen, $p,p'$-DDE, by O'Connor et al. (1999). After exposure of adult male LE and CD rats for 15 days to $p,p'$-DDE (0, 100, 200, 300 or 350 mg/kg/day in CD rats; 0, 200 or 300 mg/kg/day in LE rats), the following effects were reported.
| Effects in CD rats | Effects in LE rats |
|-----------------------------------------|-----------------------------------------|
| ↓ relative liver weight, ↓ absolute epididymis weight | ↓ relative liver weight, ↓ absolute epididymis and relative accessory organ weight |
| ↑ E2 (≥ 200 mg/kg/day), no change in T, DHT | ↑ E2 (300 mg/kg/day), T, DHT (≥ 200 mg/kg/day) |
| ↓ FSH (≥ 200 mg/kg/day), no change in Prl, LH | no change in FSH, Prl or LH |
| ↑ T₄ (≥ 100 mg/kg/day), no change in TSH | ↑ TSH and ↑ T₄ (≥ 200 mg/kg/day) |
These data demonstrate strain-sensitive differences in response to an endocrine-disrupting chemical. CD rats were much less sensitive to the effects of $p,p'$-DDE than were LE rats.
There is also a report by Dhaher et al. (2000) of intraspecies differences in estrogen receptor number and binding affinity between Balb/c strain mice and a strain used as a systemic lupus erythematosus (SLE) model, MRL/MP-lpr/lpr. The MRL mice showed significantly higher affinity for E2 binding (using 3H-moxestrol as the ligand) than did the Balb/c mice, which may be related to the exacerbation by E2 of SLE in the mouse model.
### 2.5.13 Uterine Weight
Uterine weight is sensitive to estrogenic compounds. The increase in uterine weight after exposure to an estrogenic compound may be due to increased fluid uptake (imbibition), increased cell size (hypertrophy), and/or increased cell number (hyperplasia). Although the uterotrophic assay is being standardized and validated by an OECD/EPA initiative, the measurement of uterine weight is a sensitive parameter for inclusion in many of the *in vivo* screens involving females. The uterotrophic assay was designed to identify chemicals that act as estrogen receptor agonists or antagonists directly on the uterus in ovariectomized females (since the HPG axis is not intact). An alternative is to use intact prepubertal females as the animal model; in this case, the test chemical may affect any point along the HPG axis as well as end organs such as the uterus, and the chemical need not be an ER agonist or antagonist, since ER binding in the uterus is not required. A description of processes and endpoints to evaluate chemical effects on uterine weight follows; a sketch of the resulting decision logic appears after the outline.
1. Ovariectomized adult females are exposed to test material for 3-5 days (po, sc, etc.) and are evaluated for the following:
A. Increased uterine wet/blotted weight, 6 to 24 hours after the test dose (reflecting hypertrophy and hyperplasia, respectively); this must be due to a uterine estrogen receptor-mediated response (since the gonads are absent, the HPG axis is not intact), so an estrogen receptor agonist will be detected.
B. Administration of authentic E2 (the potent endogenous estrogen) plus the test material; if uterine wet and blotted weights increase with E2 alone, but to a lesser extent with E2 plus the test chemical, then the test material is an estrogen antagonist.
2. Intact prepubertal females are exposed to test material for 3 days (po, sc, etc.) prior to normal onset of puberty and are evaluated for the following:
A. Increased uterine wet/blotted weight (which can occur through the HPG axis since it is intact); if uterine weight is increased, the test material is an estrogen-mimic, or estrogen-like (not necessarily an estrogen receptor agonist).
B. Administration of E2 and test material for detection of anti-estrogens (need not be an estrogen receptor antagonist).
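A sketch of the decision logic in the outline above; the boolean flags, model labels, and function name are illustrative and not part of any assay guideline.

```python
def interpret_uterotrophic(model: str,
                           uterus_weight_increased_alone: bool,
                           blunts_e2_response: bool) -> str:
    """model: 'ovx_adult' (ER-mediated only) or 'intact_prepubertal' (HPG intact)."""
    if uterus_weight_increased_alone:
        return ("estrogen receptor agonist" if model == "ovx_adult"
                else "estrogen-mimic (not necessarily an ER agonist)")
    if blunts_e2_response:
        return ("estrogen antagonist" if model == "ovx_adult"
                else "anti-estrogenic (need not be an ER antagonist)")
    return "no uterotrophic signal under this protocol"

print(interpret_uterotrophic("ovx_adult", True, False))            # case 1A
print(interpret_uterotrophic("intact_prepubertal", False, True))   # case 2B
```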
A three-day uterotrophic assay for detecting the estrogenic activity of octylphenol, nonylphenol, methoxychlor, and bisphenol A in prepubertal LE rats was found to be the most accurate method of detecting estrogenic activity when compared to age of VO and estrous cyclicity (Laws et al., 2000), and may provide a sensitive endpoint for detection of endocrine disrupting chemicals that act via estrogen receptor binding (ovariectomized female), or via interaction with the HPG axis (intact prepubertal female).
In studies performed in ovariectomized Wistar, Da/Han, and SD rats, dosed two weeks after surgery with increasing doses of genistein (25, 50, and 100 mg/kg/day), p-tert-octylphenol (5, 50, and 200 mg/kg/day), or bisphenol A (0, 5, 50, and 200 mg/kg/day), with 100 µg/kg EE as a positive control, there was strong stimulation of uterine weight by EE in the Wistar and Da/Han rats and a less pronounced response in the SD rats (Diel et al., 2001). All strains showed comparable slight uterotrophic responses to 50 and 100 mg/kg genistein and comparable moderate uterotrophic responses to 200 mg/kg p-tert-octylphenol. No applied dose of bisphenol A stimulated uterine wet weight in Wistar or SD rats, whereas in the Da/Han rats a slight stimulation was detected at the highest dose (200 mg/kg bw). These studies demonstrated strain- and chemical-specific sensitivity in the uterotrophic assay, with SD rats less sensitive than Da/Han rats to EE and BPA. While Wistar rats were more sensitive than SD rats to EE, both Wistar and SD rats were resistant to BPA.
Similarly, EE (1, 3, 10, or 30 µg/kg/day), DES (0.5, 1.5, 5, and 15 µg/kg/day), and a weak phytoestrogen, coumestrol (CE; 10, 35, 75, and 150 mg/kg/day), as positive controls, produced increases in uterine weight in SD and F344 rats (McKim et al., 2001). However, in response to octamethylcyclotetrasiloxane (D4; 0, 10, 50, 100, 250, 500, and 1000 mg/kg/day), the maximal uterine weight was increased 160% relative to control values in SD rats but only 86% relative to control values in F344 rats. Thus, SD rats were more sensitive to the effects of D4 in the uterotrophic assay. McKim et al. (2001) suggested, based on pharmacokinetic data, that the metabolism of D4 may be slower in SD than in F344 rats.
In a study by Christian et al. (1998), the uterotrophic assay was compared across three rat strains: Wistar-Chbb:THOM-SPF, Wistar-CRL:(W)lBR, and SD (Charles River). When administered DES, all three strains exhibited a positive uterine response, with a similar response in both Wistar strains and a slightly lower response in SD rats. Variability of responses was associated with background spontaneous incidences of abnormally high relative uterine weights, possibly due to fluctuations in estrogen occurring between pnd 21 and 25. Because control mean uterine weights differed between strains (means were lower in Wistar-Chbb:THOM-SPF than in Wistar-CRL and SD), the criteria for biological outliers differed as well: $\geq 0.15\%$ in Wistar-Chbb:THOM-SPF versus $\geq 0.20\%$ in Wistar-CRL and SD rats. Christian et al. (1998) demonstrated the importance of historical control data in the determination of statistically significant effects in subsequent studies.
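The outlier criteria quoted above are expressed as relative uterine weight, i.e. uterine weight as a percentage of body weight. Below is a minimal sketch of that calculation, assuming the strain-specific thresholds reported by Christian et al. (1998); the helper functions themselves are hypothetical:

```python
def relative_uterine_weight(uterine_mg: float, body_g: float) -> float:
    """Uterine weight as a percentage of body weight (weights converted to grams)."""
    return 100.0 * (uterine_mg / 1000.0) / body_g

# Strain-specific biological-outlier thresholds (% of body weight),
# as reported by Christian et al. (1998).
OUTLIER_THRESHOLD = {
    "Wistar-Chbb:THOM-SPF": 0.15,
    "Wistar-CRL": 0.20,
    "SD": 0.20,
}

def is_biological_outlier(strain: str, uterine_mg: float, body_g: float) -> bool:
    """Flag an animal whose relative uterine weight meets the strain threshold."""
    return relative_uterine_weight(uterine_mg, body_g) >= OUTLIER_THRESHOLD[strain]

# An invented example: a 55 g weanling with a 90 mg uterus has a relative
# uterine weight of about 0.164%, an outlier for Wistar-Chbb:THOM-SPF
# but not for SD under these criteria.
print(is_biological_outlier("Wistar-Chbb:THOM-SPF", 90, 55))  # True
print(is_biological_outlier("SD", 90, 55))                    # False
```

This is why historical control data matter: the same absolute uterine weight can be a statistical outlier in one strain and unremarkable in another.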
Sometimes genetic differences have no observable effect on an endocrine endpoint. A study by Odum et al. (1999a) investigated the effects of p-nonylphenol (NP) (0 to 250 mg/kg/day orally for 3 or 11 days; and 0 to 7.12 mg/kg/day via mini-pumps implanted s.c. for 11 days) in the uterotrophic assay in Alderley Park (Wistar-derived) and SD rats. Results were similar in both strains, with positive responses to both DES (0.01 mg/kg/day) and NP (250 mg/kg/day) that were of the same magnitude as in previous studies performed in Noble rats (Odum et al., 1999b). The uterotrophic effects of NP and DES were thus found to be independent of rat strain.
### 3.0 Interspecies Similarities and Differences in Endocrine Endpoints
Few studies have been conducted in a single laboratory comparing the effects of endocrine-disrupting chemicals in more than one strain within a species, and even fewer have compared their effects in more than one species within a single laboratory. Therefore, the criterion that only studies performed in the same laboratory across species would be included in this white paper could not be applied. One review paper that examined many studies of many chemicals across many laboratories in mice versus rats was included, as well as other studies comparing endocrine endpoints across species in response to endocrine-disrupting chemicals.
There are many reviews on species differences in reproductive and developmental toxicology studies. Among these is a comparison of reproductive organ weights, sperm parameters, and vaginal cytology from fifty 13-week studies involving 24 chemicals in seven different laboratories (and four routes of exposure) for the National Toxicology Program in B6C3F$_1$ mice and F344 rats (Morrissey et al., 1988). Considerable interlaboratory variability was demonstrated, but overall it was concluded that there were no differences in the types of sperm head abnormalities between control and treated rats and mice, and that testis, epididymis, and cauda epididymis weights and sperm motility were the most statistically powerful endpoints evaluated. Of all the chemicals tested, only one, methylphenidate in the rat, produced an increase in abnormal sperm without effects on any other male endpoint. The agreement between rats and mice in these endocrine endpoints in response to reproductive toxicants was about 58%. A combination of confounding factors and species differences may have accounted for the disparity in toxicological data.
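The roughly 58% rat-mouse agreement can be reproduced as a simple concordance calculation. The sketch below uses the agreement definition given later in this review (no response in either species, or at least one endpoint affected in both species); the outcome data are invented purely for illustration:

```python
def concordance(rat_affected: list[bool], mouse_affected: list[bool]) -> float:
    """Fraction of chemicals on which the two species agree.

    Each element is True if at least one endocrine endpoint was affected
    for that chemical; agreement means both True or both False
    (per the scoring used for Morrissey et al., 1988).
    """
    agree = sum(r == m for r, m in zip(rat_affected, mouse_affected))
    return agree / len(rat_affected)

# Invented outcomes for 12 hypothetical chemicals.
rats = [True, True, False, True, False, True, True, False, True, True, False, True]
mice = [True, False, False, True, True, True, False, False, True, False, True, True]
print(f"{concordance(rats, mice):.0%}")  # 58% for this invented example
```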
Several reports have focused on differences in endocrine endpoints between rats and mice. In the uterotrophic assay, the estrogenic activity of parabens was assessed in B6D2F1 mice and Wistar rats (Hossaini et al., 2000). The parabens tested were methyl-, ethyl-, propyl- and butyl $p$-hydroxybenzoate, and $p$-hydroxybenzoic acid, administered either orally or subcutaneously for three days at doses up to 1000 mg/kg/day, with E2 (0.1 mg/kg) as a positive control. In the mouse uterotrophic assay, there was no significant effect on uterine weight at doses up to 1000 mg/kg/day for any paraben. In the rat uterotrophic assay, 600 mg/kg/day of butylparaben produced a positive response. Thus the estrogenic activity of parabens was weak in rats and was not observed in mice.
Two separate studies reported species differences in the ovarian toxicity of reproductive toxicants. In one study (Doerr et al., 1996), 1,3-butadiene epoxides (butadiene monoepoxide, BMO; and butadiene diepoxide, BDE) were administered intraperitoneally at doses up to 1.43 mmol/kg/day for 30 days to young SD rats and B6C3F1 mice. BMO was ovotoxic in mice, producing decreases in follicle counts and reproductive organ weights, with no effects in rats at the doses tested. BDE was ovotoxic in both rats and mice, with greater sensitivity in mice: uterine and ovarian weights were reduced in mice at lower concentrations than in rats, and follicle counts were greatly reduced in mice at lower BDE doses than in rats. The authors speculated that metabolic differences affecting the conversion of BMO to BDE may be responsible. In another study (Takizawa et al., 1985), intraovarian injection of increasing concentrations of benzo(a)pyrene, a polycyclic aromatic hydrocarbon, produced a dose-dependent reduction in small oocytes at doses ranging from 0.01 to 30 $\mu g$/ovary in C57BL/6N and DBA/2N mice and 0.8 to 240 $\mu g$/ovary in SD rats. Thus effects of benzo(a)pyrene on small oocyte number were present in both SD rats and two strains of mice, although at much higher doses in the rats.
Cadmium has also been found to induce ovarian toxicity in animals. In a study by Rehm and Waalkes (1988), the effects of cadmium were assessed in immature and mature female Syrian hamsters, four mouse strains (BALB/cAnNCr, DBA/2NCr, C57BL/6NCr, NFS/NCr), and two rat strains [F344 and Wistar-Furth (WF)] after a single subcutaneous injection of 20 to 47.5 $\mu mol/kg$, and the reproductive tracts were examined by light microscopy. Syrian hamsters were the most sensitive to cadmium-induced ovarian hemorrhagic necrosis, particularly shortly before ovulation. In mice, only the DBA/2NCr strain showed significant cadmium-induced ovarian hemorrhagic necrosis, and uterine lesions were rare in all mouse strains. Although rats showed dose- and age-dependent toxicity of the ovaries, uterus, cervix, and liver, cadmium induced uterine lesions only in mature F344 rats, not in WF rats. Thus species and strain differences in cadmium-induced reproductive toxicity were reported.
Just as female mice were more sensitive than female rats to the ovotoxic effects of 1,3-butadiene epoxides (Doerr et al., 1996), male mice were found to be more sensitive than male rats to the reproductive effects of 1,3-butadiene. Anderson et al. (1998) compared the effects of 10 weeks of inhalation exposure to 1,3-butadiene prior to mating (up to 1250 ppm in rats and up to 130 ppm in mice) in male CD-1 mice and male SD rats. Exposure in mice resulted in F₁ abnormalities and increases in early deaths; none of these effects were observed at the same exposure concentrations in SD rats.
In an earlier study by Brinkworth, Anderson, and McLean (1992), dietary restriction in CD-1 mice increased abnormal sperm, which may have been related to the decrease in calories, and decreased epididymal sperm counts, which may have been due to a lack of protein. In CD rats, dietary restriction only reduced epididymal sperm count. Thus dietary changes had different impacts on spermatogenesis in mice compared to rats.
A differential sensitivity to the developmental toxicity of BPA was reported by Morrissey et al. (1987) in CD-1 mice compared to CD rats. Mice and rats were dosed daily from gd 6 to 15 with 0-640 mg/kg/day BPA (rats) or 0-1250 mg/kg/day BPA (mice). Maternal toxicity (evidenced by significant decreases in maternal weight gain) occurred in rats at ≥ 160 mg/kg/day but only at a much higher dose (1250 mg/kg/day) in mice. No fetal toxicity occurred at doses up to 640 mg/kg/day in rats, whereas in mice at 1250 mg/kg/day there were significant increases in the number of resorptions and a decrease in average fetal body weight. Maternal rats were thus more sensitive to toxic doses of BPA than maternal mice, and fetal malformations were not observed in either species.
Species differences in endocrine receptor binding characteristics have been reported in *in vitro* experiments. Matthews et al. (2000) examined the ability of several natural and synthetic chemicals to compete with [³H]-E2 for binding to bacterially expressed estrogen receptor alpha D, E and F domains from five species (human, mouse, chicken, anole, and rainbow trout) fused to glutathione S-transferase (GST). While all of these fusion proteins displayed high affinity for E2 (Kd of 0.3 to 0.9 nM), the species differed in binding affinity for other estrogenic chemicals. For example, the rainbow trout ER fusion protein showed much higher relative binding affinity for alpha-zearalenol, bisphenol A, octylphenol, o,p'-DDT, methoxychlor, p,p'-DDT, o,p'-DDE, p,p'-DDE, alpha-endosulfan and dieldrin than the ERs from human, mouse, chicken and green anole (Matthews et al., 2000). Thus sequence variation in the ER ligand-binding domain between strains and between species could be a major source of variation in EDSP assays when evaluating compounds that act via the ER.
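In competitive-binding studies of this kind, relative binding affinity (RBA) is conventionally expressed as the IC50 of the reference ligand (E2) divided by the IC50 of the competitor, scaled so that E2 itself scores 100. Below is a minimal sketch of that convention; the IC50 values are invented and do not reproduce the Matthews et al. (2000) data:

```python
def relative_binding_affinity(ic50_e2_nm: float, ic50_test_nm: float) -> float:
    """RBA = (IC50 of E2 / IC50 of competitor) * 100; E2 itself scores 100."""
    return 100.0 * ic50_e2_nm / ic50_test_nm

# Invented IC50s for the same weak xenoestrogen competing against a mammalian
# ER versus a fish ER with a more permissive ligand-binding domain.
print(relative_binding_affinity(3.0, 30000.0))  # mammalian ER: RBA = 0.01
print(relative_binding_affinity(3.0, 300.0))    # fish ER:      RBA = 1.0
```

On this scale, a hundred-fold difference in IC50 between species translates directly into a hundred-fold difference in RBA, which is the kind of interspecies divergence the Matthews et al. (2000) comparison reports.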
### 4.0 Summary and Conclusions: Intraspecies and Interspecies Similarities and Differences in Endocrine Endpoints
Endocrine-mediated toxicity of chemicals varies among strains of the same species and among different species, and endocrine endpoints vary in their sensitivity to chemicals across strains and species. Sensitivity to endocrine-active chemicals clearly depends on the endpoint evaluated, the chemical administered, and the genotype of the animal model. Genetic variability, which exists within outbred strains, between inbred and outbred strains, and in the animal and human populations to which the results will be applied, has the potential to confound the detection and interpretation of reproductive toxicity. The following table (Table 2) summarizes strain-related similarities and differences in endocrine endpoints from key research articles designed to investigate the effects of endocrine disruptors. The focus of the comparison is on rat strains, with some examples in mouse strains.
Table 2 Summary of Strain-Related Similarities and Differences in Endocrine Endpoints

| Endocrine Endpoints | Strains | Chemical | Similarities | Differences | Key References |
|---------------------|--------------------------|---------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------|----------------|
| Uterine Weight | Wistar, SD, Da/Han | Ethinyl Estradiol (EE), BPA | Wistar and SD less sensitive to BPA than Da/Han | SD less sensitive than Da/Han to EE and BPA, Wistar more sensitive than SD to EE. | 1Diel et al., 2001 |
| | Alderley Park, SD | NP | Results were similar in both strains, with a positive response to NP (250 mg/kg/day) that was of the same magnitude as in previous studies performed in Noble rats (Odum et al., 1999b) | 250 mg/kg NP resulted in a slightly greater uterotrophic response in Alderley Park (1.84-fold) than in SD (1.55-fold) rats. | 2Odum et al., 1999a |
| | SD, F344 | E2, BPA | Both strains showed increased uterine weight in response to E2 | F344 more sensitive to BPA and E2 | 3Steinmetz et al., 1998 |
| | SD, F344 | EE, DES and octamethylcyclotetrasiloxane (D4) | EE, DES and D4 produced similar positive uterine response in SD and F344 rats. | Maximal uterine response to D4 was two-fold higher in SD than F344 rats. F344 more sensitive to EE. | 4McKim et al., 2001 |
| | Wistar, SD | DES | SD and Wistar(CRL) had similar control values for uterine weight; similar uterotrophic response in both Wistar strains. | Wistar(Chbb-THOM-SPF) had lower mean uterine weights in controls versus SD and Wistar(CRL), less variability in response to DES | 5Christian et al., 1998 |
| | SD, F344 | E2, 4-OH tamoxifen (4-OHT) | F344 and SD showed similar uterine weight responses to E2 | F344 more sensitive than SD to E2 induced uterine epithelial cell height. SD more sensitive than F344 to induction of uterine weight and epithelial cell height by 4-OHT | 6Bailey et al., 2002 |
| Male and female sexual development (AGD, PPS, VO) | SD, LE | p,p'-DDE; flutamide | PPS occurred at the same time in LE and SD controls; p,p'-DDE had no effect on PPS or VO in either strain; flutamide decreased AGD and caused nipple retention and changes in male reproductive organ weights in both strains | p,p'-DDE produced a significant decrease in AGD in male LE rats but not SD; p,p'-DDE produced nipple retention at a lower dose in SD than LE; VO occurred earlier in LE controls than in SD controls | 7 You et al., 1998 |
| | SD, F344 | β-E2-3-benzoate | neonatal exposure to 1.5 mg/kg/day decreased male reproductive organ weights in both F344 and SD males (pnd 90), and delayed PPS | At pnd 90, in rats exposed to 1.5 mg/kg/day, there were greater reductions in reproductive organ weights in F344 rats than in SD rats (greater responsiveness in F344, higher sensitivity in SD) | 8 Putz et al., 2001 |
| Female reproductive tract | SD, Alderley Park | BPA | There was no effect on age of first estrus in either strain. | VO delayed in Alderley Park, but not in SD rats | 9 Tinwell et al., 2002 |
| | F344, SD | BPA | Metabolic clearance of BPA is same | BPA increased DNA synthesis and cell proliferation in the vaginal epithelium of F344 rats but not of SD rats | 10 Long et al., 2000 |
| | LE, Holtzmann | TCDD | 1 μg/kg on gd 15 produced malformations of F1 offspring female external genitalia and increased UVD in both strains. | | 11 Gray and Ostby, 1995 |
| Male reproductive tract/andrology | F344, ACI, Lewis, SD, Wistar | 3,2'-dimethyl-4-aminobiphenyl (DMAB) | | The tumorigenic response in the prostate to DMAB was greatest in F344 > ACI > Lewis > CD rats, and Wistar rats were insensitive. | 12 Shirai et al., 1990 |
| | LE, SD | p,p'-DDE | In both SD and LE rats, there were increases in liver weight and in E2, decreases in T4, and no change in PRL or LH. | In SD rats there was a decrease in epididymis weight, no change in T, DHT, or TSH, and a decrease in FSH. In LE rats, there was an increase in epididymis weight, increases in T, DHT and TSH, and no change in FSH. | 13 O'Connor et al., 1999 |
| | LE, Wistar | Vinclozolin | 200 mg/kg/day from gd 14 to pnd 3 produced malformations of male external genitalia, nipple retention lasting into adulthood, and increased inflammation of the epididymides, prostate, seminal vesicles and coagulating glands in both strains. 12 mg/kg produced transient retention of nipples/areolae in preweaning males of both strains. Similar NOAELs in both strains (12 and 6 mg/kg body weight in Wistar and LE, respectively). | 200 mg/kg/day from gd 14 to pnd 3 reduced maternal body weights and pup weights in Wistar but not LE. Chronic inflammation of the urinary bladder and testis atrophy were noted in LE but not Wistar. At 12 mg/kg/day, nipple/areolae retention persisted into adulthood in LE, not in Wistar. Adult offspring LE also had reduced prostate, seminal vesicle, and coagulating gland weights; these effects were not seen in Wistar adult offspring. | 14 Hellwig et al., 2000 |
| | AP, SD | BPA | No difference in PPS | 50 mg/kg BPA decreased total sperm count and daily sperm production in AP (Wistar-derived) rats but not in SD rats. | 15 Tinwell et al., 2000 |
| | Wistar, SD and Dark Agouti (DA)| none | | DA exhibited lower absolute testis weight than the other two strains, no difference in sperm count among the three strains. | 16 Wilkinson et al., 2000 |
| | SD versus other strains (review)| lead | | SD more sensitive than other strains to testicular toxicity of lead. | 17 Apostoli et al., 1998 |
| | F344, SD | Neonatal E2-3-benzoate (EB) | | F344 more sensitive and responsive than SD to reductions in testis, epididymis, seminal vesicle, and coagulating gland weights. | 18 Putz et al., 2001 |
| Estrous cycle/ovulation | F344, LE, SD, Brown Norway | diet (feed restriction) | | Prior to food restriction, Brown Norway rats had irregular estrous cycle patterns while SD, LE and F344 rats had regular estrous cycle patterns. By day 5 of food restriction, 75% of F344 rats and 100% of Brown Norway rats had stopped cycling, while SD and LE rats were unaffected. | 19Tropp et al., 2001 |
| | Donryu, F344 | none | | In aging Donryu rats, estrous cycle abnormalities (e.g. persistent estrus) were more common than in F344 rats. | 20Ando-Lu et al., 1998 |
| | LE, SD | atrazine | | LE rats were more sensitive than SD rats to atrazine-induced disruption of the ovarian cycle. | 21Cooper et al., 2000 |
| | SD, F344 | atrazine | | Atrazine administered to SD and F344 rats for up to 12 months produced changes in estrous cyclicity (an increased number of days of vaginal estrus), increased E2, decreased P, and an increased incidence of mammary tumors in SD rats only, with no effect in F344 rats. | 22Eldridge et al., 1994 |
| Gonadal/Pituitary Hormone levels | Han/Wistar, SD, LE | TCDD, DES | gestational exposure to DES (100 µg/kg in Han/Wistar rats; 100, 200 or 300 µg/kg in SD rats) significantly decreased prenatal T production in SD and Han/Wistar male rats | In 19.5 day old male rat fetuses, gestational exposure to TCDD (0, 0.5, 0.1, 0.5 or 1.0 µg/kg) was associated with increases in prenatal T and pituitary LH production in Han/Wistar but not LE rats. | 23 Haavisto et al., 2001 |
| | SD, LE | atrazine | After 21 doses of atrazine, LH and Prl were suppressed in both SD and LE rats. | LH and Prl were suppressed in LE but not SD rats after 1 and 3 doses of atrazine. | 24 Cooper et al., 2000 |
| | SD, F344 | E2, BPA | BPA had no effect on pituitary weight in either strain. | After 3 days of E2 exposure, Prl was increased 10X in F344 but only 3X in SD, and pituitary weight was increased in F344 but not SD. BPA increased Prl 7-8X in F344, with no effect in SD rats. | 25 Steinmetz et al., 1997 |
| | SD, LE, Holtzman, F344 | atrazine | | Serum E2 was increased by atrazine in SD rats; only the Holtzman strain showed decreased P levels. Control and treated E2 levels were much lower, and non-detectable, in diurnal F344 rats, while control E2 levels were higher nocturnally in LE rats. Diurnal atrazine reduced LH in Holtzman and LE but not in SD or F344 rats; nocturnal atrazine reduced LH in LE and F344 but not in SD or Holtzman rats. | 26 Cummings et al., 2000 |
| Thyroid hormone levels | LE, Han/Wistar | TCDD | $T_4$ levels in controls were comparable | $T_4$ and TSH levels were higher in Han/Wistar controls; TCDD produced a greater reduction in $T_4$ levels in LE than in Han/Wistar rats. | 27 Pohjanvirta et al., 1989 |
| | SD, F344 | TRH, TSH | | Challenge with TSH and TRH increased levels of $T_4$ and $T_3$ in F344 rats, but increased $T_4$ only in SD rats. | 28 Fail et al., 1999 |
| Fertility/reproductive parameters | SD, LE, Holtzman, F344 | atrazine | No effect of atrazine on pre- or post-implantation loss in SD and LE rats | Increase in % preimplantation loss in F344; SD and LE not affected. Increase in % postimplantation loss in Holtzman rats only. | 29 Cummings et al., 2000 |
| | SD, LE, F344 | atrazine | full litter resorption at highest dose (200 mg/kg/day) in all three strains. | | 30 Narotsky et al., 2001 |
| | F344, SD | BDCM | 75 mg/kg/day significantly reduced body weights in both strains | 75 mg/kg/day BDCM produced 62% litter resorption in F344 rats, with no effect in SD; at other doses (50 and 100 mg/kg/day) pregnancy loss also occurred in F344 but not SD rats. | 31 Bielmeier et al., 2001 |
| | Wistar Hannover, CD | none | | Control Wistar Hannover rats have lower pregnancy rates and litter sizes, and higher % pre- and post-implantation loss and resorptions, versus control CD® rats. | 32 Liberati et al., 2002 |
| organ weights, histopathology (rats) | SD, F344 | E2 | | In ovariectomized rats exposed to E2 (silastic implants) for 10 or 20 days, pituitary weights and Prl levels were dramatically increased in F344, minimal effects in SD rats. | 33Schechter et al., 1987 |
| | Brown-Norway, F344, Wistar, Donryu | E2 | No difference in pituitary weights of Wistar and Donryu rats after E2 exposure. | In ovariectomized rats, control pituitary gland weights were lowest in Brown-Norway rats, followed by Wistar rats. After E2 treatment (10 mg s.c. pellet for 4 weeks), there was a significant >3-fold increase in pituitary weights in F344 rats and a significant >0.5-fold increase in pituitary weights in Brown-Norway rats. | 34Yin et al., 2001 |
| | F344, BN, SD | DES | | F344 sensitive to DES-induced pituitary tumors; SD and BN resistant to effects of DES on pituitary gland. | 35Wendell et al., 1996, 1997, 1998, 2000; Chun et al., 1998 |
| | F344, WF | Cadmium | | Cadmium induced toxicity of the ovaries, uterus, cervix, and liver; cadmium-induced uterine lesions occurred in F344 but not WF rats. | 36Rehm and Waalkes, 1988 |
| Organ weights, histopathology, spermatogenesis (mice) | Jcl:ICR and CD-1 mice | di-2-ethylhexylphthalate (DEHP) | Increase in liver weight in both strains | DEHP decreased testicular weight in CD-1 mice but not in Jcl:ICR mice | 37Oishi et al., 1993 |
| | C57BL/6N and ICR mice | BPA, E2 | BPA did not affect male reproductive organ weights during any dose/exposure period | E2 (10 μg/kg from pnd 27 to 48, as a positive control) produced significant decreases in absolute and relative testes, epididymides, and seminal vesicle weights compared to controls in C57BL/6N mice, while ICR mice were unaffected. | 38Nagao et al., 2002 |
| | C57BL/6J (B6), CD-1, C17/Jls, and S15 | E2 | | B6 and C17/Jls were sensitive to E2 showing a maximal suppression of testis weight and spermatogenesis even at the lowest dose of E2 (2.5 μg), with no effect on testis weight or spermatogenesis in CD-1 or S15 up to 10 μg E2. | 39Spearow et al., 1999; 2001 |
| | BALB/cAnNCr, DBA/2NCr, C57BL/6NCr, NFS/NCr | Cadmium | | Cadmium-induced ovarian hemorrhagic necrosis was observed only in DBA/2NCr mice | 40Rehm and Waalkes, 1988 |
1-40 Numbered footnotes identify the key references in Table 2; the same numbers are used to cite these references in Table 3.
Thus, after conducting this literature review, strain-related differences in effects on endocrine-mediated endpoints in response to a wide variety of endocrine-disrupting chemicals were obvious. The sensitivities of various strains to chemicals producing effects on various endocrine endpoints are summarized in Table 3. Since effects on many of the endpoints were also chemical-specific, the chemicals are also included in Table 3.
Table 3 Summary of Agent- and Endpoint-Specific Intraspecies Differences
| Endocrine Endpoint | Chemical | Sensitive* Strains | Less Sensitive/Insensitive Strains | References (from Table 2) |
|--------------------|------------|--------------------------|-----------------------------------|---------------------------|
| Uterine Weight | EE | Wistar, Da/Han | SD | 1 |
| | BPA | Da/Han | Wistar, SD | 1 |
| | NP | AP>SD | | 2 |
| | EE, DES | SD, F344 | | 3 |
| | D4 | SD | F344 | 4 |
| | E2 | SD ≈ F344 | | 6 |
| | tamoxifen | SD | F344 | 6 |
| AGD | p,p'-DDE | LE | SD | 7 |
| | flutamide | SD, LE | | 7 |
| Nipple retention | p,p'-DDE | SD | LE | 7 |
| | flutamide | SD, LE | | 7 |
| | vinclozolin| LE > Wistar | | 14 |
| PPS | E2 | F344, SD | | 8 |
| | p,p'-DDE | | SD, LE | 7 |
| VO | p,p'-DDE | | SD, LE | 7 |
| | BPA | AP | SD | 9 |
| Male reproductive organ wts. | flutamide | LE, SD | | 7 |
| | E2 | F344, SD | | 8 |
| | low dose E2| SD | F344 | 8 |
| | vinclozolin| LE | Wistar | 14 |
| | BPA | | C57BL/6N, ICR | 38 |
| | E2 | C57BL/6N | | 38 |
| | E2 | B6, C17/Jls | ICR, CD-1, S15 | 39 |
| | DEHP | CD-1 | Jcl:ICR | 37 |
| Estrous cycle/ovulation | feed restriction | F344, BN | SD, LE | 19 |
| | atrazine | LE | SD | 21 |
| | atrazine | SD | F344 | 22 |
| Fertility/gestational effects | atrazine | Holtzman | F344, SD, LE | 29 |
| | atrazine | F344 | SD, LE | 30 |
| | BDCM | F344 | SD | 31 |
| Andrology | BPA | AP | SD | 15 |
| | lead | SD | | 17 |
| | E2 | B6, C17/Jls | CD-1, S15 | 39 |
| Hormone Levels | p,p'-DDE | SD (FSH, E2, T₄) | LE (FSH, Prl, LH) | 13 |
| | p,p'-DDE | LE (E2, T₄, T, DHT, TSH) | SD (Prl, LH, T, DHT, TSH) | 13 |
| | atrazine | SD (E2, P) | F344 | 22 |
| | TCDD | Han/Wistar (T, LH) | LE (T, LH) | 23 |
| | atrazine | LE (LH, Prl) | SD (LH, Prl) | 24 |
| | atrazine | Holtzman (P) | SD (E2, P) | 26 |
| | E2 | F344 (Prl) | SD (Prl) | 25 |
| | BPA | F344 (Prl) | SD (Prl) | 25 |
| | TCDD | LE (T₄) | | 27 |
| | TSH, TRH | SD, F344 (T₄) | SD (T₃) | 28 |
| | TSH, TRH | F344 (T₃) | | 28 |
| Pituitary Weights | E2 | F344 | SD | 33 |
| | E2 | F344>BN | Wistar, Donryu | 34 |
| | DES | F344 | SD, BN | 35 |
| Histopathology (reproductive organs) | BPA | F344 (females) | SD (females) | 10 |
| | DMAB | F344>ACI>Lewis>CD (males) | Wistar (males) | 12 |
| | vinclozolin | LE (males) | Wistar (males) | 14 |
| | cadmium | F344 (females) | WF (females) | 36 |
| Histopathology | atrazine | SD (females) | F344 (females) | 22 |
*“Sensitive” refers to a greater response, and may not reflect dose level
Table 3 summarizes the intraspecies comparisons of the effects of endocrine-disrupting chemicals on endocrine endpoints in EDSP assays. In the uterotrophic assay, the SD rat, like many other strains, is sensitive to many endocrine disruptors, with the exception of EE and BPA in one study. Less sensitive strains were F344 in response to D4 and Wistar in response to BPA. LE and SD rats were differentially sensitive to the effects of flutamide and p,p'-DDE on AGD and nipple retention. SD and F344 rats showed sensitivity of PPS in response to E2, while SD and LE rats were insensitive to p,p'-DDE. AP rats were sensitive to the effects of BPA on VO, while SD rats were less sensitive. Sensitivity to effects on male reproductive organ weights was found in most rat strains and chemicals studied, except for F344 (low-dose E2) and Wistar (vinclozolin). Different strains of mice were differentially sensitive to the effects of E2 on male reproductive organ weights and andrology. Different strains of rats were differentially sensitive to the effects of BPA, atrazine, BDCM, p,p'-DDE, TCDD, E2, lead, cadmium, DES, vinclozolin, and DMAB on estrous cycle, fertility, pregnancy loss, andrology, hormone levels, and histopathology of the reproductive organs. Pituitary weights were affected in F344 rats after exposure to DES and E2, while in other strains pituitary weights were unaffected. Overall, no one strain was sensitive to all of the endocrine-disrupting chemicals at most of the endocrine endpoints. Effects on endocrine endpoints were dependent on strain and chemical, and effects within strains were dependent on the endpoints and chemicals. There were no clear patterns indicating an optimal strain for detecting the effects of the majority of endocrine-active compounds tested, nor was it clear whether inbred or outbred strains would be the better choice in species/strain selection. In selecting the appropriate species/strain for EDSP assays, it may therefore be important to consider the endocrine endpoints assessed and the test chemical employed. It would obviously be more thorough to conduct multi-strain assays to increase the chances of detecting endocrine effects, but it would be extremely difficult to determine which strains to choose, given the considerable variation across strains even under the same experimental conditions (which would minimize confounders).
The five example assays under consideration by the EDSP (Table 1) were compared against the strain sensitivities in Table 3. The one-generation, two-generation, and *in utero* lactational assays assess a similar array of endpoints, ranging from gestational indices, reproductive development, onset of puberty, estrous cyclicity, hormone levels, and andrology to organ weights and histopathology. Selecting a rat strain that is sensitive at the greatest number of endpoints is difficult because of the different strain sensitivities observed with different chemicals. For example, for a single chemical such as $p,p'$-DDE, SD rats are sensitive at about half of the endpoints and LE rats are sensitive at the other half. These data support the case for performing reproductive toxicity assays in more than one strain to maximize the probability of detecting an effect at an endocrine endpoint. Uterine weight, as assessed in the uterotrophic assay (currently part of the proposed *in utero* lactational assay), is an endpoint that appears relatively sensitive across strains. In a review of uterotrophic assays performed in 19 different laboratories and two model systems (ovariectomized and immature rats), the response to EE was robust, reproducible, and sensitive across laboratories regardless of differences in strain, diet, bedding, housing, and vehicle (Kanno et al., 2001). For endpoints like fertility and gestational parameters, the SD rat appears less sensitive than the F344 rat to several chemicals, suggesting that the F344 rat may be a better strain for assessing the effects of chemicals at these endpoints. However, the F344 has a small litter size, which reduces the number of animals available for multiple evaluations, and has a high incidence of spontaneous testicular tumors, which may confound potential effects on male reproductive organs. Both the adult male and the pubertal male and female assays include assessments of hormone levels. Chemical effects on hormone levels were highly strain-dependent, with no one strain insensitive to changes in every hormone level. Including enough different hormone level determinations would enhance the possibility of detecting a chemical-induced change in one strain. Inclusion of an additional rat strain (e.g. the LE rat) in the pre-validation of EDSP assays may provide more information on the strain sensitivity of the various endpoints and assays, in addition to providing more flexibility to laboratories in selecting strains for performance of the assays.
Since interspecies comparisons performed under the same experimental conditions are few, and strain is obviously also a confounder in comparing species-related differences in sensitivity to endocrine-disrupting chemicals, conclusions about such comparisons are limited. There are differences in the responses of the rat and mouse reproductive tracts to various reproductive toxicants: in some cases mice appear more sensitive, and in other cases rats are more sensitive. In a comparison of reproductive organ weights, sperm parameters, and vaginal cytology from fifty 13-week studies involving 24 chemicals in seven different laboratories (and four routes of exposure) for the National Toxicology Program in B6C3F$_1$ mice and F344 rats (Morrissey et al., 1988), there was considerable interlaboratory variability due to confounding factors (such as different suppliers and environmental conditions), resulting in only 58% agreement in endocrine endpoints (i.e. either no response in either species, or at least one endpoint affected in both species) in response to reproductive toxicants. It is possible that EDSP assays involving only one species and strain, namely the SD rat, may not detect effects on endocrine endpoints that occur in mice (and vice versa). However, a major question that cannot be answered is which animal model will provide the most appropriate data on the ability of a test chemical to interact with the endocrine system, in order to predict the effects of endocrine-active chemicals in humans and/or other species of concern. Since the most relevant or sensitive animal model cannot be identified from the existing data (sensitivity depends on the endpoint(s) chosen, the chemicals evaluated, the timing, duration and route of exposure, and the dose levels, and is confounded by varying genotype-by-environment interactions), the main issue is whether to use inbred or outbred strains. Inbred strains are homogeneous at all loci and have a limited range of responses (less variability, but an effect may be missed), though using several genetically defined inbred strains in endocrine screens may provide a wider spectrum of responsivity. If a single strain is to be selected for endocrine screens, outbred strains have more genetic variability and exhibit a broader range of responsivity (with a greater likelihood of detecting an effect), and may be more appropriate.
### 5.0 References
Adham, I.M., Steding, G., Thamm, T., Bullesbach, E.E., Schwabe, C., Paprotta, I., and Engel, W. (2002). The overexpression of the insl3 in female mice causes descent of the ovaries. *Mol. Endocrinol.* **16**, 244-262.
Anderson, D., Hughes, J.A., Edwards, A.J., and Brinkworth, M.H. (1998). A comparison of male-mediated effects in rats and mice exposed to 1,3-butadiene. *Mutation Research* **397**, 77-84.
Ando-Lu, J., Sasahara, K., Nishiyama, K., Takano, S., Takahashi, M., Yoshida, M., and Maekawa, A. (1998). Strain-differences in proliferative activity of uterine endometrial cells in Donryu and Fischer 344 rats. *Exp. Toxic. Pathol.* **50**(3), 185-190.
Apostoli, P., Kiss, P., Porru, S., Bonde, J.P., Vanhoorne, M., and the ASCLEPIOS study group. (1998). Review: Male reproductive toxicity of lead in animals and humans. *Occup. Environ. Med.* **55**, 364-374.
Ashby, J., Odum, J. and Foster, J.R. (1997). Activity of raloxifene in immature and ovariectomized rat uterotrophic assays. *Fundamental and Applied Toxicology* **25**(3), 226-31.
Bailey, J. A. and Nephew, K.P. (2002). Strain differences in tamoxifen sensitivity of Sprague-Dawley and Fischer 344 rats. *Anticancer Drugs* **13**(9): 939-47.
Barkley, M. S. and Bradford, G.E. (1981). Estrous cycle dynamics in different strains of mice. *Proc. Soc. Exp. Biol. Med.* **167**, 70-77.
Bartolomucci, A., Palanza, P., Gaspani, L., Limioli, E., Panerai, A.E., Cerzolini, G., Poli, M.D., and Parmigiani, S. (2001). Social status in mice: behavioral, endocrine, and immune changes are context dependent. *Physiol. Behav.* **73**(3), 401-410.
Becker, G., and Flaherty, T.B. (1968). Group size as a determinant of dominance-hierarchy stability in the rat. *J. Comp. Physiol. Psychol.* **66**(2), 473-476.
Biegel, L.B., J.C. Cook, M.E. Hurtt, and J.C. O'Connor (1998). Effects of 17 beta-estradiol on serum hormone concentrations and estrous cycle in female Crl:CD BR rats: effects on parental and first generation rats. *Toxicol. Sci.* **44**, 143-154.
Bielmeier, S.R., Best, D.S., Guidici, D.C., and Narotsky, M.G. (2001). Pregnancy loss in the rat caused by bromodichloromethane. *Tox. Sci.* **59**(2), 309-315.
Bindon, B. M. (1984). Reproductive biology of the Booroola Merino sheep. *Aust J Biol Sci* **37(3)**: 163-89.
Blanchard, D.C., Sakai, R.R., McEwen, B., and Weiss, S.M. (1993). Subordination stress: behavioral, brain and neuroendocrine correlates. *Behav. Brain Res.* **58(1-2)**, 113-121.
Bradford, G. E. (1968). Selection for litter size in mice in the presence and absence of gonadotropin treatment. *Genetics* **58**: 283-295.
Bradford, G. E. (1969). Genetic control of ovulation rate and embryo survival in mice. I. Response to selection. *Genetics, Princeton* **61**: 905-921.
Bradford, G. E. (1979). Genetic variation in prenatal survival and litter size. *J Anim Sci* **49(Suppl 2)**: 66-74.
Bradford, G. E., Barkely, M.S. et al. (1979). Physiological effects of selection for aspects for efficiency of reproduction. Symposium on Selection, E.A.A.P., Harrowgate, England, Commonwealth Bureau.
Bradley, D.J., Young, W.S., and Weinberger, I.C. (1989). Differential expression of alpha and beta thyroid hormone receptor genes in rat brain and pituitary. *Proc Natl Acad Sci USA* **86**, 7250-7254.
Bresnahan, J.F., Kitchell, B.B., and Wildman, M.F. (1983). Facial hair barbering in rats. *Lab. Anim. Sci.* **33(3)**, 290-291.
Brinkworth, M.H., Anderson, D., and McLean, A.E.M. (1992). Effects of dietary imbalances on spermatogenesis in CD-1 mice and CD rats. *Food Chem. Toxic.* **30(1)**, 29-35.
Brown, S.L., Brett, S.M., Gough, M., Rodericks, J.V., Tardiff, R.G., and Turnbull, D. (1988). Review of interspecies risk comparisons. *Regulatory Toxicology and Pharmacology* **8**, 191-206.
Canzian, F. (1997). Phylogenetics of the laboratory rat Rattus norvegicus. *Genome Res.* **7(3)**: 262-7.
Carney, E.W., Schortichini, B.S., and Crissman, J.W. (1998) Feed restriction during in utero and neonatal life: Effects on reproductive and developmental endpoints in the CD rat. *The Toxicologist* **42**(Suppl. 1), 102-103.
Chapin, R.E., Gulati, D. K., Barnes, L. H., and Teague J. L. 1993. The effects of feed restriction on reproductive function in SD rats. *Fundamental and Applied Toxicology* **20**, 23-29.
Christian, M.S., Hoberman, A.M., Bachmann, S., and Hellwig, J. (1998). Variability in the uterotrophic response assay (an in vivo estrogenic response assay) in untreated control and positive control (DES-DP, 2.5 µg/kg, BID) Wistar and Sprague-Dawley rats. *Drug and Chemical Toxicology* **21**(Suppl. 1), 51-100.
Chun, T. Y., D. Wendell, et al. (1998). Estrogen-induced rat pituitary tumor is associated with loss of retinoblastoma susceptibility gene product. *Mol. Cell. Endocrinol.* **146**(1-2), 87-92
Cooper, R.L., Stoker, T.E., Tyrey, L., Goldman, J.M., and McElroy, W.K. (2000). Atrazine disrupts the hypothalamic control of pituitary-ovarian function. *Toxicol. Sci.* **53**, 297-307.
Cowley, A. W., Jr., Roman, R.J., et al. (2001). Brown Norway chromosome 13 confers protection from high salt to consomic Dahl S rat. *Hypertension* **37**(2 Part 2), 456-61.
Cummings, A.M., Rhodes, B.E. and Cooper, R.L. (2000). Effect of atrazine on implantation and early pregnancy in 4 strains of rats. *Toxicol. Sci.* **58**, 135-143.
D’Arbe, M., Einstein, R., and Lavidis, N.A. (2002). Stressful animal housing conditions and their potential effect on sympathetic neurotransmission in mice. *Am. J. Physiol. Regul. Integr. Comp. Physiol.* **282**(5), R1422-1428.
De Leon, D. D. and Barkley, M.S. (1987). Male and female genotype mediate pheromonal regulation of the mouse estrous cycle. *Biol. Reprod.* **37**(5), 1066-74.
Dhaher, Y.Y., Greenstein, B., de Fougerolles, N.E., Khamashta, M., and Hugh, G.R. (2000). Strain differences in binding properties of estrogen receptors in immature and adult BALB/c and MRL/MP-lpr/lpr mice, a model of systemic lupus erythematosus. *Int. J. Immunopharmacol.* **22**(3), 247-54.
Diel, P., Laudenbach, U., Smolnikar, K., Schulz, T., and Michna, H. (2001) Bisphenol A: morphological and molecular uterine and mammary gland reactions in different strains of the rat (Wistar, Sprague-Dawley, Da/Han). *Toxicologist* **60**(1), 296.
Durrant, B.S., Eisen, E.J., et al. (1980). Ovulation rate, embryo survival and ovarian sensitivity to gonadotropins in mice selected for litter size and body weight. *J. Reprod. Fert.* **59**, 329-339.
Dussault, J.H., and Ruel, J. (1987). Thyroid hormones and brain development. *Annu Rev Physiol.* **49**(4), 321-334.
EDSTAC (Endocrine Disruptor Screening and Testing Advisory Committee) (1998). Final Report: Volume I and II.
Eisen, E. J., Legates, J.E. et al. (1970). Selection for 12-day litter weight in mice. *Genetics* **64**(3): 511-32.
Eisen, E. J. (1972). Long-term selection response for 12-day litter weight in mice. *Genetics* **72**(1): 129-42.
Eisen, E. J. (1975). Population size and selection intensity effects on long-term selection response in mice. *Genetics* **79**(2): 305-23.
Eisen, E. J. and Durrant, B.S. (1980). Genetic and maternal environmental factors influencing litter size and reproductive efficiency in mice. *J. Anim. Sci.* **50**(3): 428-41.
Eisen, E. J. and Johnson, B.H. (1981). Correlated responses in male reproductive traits in mice selected for litter size and body weight. *Genetics* **99**(3-4): 513-24.
Eklund, J. and Bradford, G.E. (1977). Genetic analysis of a strain of mice plateaued for litter size. *Genetics* **85**(3): 529-42.
Eklund, J. and Bradford, G.E. (1977). Longevity and lifetime body weight in mice selected for rapid growth. *Nature* **265**(5589): 48-9.
Eldridge, A.C., and Kwolek, W.F. (1983). Soybean isoflavones: effect of environment and variety on composition. *J. Agric. Food Chem.* **31**, 394-396.
Eldridge, A.C., Tennant, M.K., Wetzel, L.T., Breckenridge, C.B., and Stevens, J.T. (1994). Factors affecting mammary tumor incidence in chlortriazine-treated female rats: hormonal properties, dosage, and animal strain. *Environmental Health Perspectives* **102**(Suppl. 11), 29-36.
Emery, D.E. and Larsson, K. (1979). Rat strain differences in copulatory behavior after para-chlorophenylalanine and hormone treatment. *J. Comp. Physiol. Psychol.* **93**(6), 1067-1084.
Escobar, G., Obregon, M.J., and Rey, F. (1990). Contribution of maternal thyroxine to fetal thyroxine pools in normal rats near term. *Endocrinology* **126**(4), 2765-2767.
Escobar, G., Obregon, M.J., and Rey, F. (1987). Fetal and maternal thyroid hormones. *Hormone Research.* **26**, 12-27.
Escobar, G., Obregon, M.J., and Rey, F. (1988). Transfer of thyroid hormones from the mother to the fetus. In Delang, F., Fisher, D.A., and Glinoer, D. (Eds.) *Research in Congenital Hypothyroidism.* Plenum Press, New York, NY, pp. 15-28.
Fail, P.A., Anderson, S.A., and Friedman, M.A. (1999). Response of the Pituitary and Thyroid to Tropic Hormones in Sprague-Dawley versus Fischer 344 Male Rats. *Toxicol. Sci.* **52**, 107-121.
Falconer, D. S. (1971). Improvement of litter size in a strain of mice at a selection limit. *Genet. Res.* **17**(3): 215-35.
Falconer, D. S. (1989). Introduction to Quantitative Genetics. Essex, England, Longman Scientific & Technical.
Festing, M. F. (1979). Properties of inbred strains and outbred stocks, with special reference to toxicity testing. *J. Toxicol. Environ. Health* **5**(1): 53-68.
Festing, M. F. W. (1986). The case for isogenic strains in toxicological screening. *Arch. Toxicol. Suppl.* **9**, 127-137.
Festing, M. F. (1987). Genetic factors in toxicology: implications for toxicological screening. *Crit. Rev. Toxicol.* **18**(1): 1-26.
Festing, M. F. W. (1991). Genetic factors in neurotoxicology and neuropharmacology: A critical evaluation of the use of genetics as a research tool. *Experientia* **47**, 990-998.
Festing, M. F. W. (1992). The scope for improving the design of laboratory animal experiments. *Laboratory Animals* **26**, 256-267.
Festing, M. F. (1993). Genetic variation in outbred rats and mice and its implications for toxicological screening. *J. Exp. Anim. Sci.* **35**(5-6): 210-20.
Festing, M. F. (1995). Use of a multistrain assay could improve the NTP carcinogenesis bioassay. *Environ. Health Perspect.* **103**(1): 44-52.
Festing, M. F. and Altman, D.G. (2002). Guidelines for the design and statistical analysis of experiments using laboratory animals. *Ilar J.* **43**(4): 244-58.
Festing, M. F., Diamanti, P., and Turton, J.A. (2001). Strain differences in haematological response to chloramphenicol succinate in mice: implications for toxicological research. *Food. Chem. Toxicol.* **39**(4): 375-83.
Flaws, J.A., Sommer, R.J., Silbergeld, E.K., Peterson, R.E., Hirshfield, A.N. 1997. In utero and lactational exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) induces genital dysmorphogenesis in the female rat. *Toxicol. Appl. Pharmacol.* **147**(2), 351-362.
Gallavan, R.H., J.F. Holson, J.F. Knapp, V.L. Reynolds, and D.G. Stump (1998). Anogenital distance: Potential for confounding effects of progeny body weights on interpreting toxicologic significance. *Teratology* **57**(4/5), 245 (Abstract 87).
Goldey, E.S., Kehn, L.S., Rehnberg, G.L., and Crofton, K.M. 1995. Effects of developmental hypothyroidism on auditory and motor function in the rat. *Toxicol. Appl. Pharmacol.* **135**(1), 67-76.
Goldman, J.M., S.C. Laws, S.K. Balchak, R.L. Cooper, and R.J. Kavlock (2000). Endocrine-disrupting chemicals: prepubertal exposures and effects on sexual maturation and thyroid activity in the female rat. A focus on the EDSTAC recommendations. *Crit. Rev. Toxicol.* **30**(2), 135-196.
Gray, J. E. (1977). Chronic progressive nephrosis in the albino rat. *CRC Crit. Rev. Toxicol.* **5**(2): 115-44.
Gray, L.E., and Ostby, J.S. (1995). In utero 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) alters reproductive morphology and function in female rat offspring. *Toxicol. Appl. Pharmacol.* **133**, 285-294.
Gray, L.E. Jr., J. Ostby, C. Wolf, C. Lambright, and W. Kelce (1998). Annual Review. The value of mechanistic studies in laboratory animals for the production of reproductive effects in wildlife: endocrine effects on mammalian sexual differentiation. *Environm. Toxicol. Chem.* **17**(1), 109-118.
Gray, L.E. Jr., and J. Ostby (1998). Effects of pesticides and toxic substances on behavioral and morphological reproductive development: endocrine versus nonendocrine mechanisms. *Toxicol. and Indust. Health* **14**(1/2), 159-184.
Greco, A.M., Gambardella, P., Sticchi, R., D'Aponte, D., Di Renzo, G., and DeFranciscis, P. (1989). Effects of individual housing on circadian rhythms of adult rats. *Physiol. Behav.* **45**(2), 363-366.
Gregg, D. W., Galkin, M., et al. (1996). Effect of estrogen on the expression of galanin mRNA in pituitary tumor-sensitive and tumor-resistant rat strains. *Steroids* **61**(8): 468-72.
Griffith, J. S., S. M. Jensen, et al. (1997). Evidence for the genetic control of estradiol-regulated responses. Implications for variation in normal and pathological hormone-dependent phenotypes. *Am. J. Pathol.* **150**(6): 2223-30.
Grimm, M.S., Emerman, J.T., and Weinberg, J. (1996). Effects of social housing condition as behavior on growth of the Shionogi mouse mammary carcinoma. *Physiol. Behav.* **59**(4-5), 633-642.
Haavisto, T., Nurmela, K., Pohjanvirta, R., Huuskonen, H., El-Gehani, F., and Paranko, J. (2001). Prenatal testosterone and luteinizing hormone levels in male rats exposed during pregnancy to 2,3,7,8-tetrachlorodibenzo-p-dioxin and diethylstilbestrol. *Molecular and Cellular Endocrinology* **178**, 169-179.
Haemisch, A., Vass, T., and Gartner, K. (1994). Effects of environmental enrichment on aggressive behavior, dominance hierarchies, and endocrine states in male DBA/2J mice. *Physiol. Behav.* **56**(5), 1041-1048.
Haseman, J.K., Bourbina, J., and Eustis, S.L. (1994). Effect of individual housing and other experimental design factors on tumor incidence in B6C3F1 mice. *Fundam. Appl. Toxicol.* **23**(1), 44-52.
Hayes, T., Haston, K., Tsui, M., Hoang, A., Haeffele, C., and Vonk, A. (2003). Atrazine-induced hermaphroditism at 0.1 ppb in American leopard frogs (*Rana pipiens*): laboratory and field evidence. *Environ Health Perspect* **111**(4), 568-75. http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=12676617
Hayes, T.B., Collins, A., Lee, M., Mendoza, M., Noriega, N., Stuart, A.A., and Vonk, A. (2002). Hermaphroditic, demasculinized frogs after exposure to the herbicide atrazine at low ecologically relevant doses. *Proc Natl Acad Sci U S A* **99**(8), 5476-80. http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=11960004
Hellwig, J., van Ravenzwaay, B., Mayer, M., and Gembardt, C. (2000). Pre- and postnatal oral toxicity of vinclozolin in Wistar and Long-Evans rats. *Regul. Toxicol. Pharmacol.* **32**, 42-50.
Holsapple, M., Reynolds, B. Wiescinski, C., Anderson, P., Carney, E. 1998. Feed restriction during in utero and neonatal life: effects on immune parameters in the rat. *Toxicologist* **42**(1-S), 102.
Hossaini, A., Larsen, J.-J., and Larsen, J.C. (2000). Lack of oestrogenic effects of food preservatives (parabens) in uterotrophic assays. *Food Chem. Toxicol.* **38**, 319-323.
Hunt, P.A., Koehler, K.E., Susiarjo, J., Hodges, C.A., Ilagan, A., Voight, R.C., Thomas, S., Thomas, B.F., and Hassold, T.J. (2003). Bisphenol A exposure causes meiotic aneuploidy in the female mouse. *Current Biology* **13**, 546-553.
Inano, H., Suzuki, K., et al. (1996). Relationship between induction of mammary tumors and change of testicular functions in male rats following gamma-ray irradiation and/or diethylstilbestrol. *Carcinogenesis* **17**(2), 355-60.
ILAR Journal Online (Fall 1992), Volume 34(4); http://dels.nas.edu/ilar/jour_online/34_4/definitionandnomenrat.asp
Johnson, R.K., Eckardt, G.R., Rathje, T.A., Drudik, D.K. (1994). Ten generations of selection for predicted weight of testes in swine: direct response and correlated response in body weight, backfat, age at puberty, and ovulation rate. *J. Anim. Sci.* **72**(8), 1978-1988.
Kacew, S., Ruben, Z., and McConnell, R. F. (1995). Strain as a determinant factor in the differential responsiveness of rats to chemicals. *Toxicologic Pathology* **23**(6), 701-714.
Kacew, S. and Festing, M. F. W. (1996). Role of rat strain in the differential sensitivity to pharmaceutical agents and naturally occurring substances. *Journal of Toxicology and Environmental Health* **47**, 1-30.
Kanno, J., Onyon, L., Haseman, J., Fenner-Crisp, P., Ashby, J., and Owens, W. (2001). The OECD program to validate the rat uterotrophic bioassay to screen compounds for *in vivo* estrogenic responses: phase 1. *Environ. Health Perspect.* **109**(8), 785-794.
Karolewicz, B., and Paul, I.A. (2001). Group housing of mice increases immobility and antidepressant sensitivity in the forced swim and tail suspension tests. *Europ. J. Pharmacol.* **415**(2-3), 197-201.
Kennedy, G.C., and Mitra, J. (1963). Body weight and food intake as initiating factors for puberty in the rat. *Physiology* **166**, 408-418.
Kirkpatrick, B. W., Mengelt, A., et al. (1998). Identification of quantitative trait loci for prolificacy and growth in mice. *Mammalian Genome* **9**: 97-102
FIELD MEASUREMENTS OF SAND MOTION
IN THE SURF ZONE
by
Douglas L. Inman; James A. Zampol; Thomas E. White
Daniel M. Hanes; B. Walton Waldorf; and Kim A. Kastens
Shore Processes Laboratory A-009
Scripps Institution of Oceanography
La Jolla, California 92093
ABSTRACT
Forcing functions and sediment response were measured during two comprehensive surf zone experiments. The experiments included simultaneous measurements of waves and currents, and the movement of sediment as bed and suspended load. The longshore transport of suspended load was found to be about 10 to 20% of the tracer-measured load. Results from tracer measurements of the longshore transport of bed load indicate that previous measurements may have misestimated the effective "tracer layer thickness," and a more rigorous method is proposed.
1. INTRODUCTION
There is extensive competition for the use of nearshore areas for harbors, recreational beaches and swimming, as recipients of thermal and waste discharges, and for coastal construction. Therefore, understanding of sediment transport processes has become essential to coastal technology.
As a consequence, the Nearshore Sediment Transport Study sponsored by the U.S. Office of Sea Grant, NOAA, and funded in part by the Office of Naval Research, has been engaged in extensive field experiments since 1977. The overall objective of the study is the development of improved engineering prediction techniques for sediment transport by waves and currents in the nearshore region. The studies have been conducted jointly by research teams from five universities. The first field experiments were carried out at Torrey Pines Beach near San Diego, California, in March 1977 and November-December 1978 (Gable, ed., 1979; Seymour and Gable, 1980). Additional experiments have recently been conducted near Santa Barbara, California, in February 1980.
The purpose of this paper is to present the results from the two most comprehensive, high energy experiments conducted on 11 March 1977 and 6 December 1978. These experiments included simultaneous measurement of forcing functions (i.e., waves, winds and currents) from sensor arrays in and outside the surf zone, as well as sediment response in terms of bed and suspended load movement.
2. TORREY PINES BEACH SITE
A portion of Torrey Pines Beach in San Diego County, California, was selected as a site for this study. The site is located 5.5 km north of Point La Jolla, and 4 km north of the ocean pier of the Scripps Institution of Oceanography. A 3.0 km segment of this beach that has gently sloping offshore bathymetry and is terminated shoreward by 100 meter high sea cliffs was used for the beach and nearshore measurements (Figure 2-1). This beach satisfied the basic requirements for a straight beach with uncomplicated offshore bathymetry that is exposed to waves from all offshore quadrants. Chamberlain (1960) and State of California (1969) have estimated the net littoral transport in the vicinity of Torrey Pines Beach at about 200,000 m$^3$ per year, to the south.
The study beach undergoes typical seasonal changes in configuration due to changes in wave climate. During summer wave conditions, the beach has a 30 to 60 meter wide backshore, a relatively steep upper foreshore, and a pronounced berm. Winter storm waves overtop the summer berm and erode the backshore, thus reducing the width of the exposed beach. Winter beach profile configuration is typified by a gently sloping beach foreshore that in places extends shoreward to the toe of the sea cliff. The beach profiles for the two experiments are shown in Figure 2-2. Generally the beach face slopes about 1:25, the surf zone 1:75, the section seaward to the 3 meter depth contour 1:35, while that portion from 3 to 10 meters depth slopes 1:50.
The sediments on the beach and shelf are predominantly fine quartz sand with minor amounts of feldspars and heavy minerals. Light minerals, that is those with a specific gravity of less than about 2.85, usually comprise 90% of the sample, while heavy minerals total about 10%. Of the light minerals approximately 88% is quartz, 10% feldspars and 2% shell fragments and miscellaneous material. Of the total heavy minerals hornblende is most plentiful, comprising about 60% (Inman, 1953). The size distributions for typical sediments obtained by grab samplers in the surf zone are listed in Table 2-1.
Table 2-1 Size distribution measures for sieved surf zone samples. Measures are from Inman (1952).
| | 11 March 1977 | | | 6 December 1978 | | |
|----------|------|------|------|------|------|------|
| Position | Md (microns) | $\sigma_\phi$ | $\alpha_\phi$ | Md (microns) | $\sigma_\phi$ | $\alpha_\phi$ |
| Inner | 184 | .37 | 0 | 152 | .43 | -.26 |
| Middle | 200 | .39 | +.18 | 157 | .51 | -.14 |
| Outer | 215 | .47 | +.17 | 152 | .63 | -.23 |
Fig. 2-1 Site and sampling grid for 11 March 77.
Fig. 2-2 Beach profiles and instrument locations for 11 March 77 and 6 December 78.
3. EXPERIMENT CONTROL
The dates and times of our tracer experiments were carefully selected to meet the following criteria: 1) fully operational status for the offshore wave array and other necessary sensors; 2) minimal change in tide elevation during a 3 hour period during daylight; 3) occurrence of dominant waves of sufficient height and breaker angle to generate a strong, unidirectional longshore current; and 4) absence of stationary rip currents across the sampling grid in the study area.
Figure 2-1 shows a stationary rip current pattern with separation between rip currents of 340 meters. In accordance with criterion 4, the injection line and sampling grid were placed within a 250 meter length of beach between rip currents. Reversal of longshore current during experiments resulted in the rejection of many of the experiments conducted on other days.
During the morning of the given day, Rhodamine-B dye was injected at several positions in the surf zone and observed from the cliff above the site. The dye dispersal patterns, such as that illustrated in Figure 4-4, indicated the approximate mean longshore current speed as well as the occurrence of complicating rip currents. If all conditions were satisfied, the injection location and sampling grid were determined appropriate to the current speed and direction, and flags marking the grid ranges were installed on the beach.
4. DRIVING FORCES
It is well known that the principal driving forces for longshore transport of sand in the nearshore waters are waves, currents, winds, and tides, in that order. Since strong winds did not occur during the two experiments described here, and because the experiments were scheduled to coincide with near still-stands in the tidal curve, neither wind nor tidal forces are considered to be important in these experiments. However, the stage of the tide may have some disequilibrium effect on the beach profile, as the water level during experiments was generally above mean sea level in order to cover the surf zone instruments. The tide in these waters is mixed, with a pronounced diurnal inequality. At the Scripps Institution of Oceanography pier the mean and spring tidal ranges are 3.6 and 5.2 feet (1.10 and 1.58 meters), and the mean tidal level is 2.7 feet (0.82 meters) above the datum of mean lower low water. During the experiments of 11 March 1977 and 6 December 1978 the water levels were about 20 cm and 60 cm respectively above mean tide level.
WAVES
The direction and flux of wave energy and momentum were measured from an array of wave pressure sensors placed parallel to shore in a water depth of 9.7 m below mean sea level, hereafter referred to as the 10 meter array. All data from it have been corrected to the actual depth of water at the times of the observations. Also, arrays of electromagnetic current meters, pressure sensors, and wave staffs were placed in and near the surf zone. Details of these near-surf arrays
are given in Guza and Thornton (1978, 1980) and Gable (ed., 1979).
The spacings of the offshore line array were in a 1-2-4-5 and a 2-2-2-0 configuration with a unit lag of 33 meters and total array lengths of 396 and 363 meters respectively. The first was used on 11 March 1977 while the latter was used on 6 December 1978. The length of the array was designed for resolution of long period waves with directional peaks separated by $5^\circ$ to $10^\circ$ in ten meters of water. Data from the offshore array were telemetered back to the Shore Processes Laboratory of the Scripps Institution of Oceanography using the Shelf and Shore (SAS) system described by Lowe et al. (1973).
The wave spectrum consists of the squares of the absolute values of the complex Fourier coefficients, which serve as estimates (having dimensions of length squared) of the energy density in each elemental frequency band. The sum of the energy densities under the spectral peak of the incident waves, i.e., the variance, is the mean-square elevation $\langle \eta^2 \rangle$ of the water surface described by the time series. The mean wave energy per unit area of the water surface is given by the product of $\langle \eta^2 \rangle$ and the weight per unit volume $\rho g$ of the water,
$$E = \rho g \langle \eta^2 \rangle = \frac{1}{8} \rho g H_{\text{rms}}^2 \quad (4-1)$$
where $H_{\text{rms}}$ is the root-mean-square wave height.
The energy spectrum was computed for time runs of about 17 minutes during which pressure was sampled two times per second ($\Delta t = 0.5$ sec), for a total number of data points $n = 2048$, which when transformed give 1024 elemental frequency bands. Energy and cross-spectra were computed from the summation of 8 ($q = 8$) elemental frequency bands, giving a total of 128 merged frequency bands and 16 degrees of freedom. The most important frequency bands were plotted, and examples are shown in Figures 4-1 and 4-2.
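For readers who wish to reproduce this step, the sketch below shows one way the merged spectrum and relation (4-1) could be computed in Python. It is a minimal illustration, not the original analysis code: it assumes a detrended surface-elevation series sampled at 2 Hz, and the water density and negative-frequency folding conventions are our assumptions.

```python
import numpy as np

def merged_energy_spectrum(eta, dt=0.5, q=8):
    """Energy-density spectrum of a detrended elevation series eta, with q
    adjacent elemental frequency bands merged (2048 points at 0.5 s give
    1024 elemental bands, hence 128 merged bands and 16 degrees of freedom)."""
    n = len(eta)
    coeffs = np.fft.rfft(eta) / n            # complex Fourier coefficients
    dens = 2.0 * np.abs(coeffs[1:]) ** 2     # fold in negative frequencies
    f = np.fft.rfftfreq(n, dt)[1:]
    m = (len(dens) // q) * q                 # merge q elemental bands
    return (f[:m].reshape(-1, q).mean(axis=1),
            dens[:m].reshape(-1, q).sum(axis=1))

def wave_energy(eta, rho=1025.0, g=9.81):
    """Mean wave energy per unit area, E = rho*g*<eta^2> (relation 4-1);
    equivalently (1/8)*rho*g*H_rms**2 with H_rms = sqrt(8*<eta^2>)."""
    return rho * g * np.mean((eta - eta.mean()) ** 2)
```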
Directional spectra were computed for every frequency band using the maximum likelihood estimator described by Capon (1969) and Davis and Regier (1977). Important frequency bands with high energy peaks were graphed, and examples are shown in the lower part of Figures 4-1 and 4-2.
There is in general a flux of energy in the direction of wave propagation, which in the absence of refraction and dissipation is a constant per unit width of wave crest
$$EC_n = [EC_n]_\infty = \text{const} \quad (4-2)$$
where $E$ is defined in relation (4-1), $C$ is the wave phase velocity, and $C_n$ is the wave group velocity.
Waves also transport momentum, which is a tensor quantity consisting of four components.
Fig. 4-1 Examples of wave energy and directional spectra for 11 March 1977.
Fig. 4-2 Examples of wave energy and directional spectra for 6 December 1978.
The onshore flux of y-directed momentum $S_{yx}$ has been shown to drive the longshore currents inside the surf zone (Bowen, 1969; Longuet-Higgins, 1972). Outside the surf zone $S_{yx}$ is conserved, and is given by the relation
$$S_{yx} = En \cos \alpha \sin \alpha = (En \cos \alpha \sin \alpha)_o = \text{const} \quad (4-3)$$
$S_{yx}$ was obtained for each 17-minute run by integrating the energy $E$ in the directional spectra, weighted by $\cos \alpha \sin \alpha$, in accordance with relation (4-3).
The wave energy at the breakpoint was obtained by refracting each frequency band to an estimated breaker depth using Snell's law and relation (4-2). The actual breaker depth was then computed by combining equation (4-1) relating energy and wave height with the criterion for a breaking solitary wave, $\gamma_b = (H/h)_b = 0.78$ (Munk, 1949), and then recomputing the energy at the breakpoint, where h is the depth of water.
Wave heights at the breakpoint depth were then computed using equation (4-1); the radiation stress, $S_{yx}$, from equation (4-3). The phase velocities at the breaker depth were computed from the Boussinesq dispersion relation (Whitham, 1974, p. 462):
$$c_b^2 = \frac{gh_b}{1 + \frac{1}{3}(kh)_b^2} \quad (4-4)$$
It was then possible to estimate the power expended by the waves, $(CS_{yx})$, which has been related to the sand transport rate (Figure 6-1). Measured and computed wave parameters are summarized in Table 4-1.
Table 4-1. Forcing Functions
| | 11 March 1977 | | 6 December 1978 | |
|---|---|---|---|---|
| | Array, h = 9.3 m | Breakpoint, $h_b$ = 1.43 m | Array, h = 6.0 m | Breakpoint, $h_b$ = 1.65 m |
| $\alpha$ (degrees) | 5.3-7.0 | 2.9-4.1 | 3.5-8.0 | 2.0-5.8 |
| C (m/sec) | 9.1-9.3 | 3.7-4.6 | 7.40-7.54 | 3.2-4.0 |
| Cn (m/sec) | 8.5-8.7 | 3.7-4.6 | 6.90-7.03 | 3.2-4.0 |
| H (m) | 0.67-0.83 | 0.99-1.19 | 0.91-0.97 | 1.26-2.70 |
| E (joules/m²), range | 555-853 | 1198-1725 | 1008-1186 | 2089-2768 |
| E (joules/m²), mean | 742 | 1529 | 1090 | 2483 |
| $S_{yx}$ (joules/m²), range | 67.0-95.4 | 78.7-119.3 | 81.9-129.6 | 90.7-159.0 |
| $S_{yx}$ (joules/m²), mean | 72.1 | 90.5 | 95.3 | 147.0 |
| $CS_{yx}$ (watts/m), range | 624-868 | 291-537 | 617-976 | 290-636 |
| $CS_{yx}$ (watts/m), mean | 663 | 353 | 715 | 529 |
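The breakpoint calculation described above, which combines Snell's law with relation (4-2), the solitary-wave breaking criterion, and relation (4-4), can be sketched as follows. This is a minimal single-band illustration under assumptions of our own: linear-theory shoaling and refraction coefficients, a fixed-point iteration for the linear dispersion relation, and simple convergence loops in place of whatever root-finding the original analysis used.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(T, h):
    """Linear dispersion relation omega^2 = g*k*tanh(k*h), by fixed-point iteration."""
    w = 2.0 * np.pi / T
    k = w * w / G                          # deep-water first guess
    for _ in range(50):
        k = w * w / (G * np.tanh(k * h))
    return k

def shoal_refract(H0, T, alpha0, h0, h):
    """Shoal and refract one frequency band from depth h0 to depth h,
    using Snell's law and conservation of E*Cn (relation 4-2)."""
    def c_and_cn(depth):
        k = wavenumber(T, depth)
        c = np.sqrt(G * np.tanh(k * depth) / k)               # phase velocity
        n = 0.5 * (1.0 + 2.0 * k * depth / np.sinh(2.0 * k * depth))
        return c, n * c                                       # group velocity Cn
    c0, cn0 = c_and_cn(h0)
    c1, cn1 = c_and_cn(h)
    alpha = np.arcsin(np.sin(alpha0) * c1 / c0)               # Snell's law
    Ks = np.sqrt(cn0 / cn1)                                   # shoaling coefficient
    Kr = np.sqrt(np.cos(alpha0) / np.cos(alpha))              # refraction coefficient
    return H0 * Ks * Kr, alpha

def breaker_depth(H0, T, alpha0, h0, gamma_b=0.78):
    """Iterate H_b = gamma_b * h_b (Munk, 1949) to locate the breakpoint."""
    h = H0 / gamma_b                                          # first guess
    for _ in range(30):
        Hb, _ = shoal_refract(H0, T, alpha0, h0, h)
        h = Hb / gamma_b
    return h

def c_breakpoint(h_b, T):
    """Phase velocity at the breakpoint from the Boussinesq relation (4-4),
    with k taken from the linear dispersion relation at depth h_b."""
    k = wavenumber(T, h_b)
    return np.sqrt(G * h_b / (1.0 + (k * h_b) ** 2 / 3.0))
```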
CURRENTS
The general circulation of water in the surf zone, although complex, appears to follow patterns shown schematically in Figure 4-3. Early measurements showed that outside the surf zone, and between rip currents, water tended to move onshore from surface to bottom (Shepard and Inman, 1950). However, inside the surf zone, surface, mid-depth, and bottom waters have different net directions: the surface water moves onshore and longshore and has the greatest speed; the mid-depth water moves on-offshore with a net longshore motion; while the bottom water has a very slow net offshore motion, even though the highest velocities during the passage of a bore are decidedly onshore. There is also an upward circulation associated with the passage of the breaking wave and its bore (Inman and Quinn, 1952; Inman and Nasu, 1956; Inman et al, 1971).
Surf zone currents in these studies were measured by three separate methods: 1) dye injection into the water; 2) weighted drift bottles; and 3) electromagnetic current meters mounted at mid-depth in the surf zone. The first method gives a pattern of water motion and was particularly effective in locating rip currents (Figure 4-4). Drift bottles give Lagrangian trajectories which can be plotted as current velocities representative of the upper layers of the surf zone. The current meters give continuous measures of the x-y current components, and are most representative of the mid-depth motion of the surf zone (Figures 4-5 and 4-6). They are more fully described by Guza and Thornton (1978).
5. SEDIMENT RESPONSE
The sediment response to the action of waves and currents is complex and not fully understood. For our purposes we will group these complex motions into bed load and suspended load components.
Fig. 4-4 Dye dispersion in the surf zone: dye pattern from the 13:22 injection, 6 December 1978.
Bed load is arbitrarily defined as sand transported on and within 10 cm of the bed, while suspended load is that transported in the water column from 10 cm above the bed to the water surface. This definition is used because our present suspended load samplers do not function properly when operated closer to the bed than 10 cm.
TRACER TECHNIQUE FOR BED LOAD
The only known means of estimating the "instantaneous" bed load transport in the field is by tagging the sand in some detectable manner and injecting the tracer into the surf zone (e.g., Inman and Chamberlain, 1959). More recently, fluorescent dyed sand grains have been used as tracers by several investigators (Aibulatov, 1961; Ingle, 1966; Komar and Inman, 1970) to study the motion of sand in the nearshore environment. The longshore transport of sand has been estimated by multiplying the rate of advection of sand tracer by the depth in the bed to which the sand tracer mixes (Inman et al., 1968; Komar and Inman, 1970).
If sand motion consisted of uniform advection, then a line source of tracer sand placed across the surf zone would move at a velocity calculable directly by sampling the bed at the proper time and distance following injection. However, granular diffusion in the bed load transport, turbulent diffusion in suspended load transport, and non-uniform advection tend to disperse as well as advect the original distribution of tracer. In view of these processes, a concentration-weighted mean velocity must be calculated in order to estimate the true advection rate of tracer and its admixed sand aggregate.
Fig. 4-5 Longshore currents from current meter array for 11 March 1977.
Fig. 4-6 Longshore currents from current meter array for 6 December 1978.
The tracer grain distribution was sampled following injection by two independent methods. The first method uses traditional "grab samplers" designed to sample the upper 2 cm of the sand bed, with the objective of surveying downcurrent as rapidly as possible to give a synoptic areal view of the tracer distribution at a given time. This method is referred to as "spacial sampling." The second method uses specially designed coring devices to repeatedly core the bottom along a fixed on-offshore range, downstream from the injection line and at constant intervals of time, and is analogous to sampling a dye plume in a river as it flows under a bridge. This method is called "temporal sampling." Each sample provides an estimate of the rate of advection. If a sample $i$ is taken $y_i$ meters from the injection line at an elapsed time $t_i$ seconds after injection, then the sand velocity is given by $V_i = y_i/t_i$. If $N_i$ is the tracer concentration of the sample, then a concentration weighted mean advection rate for sand in the longshore direction is given by
$$\bar{V}_x = \sum_{i=1}^{n} \left[ N_i \frac{y_i}{t_i} \right] / \sum_{i=1}^{n} N_i$$ \hspace{1cm} (5-1)
where $n$ is the total number of samples. This definition of the advection rate $V$ is also time weighted, thus freeing the transport calculation from bias introduced because all of the spacial samples cannot be taken at the same time. Spacial and temporal sampling concepts are shown schematically in Figure 5-1.
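Relation (5-1) reduces to a few lines of arithmetic. The sketch below is a hypothetical illustration; the sample tuples and their values are invented for the example.

```python
def mean_advection_rate(samples):
    """Concentration-weighted mean longshore advection rate (relation 5-1).
    samples: iterable of (N_i, y_i, t_i) -- tracer concentration, longshore
    distance (m) from the injection line, elapsed time (s) after injection."""
    num = sum(N * y / t for N, y, t in samples)
    den = sum(N for N, _, _ in samples)
    return num / den

# hypothetical grab samples: yields about 0.009 m/s
print(mean_advection_rate([(120, 30, 3600), (45, 55, 5100), (10, 80, 6000)]))
```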
Tracer Layer Thickness
The vertical distribution of tracer in the cores was examined to determine the "tracer layer thickness," one of the most uncertain parameters in previous studies (Komar and Inman, 1970). Ideally the measured thickness is one which predicts the actual transport rate when multiplied by the advection rate and surf width. However, the determination of the tracer layer thickness from the core data has been highly subjective. In an effort to remove some of the subjectivity, we have chosen a concentration weighted depth to represent the "tracer layer thickness" $Z_0$, defined as
$$Z_0 = 2\,\frac{\int_{0}^{\ell} z\, N(z)\ dz}{\int_{0}^{\ell} N(z)\ dz} \hspace{1cm} (5-2)$$
Fig. 5-1 Methods of tracer analysis (schematic): temporal sampling (cores taken at fixed stations over a range of times) versus spacial sampling (a spacial grid sampled at a fixed time). The figure also summarizes the relations for conservation of tracer, the characteristic longshore velocity of sand, the volume flux of sand, and the immersed weight transport rate.
where $z$ is the depth in the bed, $N(z)$ is the tracer concentration at depth $z$, and $\ell$ is a depth in the bed greater than any tracer penetrates. It has been assumed here that the surface of the bed ($z = 0$) is fixed, with no erosion or accretion occurring. We realize that this definition is itself somewhat subjective, but it is an effort towards standardizing the determination of the tracer layer thickness. It is to be noted that the actual value of $Z_0$ can be greater than the measured value, and the velocity $V_a$ must grade from a maximum at the surface to zero at the bottom of the layer.
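As a concrete reading of relation (5-2), the sketch below computes $Z_0$ as twice the centroid depth of a core's concentration profile by trapezoidal integration; the example core values are hypothetical.

```python
def tracer_layer_thickness(z, N):
    """Concentration-weighted tracer layer thickness Z_0 (relation 5-2):
    twice the centroid depth of the core's tracer concentration profile.
    z: depths in the bed, increasing downward from the surface (z = 0);
    N: tracer concentration measured at each of those depths."""
    def trapz(y, x):
        return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
                   for i in range(len(x) - 1))
    zN = [zi * Ni for zi, Ni in zip(z, N)]
    return 2.0 * trapz(zN, z) / trapz(N, z)

# hypothetical core (depths in cm), concentration decreasing downward:
# tracer_layer_thickness([0, 1, 2, 4, 6], [900, 500, 200, 40, 0]) -> about 2.1 cm
```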
The longshore series of cores taken along the center grab sample line on 11 March 1977 was analyzed for tracer layer thickness $Z_0$ using several systems, including relation (5-2), and the results are plotted in Figure 5-2. The upper curve, showing the maximum thickness of tracer penetration, more closely approximates the thickness used in previous studies (Komar and Inman, 1970), while the lower curve represents a more conservative cut-off thickness where the tracer concentration is 1 grain/gm or less.
**BED LOAD TRANSPORT**
The product of the tracer layer thickness $Z_0$, the advection rate of sand and tracer $V_a$, and the surf zone width $X_s$, results in an estimate of the longshore bed load sediment transport rate. This procedure is schematically depicted in Figure 5-1.
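A minimal sketch of that product is given below. The immersed-weight conversion follows the convention of Komar and Inman (1970); the sediment and water densities and the pore-space factor N_0 are assumed values, not ones reported here.

```python
def bed_load_transport(V_a, Z_0, X_s, rho_s=2650.0, rho=1025.0,
                       g=9.81, N_0=0.6):
    """Immersed-weight longshore bed load transport rate (nt/sec) from the
    product of advection rate V_a (m/s), tracer layer thickness Z_0 (m),
    and surf zone width X_s (m). N_0 corrects the bulk volume flux for
    pore space (assumed value)."""
    Q = V_a * Z_0 * X_s                  # volume flux of bed material, m^3/s
    return (rho_s - rho) * g * N_0 * Q   # immersed weight transport rate

# e.g., V_a = 0.01 m/s, Z_0 = 0.05 m, X_s = 140 m gives roughly 670 nt/sec
```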
Tracer injections and bed load sampling were made on 11 March 1977 and 6 December 1978. On 11 March 1977 only spacial sampling was attempted; a grid of $10 \times 18 = 180$ grab samples and 21 core samples was recovered. The grid is shown in Figure 2-1 and the contours of tracer concentration in Figure 5-3. On 6 December 1978 two spacial samplings and one temporal sampling were made; the results are shown in Figures 5-4 and 5-5. This procedure permits three independent estimates of transport but, with the same number of personnel, necessarily results in fewer samples per run.
The tracer layer thickness $Z_0$ as a function of surf zone width for both days is plotted in Figure 5-6. It is apparent that $Z_0$ is greatest in the swash zone and least in the central portion of the surf zone, and again increases towards the breakpoint. Separate investigation, not reported here, confirms this bimodal distribution of $Z_0$, i.e., minimum values in the center of the surf zone. This bimodality was used to estimate transport rates near the breakpoint on these high energy days where breakpoint samples were not obtained.
Fig. 5-3 Distribution of grab sample tracer concentrations on 11 March 77.
Fig. 5-4 Distribution of grab sample tracer concentrations on 6 December 78.
Three types of estimates for the transport rate were made in order to demonstrate the effect of the conceptual models in the interpretation of field data. The first estimate used the tracer layer thickness, $Z_0$, as defined in eq. 5-2 and assigned the mid-surf value of $Z_0$ to the unsampled outer portion of the surf zone. The second estimate extrapolated the tracer layer thickness to the unsampled portion of the surf-zone using a bimodal distribution as described above. The final estimate equated $Z_0$ to the maximum thickness of tracer penetration and applied a bimodal distribution for the tracer layer thickness.
The percent recovery of tracer was calculated for both spacial and temporal methods. For the spacial method, each sample yields an estimate for the total amount of tracer in the region from which it was taken. These estimates can be summed over the experimental grid and compared with the known quantity of tracer injected over the same area. For the temporal method, the amount of tracer which advected across the sampling line is estimated, and compared to the amount of tracer injected in that portion of the surf zone covered by the temporal sampling line.
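A sketch of the spacial recovery calculation is given below. It assumes each grid sample represents a bed region of area A_i and tracer layer thickness Z0_i, with N_i expressed as tracer mass per unit volume of bed; the variable names and units are ours, and the temporal analogue (tracer flux past the core line) is omitted.

```python
def spacial_recovery(samples, M_injected):
    """Percent tracer recovery for the spacial method: sum, over the grid,
    of each sample's estimate of the tracer mass in its region, compared
    with the known injected mass. samples: iterable of (N_i, Z0_i, A_i)."""
    M_found = sum(N * Z0 * A for N, Z0, A in samples)
    return 100.0 * M_found / M_injected
```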
Table 5-1. Sediment Response
| | 11 March 77 | 6 December 78 | | |
|---|---|---|---|---|
| | Spacial | Spacial Run #1 | Spacial Run #2 | Temporal |
| Sampling start (min after injection) | 85 | 41 | 92 | 15 |
| Sampling end (min after injection) | 170 | 71 | 109 | 135 |
| Tracer recovery (%) | 65 | 39 | 77 | 48 |
| Immersed wt. longshore transport rate (nt/sec): | | | | |
| 1) Initial estimate and Eq. 5-2 | 92 | 142 | 167 | 285 |
| 2) Bimodal $Z_0$ and Eq. 5-2 | 150 | 179 | 211 | 359 |
| 3) Max. tracer penetration and bimodal distribution | 474 | 240 | 283 | 481 |
| Suspended load, under crest (nt/sec) | 34 | 60 | | |
| Suspended load, wave average (nt/sec) | 23 | 33 | | |
| Tracer amount injected (kg) | 113 | 90.7 | | |
| Surf zone width, range (m) | 125-150 | 150-200 | | |
| Significant $X_b$ (m) | 140 | 180 | | |

The last five rows are daily values: the first entry applies to 11 March 77 and the second to 6 December 78.
Fig. 5-5 Variation of core line tracer concentration with time after injection.
Fig. 5-6 On-offshore variation of tracer layer thickness ($Z_0$).
SUSPENDED LOAD
Suspended sediment samples were collected on 11 March 1977 and 6 December 1978 during the period between tracer sand injection and the start of grab sampling. The 11 March 1977 samples were collected at the offshore positions shown in Figure 5-7. Six sample series were collected below the crest of the bore, five series between bores, and five just before passage of the bore. On 11 March 1977 samples were taken where bore heights ranged from 30 cm to 102 cm. The breaker type on 11 March 1977 ranged from spilling to plunging, with some strongly plunging.
During the 6 December 1978 experiment suspended sediments were collected at four stations from under the passing crest of the bore. Because of high plunging breakers, these samples did not extend beyond the inner surf zone (Figure 5-8). Samples were collected under bores ranging in height from 49 to 72 cm on that day, with a total range of heights during the November-December 1978 experiments of 15 to 131 cm.
The suspended sediment concentration was integrated over depth to give the suspended load in grams over a square meter of bed. The product of this load and the average longshore current was then assumed to give an estimate of the suspended load transport rate for that portion of the surf zone. These estimates, when integrated across the surf zone, give the estimate of the total suspended load transport rate. The 11 March 1977 experiment provides data on the variation of suspension over a wave cycle. On this day the average suspended load over a wave was used to estimate the total suspended load transport rate. The 6 December 1978 rates were corrected using information from the 11 March 1977 experiment. The total suspended load transport rates are listed in Table 5-1.
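The sketch below mirrors that two-stage integration: depth-integrate each station's concentration profile, multiply by the mean longshore current, then integrate the station values across the surf zone. The data layout and units are assumptions for illustration, not the original reduction procedure.

```python
def suspended_transport(stations):
    """Total suspended load transport rate across the surf zone (kg/s).
    stations: list of (x_i, z_i, c_i, v_i), ordered by on-offshore position
    x_i (m), where z_i are sample elevations above the bed (m), c_i the
    concentrations at those elevations (kg/m^3), and v_i the mean longshore
    current at the station (m/s)."""
    def trapz(y, x):
        return sum((y[j] + y[j + 1]) * (x[j + 1] - x[j]) / 2.0
                   for j in range(len(x) - 1))
    xs, flux = [], []
    for x_i, z_i, c_i, v_i in stations:
        load = trapz(c_i, z_i)      # suspended load over a m^2 of bed, kg/m^2
        xs.append(x_i)
        flux.append(load * v_i)     # longshore flux per unit width, kg/(m*s)
    return trapz(flux, xs)          # integrated across the surf zone, kg/s
```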
FIELD AND LABORATORY PROCEDURES
The tracer experiment used natural Torrey Pines Beach sand dyed with either Florescein (green) or Rhodamine (red) dye. Plastic bags of wetted tracer sand were placed on the sand bed along the injection range to approximate a line source of tracer across the surf zone; then each bag was emptied simultaneously at the time of injection.
The grab samples were collected using a hand-held, scissors-type box grab sampler with a sample area of 7.5 x 12.5 cm and a sampling depth of 2.1 cm. The samples were transferred to plastic bags and labeled with date and grid location. Grab and core samplers were designed both for simplicity of operation in the surf zone and for recovery of sediment cores with undisturbed vertical structure, and are described in Waldorf and Zampol (in prep.).
In the period between injection of tracer and grab sampling, the suspended load was sampled at a number of points across the surf zone. At each sample location a vertical series of 3 or 4 suspended load samples was taken using a "water-coring device" (Waldorf and Zampol, in prep.). The swash was sampled with a mechanically closed "plastic bag" sampler.
After fresh water rinsing and drying, 10 gram sub-samples from each grab sample were spread to a single-grain-layer thickness on a counting grid. The fluorescent tracer grains in the sample were counted using a long-wave ultraviolet light in a completely darkened room. Using these techniques, tracer concentrations as low as 100 grains/kg could be measured.
Fig. 5-7 Representative suspended load, profiles, and sampling locations for 11 March 77.
Fig. 5-8 Representative suspended load, profiles, and sampling locations for 6 December 78.
6. DISCUSSION
The bed and suspended load measurements presented here were made under the most intensely monitored forcing conditions yet reported for moderately high energy, dissipative beaches (gentle slope, wide surf zone). As a consequence more surf zone variables and their ranges are known than for previous field investigations. The measurements also showed some trends suggesting that previous simplistic models and methods of calculation should be revised.
FORCING FUNCTIONS
The surf zone width $X_b$ varies with breaker height and type, and a rigorous definition is not generally possible. However, the old concept of a "significant surf zone width" determined by the average of the one-third highest breakers seems appropriate from the standpoint of energy and momentum fluxes.
The complex nature of the longshore and the on-offshore currents was clearly demonstrated by the three methods of current measurement employed, and portrayed schematically in Figure 4-3. The strongest currents were always near the midpoint of the surf zone, and no dyed water passed seaward of the breakpoint except in rip currents. Rip currents may be stationary (Figure 2-1), or may progress down current through the experiment area as in Figure 4-5. The effect of rip currents on the overall transport is not known, but must be considered as an important mechanism in the overall transport processes.
SEDIMENT RESPONSE
The on-offshore bimodality of the tracer layer thickness $Z_0$ was an unexpected finding, and clearly emphasizes the shortcoming of basing transport estimates on limited core information. The concepts of "tracer layer thickness" are of sufficient importance to merit much of the discussion in Section 5, and resulted in the definition given in relation (5-2). The differences between this definition and others are shown graphically in Figure 5-2. The upper curve of maximum tracer penetration more closely approximates the value of tracer layer thickness used previously by Inman et al. (1968) and Komar and Inman (1970). Since the interpretation of sand tracer data depends upon the conceptual models of the processes involved, we have interpreted the data in accordance with several models in order to demonstrate the ranges in sand transport rates to be expected. For purposes of comparison, transport estimates are plotted together with the previous data of Komar and Inman (1970) in Figure 6-1.
The cause of the difference in transport rates between the spacial and temporal sampling on 6 December 1978 is not clear. However, the persistence of rip currents moving through the sampling grid (Figure 4-5) suggests that tracer was lost offshore. This effect would be more pronounced in the spacial sampling, which is sensitive to the entire grid length, than in the temporal sampling, which was relatively close to the injection point (Figure 5-4). If this is the cause of the different transport rates, then the temporal sampling gives the more accurate value.
Suspended load measurements emphasized the complex nature of suspension processes and demonstrated the need for continuously monitoring sensors rather than the discrete samplers employed here. Even so, these data show the importance of changes in concentration as a function of the phase of the bore. Other data taken during the 1977-78 period by this laboratory indicate the importance of the type of breaking wave. Plunging waves suspend much more sediment than spilling waves, especially near the breakpoint. There also appears to be a bimodal distribution of suspension, with maxima in the swash zone and near the breakpoint. The ratio of suspended transport rate to tracer-determined transport rate is 10 to 20% on 11 March 1977 and 6 December 1978. Thus, a comprehensive model of longshore transport as well as on-offshore transport will need to incorporate suspended load as a fundamental component of the total load.
REFERENCES
Aibulatov, N.A., 1961, "Quelques données sur le transfert des sédiments sableux le long d'un littoral, obtenues à l'aide de luminophores," Cahiers Océanogr., 13:292-300.
Bowen, A.J., 1969, "The generation of longshore currents on a plane beach," Jour. of Marine Research, 27, 2:125-206.
Capon, J., 1969, "High resolution frequency-wavenumber spectrum analysis," Proc. IEEE, 57:1408-1418.
Chamberlain, T.K., 1960, "Mechanics of mass sediment transport in Scripps Submarine Canyon," Ph.D. thesis, Univ. of Calif., San Diego, 200 pp.
Davis, R.E. and L.A. Regier, 1977, "Methods for estimating wave spectra," Jour. of Marine Research, 35, 3:454-477.
Gable, C.G., ed., 1979, "Report on data from the Nearshore Sediment Transport Study experiment at Torrey Pines Beach, Calif., November-December 1978," Institute of Marine Resources, IMR Reference No. 79-8.
Guza, R.T. and E.B. Thornton, 1978, "Variability of longshore currents," Proc. 16th Coastal Eng. Conf., Amer. Soc. Civil Eng., 756-775.
___________, 1980, "Local and shoaled comparisons of sea surface elevations, pressures and velocities," Jour. Geophys. Res., 85, C3:1524-30.
Ingle, J.C., Jr., 1966, "The movement of beach sand," Developments in Sedimentology, 5, Elsevier Publ. Co., New York, 221 pp.
Inman, D.L., 1952, "Measures for describing the size distribution of sediments," Jour. Sed. Pet., 22:125-145.
__________ and T.K. Chamberlain, 1959, "Tracing beach sand movement with irradiated quartz," Jour. Geophys. Res., 64, 41-47.
__________, P.D. Komar and A.J. Bowen, 1968, "Longshore transport of sand," Proc. 11th Coastal Eng. Conf., Amer. Soc. of Civil Eng., 1:298-306.
__________ and N. Nasu, 1956, "Orbital velocity associated with wave action near the breaker zone," Beach Erosion Board, Tech Memo 79, 72 pp.
__________ and W.H. Quinn, 1952, "Currents in the surf zone," Proc. 2nd Coastal Eng. Conf., Council on Wave Research, Univ. of Cal., 24-36.
__________, R.J. Tait and C.E. Nordstrom, 1971, "Mixing in the surf zone," Jour. Geophys. Res., 76, 15:3493-3514.
Komar, P.D. and D.L. Inman, 1970, "Longshore sand transport on beaches," Jour. Geophys. Res., 75, 30:5914-27.
Longuet-Higgins, M.S., 1972, "Recent progress in the study of longshore currents," Waves on Beaches and Resulting Sediment Transport, Academic Press, New York and London, 203-243.
Lowe, R.L., D.L. Inman and B.M. Brush, 1973, "Simultaneous data system for instrumenting the shelf," Proc. 13th Coastal Eng. Conf., Amer. Soc. of Civil Eng., 95-112.
Munk, W.H., 1949, "The solitary wave theory and its application to surf problems," Ann. N.Y. Acad. Sci., 51:376-424.
Seymour, R.J. and C.G. Gable, 1980, "Nearshore Sediment Transport Study Experiments," Proc. 17th Coastal Eng. Conf., Amer. Soc. of Civil Engrs., in press.
Shepard, F.P. and D.L. Inman, 1950, "Nearshore water circulation related to bottom topography and wave refraction," Trans., Amer. Geophys. Union, 31, 2:196-212.
State of Calif., 1969, "Interim Report on Study of Beach Nourishment Along the Southern Calif. Coastline," Dept. of Water Resources, Southern District, Sacramento, Calif.
Whitham, G.B., 1974, Linear and Nonlinear Waves, John Wiley and Sons, New York, 636 pp.